diff mbox series

LoongArch: KVM: Fix build due to API changes

Message ID 20231115090735.2404866-1-chenhuacai@loongson.cn (mailing list archive)
State New, archived
Series LoongArch: KVM: Fix build due to API changes

Commit Message

Huacai Chen Nov. 15, 2023, 9:07 a.m. UTC
Commit 8569992d64b8f750e34b7858eac ("KVM: Use gfn instead of hva for
mmu_notifier_retry") replaces mmu_invalidate_retry_hva() usage with
mmu_invalidate_retry_gfn() for x86; LoongArch also needs similar changes
to fix its build.

Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
---
 arch/loongarch/kvm/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
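For readers unfamiliar with the helper being renamed here: the sketch below is a hypothetical, user-space rendition of the sequence-count retry idea that mmu_invalidate_retry_gfn() implements. The struct and function names are made up for illustration (the real state lives in struct kvm under mmu_lock); the point is only the shape of the check: a caller snapshots the invalidation sequence count before its page lookup, then retries if the count has moved or an in-flight invalidation covers the gfn it is about to map.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for KVM's invalidation bookkeeping. */
struct mmu_state {
	unsigned long seq;          /* bumped when an invalidation finishes */
	bool inv_in_progress;       /* an invalidation is currently running */
	unsigned long start, end;   /* gfn range being invalidated          */
};

/* Sketch of the retry check: the caller snapshotted `mmu_seq` before
 * its page lookup.  Retry if the snapshot is stale, or if an in-flight
 * invalidation covers the gfn we are about to map. */
static bool retry_gfn(const struct mmu_state *m,
		      unsigned long mmu_seq, unsigned long gfn)
{
	if (m->inv_in_progress && gfn >= m->start && gfn < m->end)
		return true;
	return m->seq != mmu_seq;
}
```

The switch from the _hva() to the _gfn() variant changes only which address space the range-overlap check uses, guest frame numbers instead of host virtual addresses, which is why the LoongArch call site can pass the gfn it already has.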

Comments

Randy Dunlap Nov. 15, 2023, 7:47 p.m. UTC | #1
On 11/15/23 01:07, Huacai Chen wrote:
> Commit 8569992d64b8f750e34b7858eac ("KVM: Use gfn instead of hva for
> mmu_notifier_retry") replaces mmu_invalidate_retry_hva() usage with
> mmu_invalidate_retry_gfn() for x86; LoongArch also needs similar changes
> to fix its build.
> 
> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Acked-by: Randy Dunlap <rdunlap@infradead.org>

Thanks.

> ---
>  arch/loongarch/kvm/mmu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
> index 80480df5f550..9463ebecd39b 100644
> --- a/arch/loongarch/kvm/mmu.c
> +++ b/arch/loongarch/kvm/mmu.c
> @@ -627,7 +627,7 @@ static bool fault_supports_huge_mapping(struct kvm_memory_slot *memslot,
>   *
>   * There are several ways to safely use this helper:
>   *
> - * - Check mmu_invalidate_retry_hva() after grabbing the mapping level, before
> + * - Check mmu_invalidate_retry_gfn() after grabbing the mapping level, before
>   *   consuming it.  In this case, mmu_lock doesn't need to be held during the
>   *   lookup, but it does need to be held while checking the MMU notifier.
>   *
> @@ -807,7 +807,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>  
>  	/* Check if an invalidation has taken place since we got pfn */
>  	spin_lock(&kvm->mmu_lock);
> -	if (mmu_invalidate_retry_hva(kvm, mmu_seq, hva)) {
> +	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
>  		/*
>  		 * This can happen when mappings are changed asynchronously, but
>  		 * also synchronously if a COW is triggered by
Huacai Chen Nov. 24, 2023, 2:22 p.m. UTC | #2
Hi, Paolo,

On Thu, Nov 16, 2023 at 3:48 AM Randy Dunlap <rdunlap@infradead.org> wrote:
>
>
>
> On 11/15/23 01:07, Huacai Chen wrote:
> > Commit 8569992d64b8f750e34b7858eac ("KVM: Use gfn instead of hva for
> > mmu_notifier_retry") replaces mmu_invalidate_retry_hva() usage with
> > mmu_invalidate_retry_gfn() for x86; LoongArch also needs similar changes
> > to fix its build.
> >
> > Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
>
> Reported-by: Randy Dunlap <rdunlap@infradead.org>
> Tested-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
> Acked-by: Randy Dunlap <rdunlap@infradead.org>
I think this patch should go through your kvm tree rather than the
loongarch tree: the loongarch tree is currently based on 6.7, so this
patch fixes a build error in the kvm tree but would introduce a build
error in the loongarch tree.


Huacai

Randy Dunlap Dec. 16, 2023, 5:08 a.m. UTC | #3
Hi,

Someone please merge this patch...
Thanks.


Huacai Chen Dec. 19, 2023, 3:06 a.m. UTC | #4
Hi, Randy,

On Sat, Dec 16, 2023 at 1:08 PM Randy Dunlap <rdunlap@infradead.org> wrote:
>
> Hi,
>
> Someone please merge this patch...
> Thanks.
I prepared the loongarch-kvm changes for 6.8 on a 6.7-rc6 base [1].
If I merge this patch, the loongarch-next branch will fail to build,
so I think this patch should be merged through Paolo's next branch in
his kvm tree.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson.git/log/?h=loongarch-next

Huacai

Stephen Rothwell Dec. 20, 2023, 3:40 a.m. UTC | #5
Hi all,

On Fri, 15 Dec 2023 21:08:06 -0800 Randy Dunlap <rdunlap@infradead.org> wrote:
>
> Someone please merge this patch...

I have applied it to my merge of the kvm tree today and will keep
applying it until it is applied to the kvm tree ...

It looks like this:

From: Huacai Chen <chenhuacai@loongson.cn>
To: Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhuacai@kernel.org>,
	Tianrui Zhao <zhaotianrui@loongson.cn>,
	Bibo Mao <maobibo@loongson.cn>
Cc: kvm@vger.kernel.org,
	loongarch@lists.linux.dev,
	linux-kernel@vger.kernel.org,
	Xuerui Wang <kernel@xen0n.name>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Huacai Chen <chenhuacai@loongson.cn>
Subject: [PATCH] LoongArch: KVM: Fix build due to API changes
Date: Wed, 15 Nov 2023 17:07:35 +0800

Commit 8569992d64b8f750e34b7858eac ("KVM: Use gfn instead of hva for
mmu_notifier_retry") replaces mmu_invalidate_retry_hva() usage with
mmu_invalidate_retry_gfn() for x86; LoongArch also needs similar changes
to fix its build.

Fixes: 8569992d64b8 ("KVM: Use gfn instead of hva for mmu_notifier_retry")
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
---
 arch/loongarch/kvm/mmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 80480df5f550..9463ebecd39b 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -627,7 +627,7 @@ static bool fault_supports_huge_mapping(struct kvm_memory_slot *memslot,
  *
  * There are several ways to safely use this helper:
  *
- * - Check mmu_invalidate_retry_hva() after grabbing the mapping level, before
+ * - Check mmu_invalidate_retry_gfn() after grabbing the mapping level, before
  *   consuming it.  In this case, mmu_lock doesn't need to be held during the
  *   lookup, but it does need to be held while checking the MMU notifier.
  *
@@ -807,7 +807,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 
 	/* Check if an invalidation has taken place since we got pfn */
 	spin_lock(&kvm->mmu_lock);
-	if (mmu_invalidate_retry_hva(kvm, mmu_seq, hva)) {
+	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
 		/*
 		 * This can happen when mappings are changed asynchronously, but
 		 * also synchronously if a COW is triggered by
Stephen Rothwell Jan. 20, 2024, 12:44 a.m. UTC | #6
Hi all,

On Wed, 20 Dec 2023 14:40:24 +1100 Stephen Rothwell <sfr@canb.auug.org.au> wrote:
>
> On Fri, 15 Dec 2023 21:08:06 -0800 Randy Dunlap <rdunlap@infradead.org> wrote:
> >
> > Someone please merge this patch...  
> 
> I have applied it to my merge of the kvm tree today and will keep
> applying it until it is applied to the kvm tree ...
> Though my Signed-off-by is not necessary if it is applied to the kvm tree.

OK, so it needed to be applied to the merge commit when Linus merged
the loongarch tree, but that appears to have been forgotten. :-(
Paolo Bonzini Jan. 26, 2024, 6:01 p.m. UTC | #7
On Wed, Nov 15, 2023 at 10:14 AM Huacai Chen <chenhuacai@loongson.cn> wrote:
>
> Commit 8569992d64b8f750e34b7858eac ("KVM: Use gfn instead of hva for
> mmu_notifier_retry") replaces mmu_invalidate_retry_hva() usage with
> mmu_invalidate_retry_gfn() for x86; LoongArch also needs similar changes
> to fix its build.
>
> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

Applied, thanks.

Paolo

Huacai Chen Jan. 27, 2024, 8:17 a.m. UTC | #8
Hi, Paolo,

On Sat, Jan 27, 2024 at 2:01 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On Wed, Nov 15, 2023 at 10:14 AM Huacai Chen <chenhuacai@loongson.cn> wrote:
> >
> > Commit 8569992d64b8f750e34b7858eac ("KVM: Use gfn instead of hva for
> > mmu_notifier_retry") replaces mmu_invalidate_retry_hva() usage with
> > mmu_invalidate_retry_gfn() for x86; LoongArch also needs similar changes
> > to fix its build.
> >
> > Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
>
> Applied, thanks.
I'm sorry, but I have already sent a PR to Linus that includes this
patch together with some other patches.

Huacai


Patch

diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
index 80480df5f550..9463ebecd39b 100644
--- a/arch/loongarch/kvm/mmu.c
+++ b/arch/loongarch/kvm/mmu.c
@@ -627,7 +627,7 @@  static bool fault_supports_huge_mapping(struct kvm_memory_slot *memslot,
  *
  * There are several ways to safely use this helper:
  *
- * - Check mmu_invalidate_retry_hva() after grabbing the mapping level, before
+ * - Check mmu_invalidate_retry_gfn() after grabbing the mapping level, before
  *   consuming it.  In this case, mmu_lock doesn't need to be held during the
  *   lookup, but it does need to be held while checking the MMU notifier.
  *
@@ -807,7 +807,7 @@  static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
 
 	/* Check if an invalidation has taken place since we got pfn */
 	spin_lock(&kvm->mmu_lock);
-	if (mmu_invalidate_retry_hva(kvm, mmu_seq, hva)) {
+	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
 		/*
 		 * This can happen when mappings are changed asynchronously, but
 		 * also synchronously if a COW is triggered by