
[RESEND] kvm: make kvm_vcpu_(un)map dependency on CONFIG_HAS_IOMEM explicit

Message ID 20190520164418.06D1CE0184@unicorn.suse.cz (mailing list archive)
State New, archived
Series [RESEND] kvm: make kvm_vcpu_(un)map dependency on CONFIG_HAS_IOMEM explicit

Commit Message

Michal Kubecek May 20, 2019, 4:44 p.m. UTC
The recently introduced functions kvm_vcpu_map() and kvm_vcpu_unmap() call
memremap() and memunmap(), which are only available if HAS_IOMEM is enabled,
but this dependency is not explicit, so the build fails with HAS_IOMEM
disabled.

As both functions are only used on x86, where HAS_IOMEM is always enabled,
the easiest fix seems to be to provide them only when HAS_IOMEM is enabled.

Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API")
Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
---
 include/linux/kvm_host.h | 6 +++++-
 virt/kvm/kvm_main.c      | 2 ++
 2 files changed, 7 insertions(+), 1 deletion(-)

Comments

Paolo Bonzini May 20, 2019, 5:23 p.m. UTC | #1
On 20/05/19 18:44, Michal Kubecek wrote:
> The recently introduced functions kvm_vcpu_map() and kvm_vcpu_unmap() call
> memremap() and memunmap(), which are only available if HAS_IOMEM is enabled,
> but this dependency is not explicit, so the build fails with HAS_IOMEM
> disabled.
> 
> As both functions are only used on x86, where HAS_IOMEM is always enabled,
> the easiest fix seems to be to provide them only when HAS_IOMEM is enabled.
> 
> Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API")
> Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
> ---

Thank you very much.  However, it's better if only the memremap part is
hidden behind CONFIG_HAS_IOMEM.  I'll send a patch tomorrow and have it
reach Linus by Wednesday at the latest.

There is actually nothing specific to CONFIG_HAS_IOMEM in them;
basically, the functionality we want is remap_pfn_range() but without a
VMA.  However, it's for a niche use case where KVM guest memory is
mmap-ed from /dev/mem, and it's okay if for now that part remains
disabled on s390.

Paolo
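
A minimal sketch of the alternative Paolo describes: guard only the
memremap() fallback inside the mapping helper, so that kvm_vcpu_map()
keeps working for ordinary struct-page-backed memory even when
HAS_IOMEM is off. Names follow the quoted patch context below; the
actual follow-up commit may be structured differently.

/*
 * Sketch only, not the upstream fix: compile out just the memremap()
 * path used for PFNs without a struct page, instead of hiding the
 * whole kvm_vcpu_map()/kvm_vcpu_unmap() API.
 */
static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn,
			 struct kvm_host_map *map)
{
	kvm_pfn_t pfn;
	void *hva = NULL;
	struct page *page = KVM_UNMAPPED_PAGE;

	pfn = gfn_to_pfn_memslot(slot, gfn);
	if (is_error_noslot_pfn(pfn))
		return -EINVAL;

	if (pfn_valid(pfn)) {
		/* Normal RAM: kmap() does not need HAS_IOMEM. */
		page = pfn_to_page(pfn);
		hva = kmap(page);
#ifdef CONFIG_HAS_IOMEM
	} else {
		/* PFN without a struct page, e.g. guest memory mmap-ed from /dev/mem. */
		hva = memremap(pfn_to_hpa(pfn), PAGE_SIZE, MEMREMAP_WB);
#endif
	}

	if (!hva)
		return -EFAULT;

	map->page = page;
	map->hva = hva;
	map->pfn = pfn;
	map->gfn = gfn;

	return 0;
}

With HAS_IOMEM disabled, the else branch simply disappears, hva stays
NULL for such PFNs, and the map request fails with -EFAULT instead of
the whole API being compiled out.
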
Bjorn Andersson May 20, 2019, 10:45 p.m. UTC | #2
On Mon, May 20, 2019 at 9:44 AM Michal Kubecek <mkubecek@suse.cz> wrote:
>
> The recently introduced functions kvm_vcpu_map() and kvm_vcpu_unmap() call
> memremap() and memunmap(), which are only available if HAS_IOMEM is enabled,
> but this dependency is not explicit, so the build fails with HAS_IOMEM
> disabled.
>
> As both functions are only used on x86, where HAS_IOMEM is always enabled,
> the easiest fix seems to be to provide them only when HAS_IOMEM is enabled.
>
> Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API")
> Signed-off-by: Michal Kubecek <mkubecek@suse.cz>

Hi Michal,

I see the same build issue on arm64, and since CONFIG_HAS_IOMEM is set
there, this patch has no effect on solving it. Instead, I had to
include linux/io.h in kvm_main.c to make it compile.

Regards,
Bjorn
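
What Bjorn describes is a separate compile-time failure; the fix he
mentions is a one-line include so that kvm_main.c sees the declarations
of memremap()/memunmap(). A sketch, assuming linux/io.h is the right
header (the mainline fix, commit c011d23ba046 mentioned downthread, may
use a different include or placement):

/*
 * Sketch of the arm64 compile fix Bjorn mentions: make the
 * memremap()/memunmap() declarations visible in kvm_main.c.
 */
#include <linux/io.h>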

> ---
>  include/linux/kvm_host.h | 6 +++++-
>  virt/kvm/kvm_main.c      | 2 ++
>  2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 79fa4426509c..371d68fef5e1 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -759,9 +759,13 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
>  struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
>  kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
>  kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
> +
> +#ifdef CONFIG_HAS_IOMEM
>  int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
> -struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
>  void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
> +#endif
> +
> +struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
>  unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
>  unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
>  int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0d13d9d125d..4a2c813e75d6 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1743,6 +1743,7 @@ struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
>  }
>  EXPORT_SYMBOL_GPL(gfn_to_page);
>
> +#ifdef CONFIG_HAS_IOMEM
>  static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn,
>                          struct kvm_host_map *map)
>  {
> @@ -1806,6 +1807,7 @@ void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
>         map->page = NULL;
>  }
>  EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
> +#endif /* CONFIG_HAS_IOMEM */
>
>  struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn)
>  {
> --
> 2.21.0
>
Michal Kubecek May 21, 2019, 5:51 a.m. UTC | #3
On Mon, May 20, 2019 at 03:45:29PM -0700, Bjorn Andersson wrote:
> On Mon, May 20, 2019 at 9:44 AM Michal Kubecek <mkubecek@suse.cz> wrote:
> >
> > The recently introduced functions kvm_vcpu_map() and kvm_vcpu_unmap() call
> > memremap() and memunmap(), which are only available if HAS_IOMEM is enabled,
> > but this dependency is not explicit, so the build fails with HAS_IOMEM
> > disabled.
> >
> > As both functions are only used on x86, where HAS_IOMEM is always enabled,
> > the easiest fix seems to be to provide them only when HAS_IOMEM is enabled.
> >
> > Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API")
> > Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
> 
> Hi Michal,
> 
> I see the same build issue on arm64, and since CONFIG_HAS_IOMEM is set
> there, this patch has no effect on solving it. Instead, I had to
> include linux/io.h in kvm_main.c to make it compile.

This sounds like a different problem, one which was already resolved in
mainline by commit c011d23ba046 ("kvm: fix compilation on aarch64"),
present in v5.2-rc1. The issue I'm trying to address is a link-time
failure (an unresolved reference to memremap()/memunmap()) when
CONFIG_HAS_IOMEM is disabled (in our case it affects a special
minimalistic s390x config for zfcpdump).

Michal
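
The symptom Michal refers to is thus a linker error rather than a
compiler error; it looks roughly like this (an illustrative
reconstruction, not a log captured from the zfcpdump build):

ld: virt/kvm/kvm_main.o: in function `kvm_vcpu_map':
kvm_main.c:(.text+0x...): undefined reference to `memremap'
ld: virt/kvm/kvm_main.o: in function `kvm_vcpu_unmap':
kvm_main.c:(.text+0x...): undefined reference to `memunmap'
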
Michal Kubecek May 21, 2019, 5:53 a.m. UTC | #4
On Mon, May 20, 2019 at 07:23:43PM +0200, Paolo Bonzini wrote:
> On 20/05/19 18:44, Michal Kubecek wrote:
> > The recently introduced functions kvm_vcpu_map() and kvm_vcpu_unmap() call
> > memremap() and memunmap(), which are only available if HAS_IOMEM is enabled,
> > but this dependency is not explicit, so the build fails with HAS_IOMEM
> > disabled.
> > 
> > As both functions are only used on x86, where HAS_IOMEM is always enabled,
> > the easiest fix seems to be to provide them only when HAS_IOMEM is enabled.
> > 
> > Fixes: e45adf665a53 ("KVM: Introduce a new guest mapping API")
> > Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
> > ---
> 
> Thank you very much.  However, it's better if only the memremap part is
> hidden behind CONFIG_HAS_IOMEM.  I'll send a patch tomorrow and have it
> reach Linus by Wednesday at the latest.

That sounds like a better solution. As I'm not familiar with the code,
I didn't want to take any risks and suggested the easiest way around it.

Michal

> There is actually nothing specific to CONFIG_HAS_IOMEM in them;
> basically, the functionality we want is remap_pfn_range() but without a
> VMA.  However, it's for a niche use case where KVM guest memory is
> mmap-ed from /dev/mem, and it's okay if for now that part remains
> disabled on s390.
> 
> Paolo

Patch

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 79fa4426509c..371d68fef5e1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -759,9 +759,13 @@  struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
+
+#ifdef CONFIG_HAS_IOMEM
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
-struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
 void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty);
+#endif
+
+struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn);
 unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *writable);
 int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, int offset,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f0d13d9d125d..4a2c813e75d6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1743,6 +1743,7 @@  struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_page);
 
+#ifdef CONFIG_HAS_IOMEM
 static int __kvm_map_gfn(struct kvm_memory_slot *slot, gfn_t gfn,
 			 struct kvm_host_map *map)
 {
@@ -1806,6 +1807,7 @@  void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map,
 	map->page = NULL;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_unmap);
+#endif /* CONFIG_HAS_IOMEM */
 
 struct page *kvm_vcpu_gfn_to_page(struct kvm_vcpu *vcpu, gfn_t gfn)
 {