diff mbox series

[XEN,v6,2/3] ioreq: do not build arch_vcpu_ioreq_completion() for non-VMX configurations

Message ID 04e5397ecfcdb4d5680c7d36f71c4b471004ccd0.1723110344.git.Sergiy_Kibrik@epam.com (mailing list archive)
State Superseded
Series x86: make CPU virtualisation support configurable

Commit Message

Sergiy Kibrik Aug. 8, 2024, 10:10 a.m. UTC
From: Xenia Ragiadakou <burzalodowa@gmail.com>

VIO_realmode_completion is specific to VMX real mode, and thus
arch_vcpu_ioreq_completion() has actual work to do only in VMX-enabled builds;
for all other x86 and Arm build configurations it is basically a stub.

Here a separate configuration option, ARCH_IOREQ_COMPLETION, is introduced that tells
whether the platform we're building for requires any specific ioreq completion
handling. As of now only VMX has such a requirement, so the option is selected
by INTEL_VMX; for other configurations a generic default stub is provided
(it is Arm's version of arch_vcpu_ioreq_completion() moved to a common header).

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Julien Grall <julien@xen.org>
CC: Jan Beulich <jbeulich@suse.com>
---
changes in v6:
 - rename option ARCH_IOREQ_COMPLETION -> ARCH_VCPU_IOREQ_COMPLETION
 - add a comment briefly describing the option
changes in v5:
 - introduce ARCH_IOREQ_COMPLETION option & put arch_vcpu_ioreq_completion() under it
 - description changed
changes in v4:
 - move whole arch_vcpu_ioreq_completion() under CONFIG_VMX and remove
   ARM's variant of this handler, as Julien suggested
---
 xen/Kconfig              |  6 ++++++
 xen/arch/arm/ioreq.c     |  6 ------
 xen/arch/x86/Kconfig     |  1 +
 xen/arch/x86/hvm/ioreq.c |  2 ++
 xen/include/xen/ioreq.h  | 10 ++++++++++
 5 files changed, 19 insertions(+), 6 deletions(-)

Comments

Jan Beulich Aug. 13, 2024, 7:31 a.m. UTC | #1
On 08.08.2024 12:10, Sergiy Kibrik wrote:
> From: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> VIO_realmode_completion is specific to VMX real mode, and thus
> arch_vcpu_ioreq_completion() has actual work to do only in VMX-enabled builds;
> for all other x86 and Arm build configurations it is basically a stub.
> 
> Here a separate configuration option, ARCH_IOREQ_COMPLETION, is introduced that tells

Nit: The rename of the option wants to be reflected here, too.

> --- a/xen/Kconfig
> +++ b/xen/Kconfig
> @@ -95,4 +95,10 @@ config LTO
>  config ARCH_SUPPORTS_INT128
>  	bool
>  
> +#
> +# For platforms that require specific handling of ioreq completion events
> +#
> +config ARCH_VCPU_IOREQ_COMPLETION
> +	bool

If already you add a comment here, then it similarly wants to disambiguate
things by saying "per-vCPU ioreq completion events" or something along these
lines.

> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -111,7 +111,17 @@ void ioreq_domain_init(struct domain *d);
>  int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
>  
>  bool arch_ioreq_complete_mmio(void);
> +
> +#ifdef CONFIG_ARCH_VCPU_IOREQ_COMPLETION
>  bool arch_vcpu_ioreq_completion(enum vio_completion completion);
> +#else
> +static inline bool arch_vcpu_ioreq_completion(enum vio_completion completion)
> +{
> +    ASSERT_UNREACHABLE();
> +    return true;
> +}

My prior comment here remains: Despite pre-existing behavior being to return
"true" here, I question that to be in line with coding-best-practices.pandoc.
Imo the generalization of the stub is a good opportunity to adjust that. But
yes, it could also be done in a separate change. If you really don't want to
do so right here, then
Acked-by: Jan Beulich <jbeulich@suse.com>
with the two cosmetic adjustments (which likely could also be done while
committing).

Jan
Sergiy Kibrik Aug. 16, 2024, 10:04 a.m. UTC | #2
13.08.24 10:31, Jan Beulich:
>> --- a/xen/include/xen/ioreq.h
>> +++ b/xen/include/xen/ioreq.h
>> @@ -111,7 +111,17 @@ void ioreq_domain_init(struct domain *d);
>>   int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
>>   
>>   bool arch_ioreq_complete_mmio(void);
>> +
>> +#ifdef CONFIG_ARCH_VCPU_IOREQ_COMPLETION
>>   bool arch_vcpu_ioreq_completion(enum vio_completion completion);
>> +#else
>> +static inline bool arch_vcpu_ioreq_completion(enum vio_completion completion)
>> +{
>> +    ASSERT_UNREACHABLE();
>> +    return true;
>> +}
> My prior comment here remains: Despite pre-existing behavior being to return
> "true" here, I question that to be in line with coding-best-practices.pandoc.
> Imo the generalization of the stub is a good opportunity to adjust that. But
> yes, it could also be done in a separate change. If you really don't want to
> do so right here, then
> Acked-by: Jan Beulich<jbeulich@suse.com>
> with the two cosmetic adjustments (which likely could also be done while
> committing).

I'm inclined to do a separate patch that changes the return value of
arch_vcpu_ioreq_completion() depending on whether the completion was
handled or not. Such a change has little to do with this patch series and
should probably have its own separate discussion.

   -Sergiy

Patch

diff --git a/xen/Kconfig b/xen/Kconfig
index e459cdac0c..b8d08af374 100644
--- a/xen/Kconfig
+++ b/xen/Kconfig
@@ -95,4 +95,10 @@  config LTO
 config ARCH_SUPPORTS_INT128
 	bool
 
+#
+# For platforms that require specific handling of ioreq completion events
+#
+config ARCH_VCPU_IOREQ_COMPLETION
+	bool
+
 source "Kconfig.debug"
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index 5df755b48b..2e829d2e7f 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -135,12 +135,6 @@  bool arch_ioreq_complete_mmio(void)
     return false;
 }
 
-bool arch_vcpu_ioreq_completion(enum vio_completion completion)
-{
-    ASSERT_UNREACHABLE();
-    return true;
-}
-
 /*
  * The "legacy" mechanism of mapping magic pages for the IOREQ servers
  * is x86 specific, so the following hooks don't need to be implemented on Arm:
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 7ef5c8bc48..74e081c7bd 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -127,6 +127,7 @@  config AMD_SVM
 
 config INTEL_VMX
 	def_bool HVM
+	select ARCH_VCPU_IOREQ_COMPLETION
 
 config XEN_SHSTK
 	bool "Supervisor Shadow Stacks"
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 4eb7a70182..5c3d0c69aa 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -29,6 +29,7 @@  bool arch_ioreq_complete_mmio(void)
     return handle_mmio();
 }
 
+#ifdef CONFIG_ARCH_VCPU_IOREQ_COMPLETION
 bool arch_vcpu_ioreq_completion(enum vio_completion completion)
 {
     switch ( completion )
@@ -51,6 +52,7 @@  bool arch_vcpu_ioreq_completion(enum vio_completion completion)
 
     return true;
 }
+#endif
 
 static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index cd399adf17..29a17e8ff5 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -111,7 +111,17 @@  void ioreq_domain_init(struct domain *d);
 int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
 
 bool arch_ioreq_complete_mmio(void);
+
+#ifdef CONFIG_ARCH_VCPU_IOREQ_COMPLETION
 bool arch_vcpu_ioreq_completion(enum vio_completion completion);
+#else
+static inline bool arch_vcpu_ioreq_completion(enum vio_completion completion)
+{
+    ASSERT_UNREACHABLE();
+    return true;
+}
+#endif
+
 int arch_ioreq_server_map_pages(struct ioreq_server *s);
 void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
 void arch_ioreq_server_enable(struct ioreq_server *s);