
[RFC,v1] KVM: x86: Introduce macros to simplify KVM_X86_OPS static calls

Message ID 20240417150354.275353-1-wei.w.wang@intel.com (mailing list archive)
State New
Series [RFC,v1] KVM: x86: Introduce macros to simplify KVM_X86_OPS static calls

Commit Message

Wang, Wei W April 17, 2024, 3:03 p.m. UTC
Introduce two new macros, KVM_X86_SC() and KVM_X86_SCC(), to streamline
the usage of KVM_X86_OPS static calls. The current implementation of these
calls is verbose and can lead to alignment challenges due to the two pairs
of parentheses. This makes the code susceptible to exceeding the "80
columns per single line of code" limit as defined in the coding-style
document. The two macros are added to improve code readability and
maintainability, while adhering to the coding style guidelines.
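
For example, with the macros, a call site such as:

	static_call(kvm_x86_get_segment_base)(vcpu, VCPU_SREG_CS);

shrinks to:

	KVM_X86_SC(get_segment_base, vcpu, VCPU_SREG_CS);

(both forms are taken from the trace.h hunk below).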

Please note that this RFC only updates a few callsites for demonstration
purposes. If the approach looks good, all callsites will be updated in
the next version.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  3 +++
 arch/x86/kvm/lapic.c            | 15 ++++++++-------
 arch/x86/kvm/trace.h            |  3 ++-
 arch/x86/kvm/x86.c              |  8 ++++----
 4 files changed, 17 insertions(+), 12 deletions(-)


base-commit: 49ff3b4aec51e3abfc9369997cc603319b02af9a

Comments

Sean Christopherson April 17, 2024, 4:26 p.m. UTC | #1
On Wed, Apr 17, 2024, Wei Wang wrote:
> Introduce two new macros, KVM_X86_SC() and KVM_X86_SCC(), to streamline
> the usage of KVM_X86_OPS static calls. The current implementation of these
> calls is verbose and can lead to alignment challenges due to the two pairs
> of parentheses. This makes the code susceptible to exceeding the "80
> columns per single line of code" limit as defined in the coding-style
> document. The two macros are added to improve code readability and
> maintainability, while adhering to the coding style guidelines.

Heh, I've considered something similar on multiple occasions.  Not because
the verbosity bothers me, but because I often search for exact "word" matches
when looking for function usage and the kvm_x86_ prefix trips me up.
 
> Please note that this RFC only updates a few callsites for demonstration
> purposes. If the approach looks good, all callsites will be updated in
> the next version.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  3 +++
>  arch/x86/kvm/lapic.c            | 15 ++++++++-------
>  arch/x86/kvm/trace.h            |  3 ++-
>  arch/x86/kvm/x86.c              |  8 ++++----
>  4 files changed, 17 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 6efd1497b026..42f6450c10ec 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1856,6 +1856,9 @@ extern struct kvm_x86_ops kvm_x86_ops;
>  	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
>  #define KVM_X86_OP_OPTIONAL KVM_X86_OP
>  #define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
> +
> +#define KVM_X86_SC(func, ...) static_call(kvm_x86_##func)(__VA_ARGS__)
> +#define KVM_X86_SCC(func, ...) static_call_cond(kvm_x86_##func)(__VA_ARGS__)

IIRC, static_call_cond() is essentially dead code, i.e. it's the exact same as
static_call().  I believe there's details buried in a proposed series to remove
it[*].  And to not lead things astray, I verified that invoking a NULL kvm_x86_op
with static_call() does no harm (well, doesn't explode at least).

So if we add wrapper macros, I would be in favor of removing all static_call_cond()
as a prep patch so that we can have a single macro.  kvm_ops_update() already WARNs
if a mandatory hook isn't defined, so doing more checks at runtime wouldn't provide
any value.

As for the name, what about KVM_X86_CALL() instead of KVM_X86_SC()?  Two extra
characters, but should make it much more obvious what's going on for readers that
aren't familiar with the infrastructure.

And I bet we can get away with KVM_PMU_CALL() for the PMU hooks.
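
Something like this, as a rough sketch (assuming the PMU static calls keep
their existing kvm_x86_pmu_ prefix):

  #define KVM_X86_CALL(func, ...) \
	static_call(kvm_x86_##func)(__VA_ARGS__)

  #define KVM_PMU_CALL(func, ...) \
	static_call(kvm_x86_pmu_##func)(__VA_ARGS__)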

[*] https://lore.kernel.org/all/cover.1679456900.git.jpoimboe@kernel.org
Wang, Wei W April 18, 2024, 3:57 a.m. UTC | #2
On Thursday, April 18, 2024 12:27 AM, Sean Christopherson wrote:
> On Wed, Apr 17, 2024, Wei Wang wrote:
> > Introduce two new macros, KVM_X86_SC() and KVM_X86_SCC(), to
> > streamline the usage of KVM_X86_OPS static calls. The current
> > implementation of these calls is verbose and can lead to alignment
> > challenges due to the two pairs of parentheses. This makes the code
> > susceptible to exceeding the "80 columns per single line of code"
> > limit as defined in the coding-style document. The two macros are
> > added to improve code readability and maintainability, while adhering to
> the coding style guidelines.
> 
> Heh, I've considered something similar on multiple occasions.  Not because
> the verbosity bothers me, but because I often search for exact "word" matches
> when looking for function usage and the kvm_x86_ prefix trips me up.

Yeah, that's another compelling reason for the improvement.

> IIRC, static_call_cond() is essentially dead code, i.e. it's the exact same as
> static_call().  I believe there's details buried in a proposed series to remove
> it[*].  And to not lead things astray, I verified that invoking a NULL kvm_x86_op
> with static_call() does no harm (well, doesn't explode at least).
> 
> So if we add wrapper macros, I would be in favor of removing all
> static_call_cond() as a prep patch so that we can have a single macro.

Sounds good. Maybe KVM_X86_OP_OPTIONAL could now also be removed? 


> kvm_ops_update() already WARNs if a mandatory hook isn't defined, so doing
> more checks at runtime wouldn't provide any value.

> 
> As for the name, what about KVM_X86_CALL() instead of KVM_X86_SC()?  Two
> extra characters, but should make it much more obvious what's going on for
> readers that aren't familiar with the infrastructure.

I thought the macro definition was quite intuitive, and that anyone encountering
it for the first time could easily get familiar with it from the definition.
That said, KVM_X86_CALL() is fine with me, despite the fact that it doesn't
explicitly denote "static" calls.

> 
> And I bet we can get away with KVM_PMU_CALL() for the PMU hooks.

Yes, this can be covered as well.
Sean Christopherson April 18, 2024, 1:58 p.m. UTC | #3
On Thu, Apr 18, 2024, Wei W Wang wrote:
> On Thursday, April 18, 2024 12:27 AM, Sean Christopherson wrote:
> > On Wed, Apr 17, 2024, Wei Wang wrote:
> > > Introduce two new macros, KVM_X86_SC() and KVM_X86_SCC(), to
> > > streamline the usage of KVM_X86_OPS static calls. The current
> > > implementation of these calls is verbose and can lead to alignment
> > > challenges due to the two pairs of parentheses. This makes the code
> > > susceptible to exceeding the "80 columns per single line of code"
> > > limit as defined in the coding-style document. The two macros are
> > > added to improve code readability and maintainability, while adhering to
> > the coding style guidelines.
> > 
> > Heh, I've considered something similar on multiple occasions.  Not because
> > the verbosity bothers me, but because I often search for exact "word" matches
> > when looking for function usage and the kvm_x86_ prefix trips me up.
> 
> Yeah, that's another compelling reason for the improvement.
> 
> > IIRC, static_call_cond() is essentially dead code, i.e. it's the exact same as
> > static_call().  I believe there's details buried in a proposed series to remove
> > it[*].  And to not lead things astray, I verified that invoking a NULL kvm_x86_op
> > with static_call() does no harm (well, doesn't explode at least).
> > 
> > So if we add wrapper macros, I would be in favor of removing all
> > static_call_cond() as a prep patch so that we can have a single macro.
> 
> Sounds good. Maybe KVM_X86_OP_OPTIONAL could now also be removed? 

No, KVM_X86_OP_OPTIONAL() is what allows KVM to WARN if a mandatory hook isn't
defined.  Without the OPTIONAL and OPTIONAL_RET variants, KVM would need to assume
every hook is optional, and thus couldn't WARN.

  static inline void kvm_ops_update(struct kvm_x86_init_ops *ops)
  {
	memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));

#define __KVM_X86_OP(func) \
	static_call_update(kvm_x86_##func, kvm_x86_ops.func);
#define KVM_X86_OP(func) \
	WARN_ON(!kvm_x86_ops.func); __KVM_X86_OP(func)
#define KVM_X86_OP_OPTIONAL __KVM_X86_OP
#define KVM_X86_OP_OPTIONAL_RET0(func) \
	static_call_update(kvm_x86_##func, (void *)kvm_x86_ops.func ? : \
					   (void *)__static_call_return0);
#include <asm/kvm-x86-ops.h>
#undef __KVM_X86_OP

	kvm_pmu_ops_update(ops->pmu_ops);
  }

> > kvm_ops_update() already WARNs if a mandatory hook isn't defined, so doing
> > more checks at runtime wouldn't provide any value.
> 
> > 
> > As for the name, what about KVM_X86_CALL() instead of KVM_X86_SC()?  Two
> > extra characters, but should make it much more obvious what's going on for
> > readers that aren't familiar with the infrastructure.
> 
> I thought the macro definition was quite intuitive, and that anyone encountering
> it for the first time could easily get familiar with it from the definition.
> That said, KVM_X86_CALL() is fine with me, despite the fact that it doesn't
> explicitly denote "static" calls.

Repeat readers/developers will get used to KVM_X86_SC(), but for someone that has
never read KVM code before, KVM_X86_SC() is unintuitive, e.g. basically requires
looking at the implementation, especially if the reader isn't familiar with the
kernel's static call framework.

In other words, there needs to be "CALL" somewhere in the name to convey the basic
gist of the code.  And IMO, the "static" part is a low level detail that isn't
necessary to understand the core functionality of the code, so omitting it to
shorten line lengths is a worthwhile tradeoff.
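
E.g. a call site converted by this patch:

	KVM_X86_CALL(load_eoi_exitmap, vcpu, eoi_exit_bitmap);

would still expand to

	static_call(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap);

but the reader gets the gist, "call the vendor's load_eoi_exitmap hook",
without needing to know anything about the static call machinery.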
Wang, Wei W April 18, 2024, 2:20 p.m. UTC | #4
On Thursday, April 18, 2024 9:59 PM, Sean Christopherson wrote:
> On Thu, Apr 18, 2024, Wei W Wang wrote:
> > On Thursday, April 18, 2024 12:27 AM, Sean Christopherson wrote:
> > > On Wed, Apr 17, 2024, Wei Wang wrote:
> > > > Introduce two new macros, KVM_X86_SC() and KVM_X86_SCC(), to
> > > > streamline the usage of KVM_X86_OPS static calls. The current
> > > > implementation of these calls is verbose and can lead to alignment
> > > > challenges due to the two pairs of parentheses. This makes the
> > > > code susceptible to exceeding the "80 columns per single line of code"
> > > > limit as defined in the coding-style document. The two macros are
> > > > added to improve code readability and maintainability, while
> > > > adhering to
> > > the coding style guidelines.
> > >
> > > Heh, I've considered something similar on multiple occasions.  Not
> > > because the verbosity bothers me, but because I often search for
> > > exact "word" matches when looking for function usage and the kvm_x86_
> prefix trips me up.
> >
> > Yeah, that's another compelling reason for the improvement.
> >
> > > IIRC, static_call_cond() is essentially dead code, i.e. it's the
> > > exact same as static_call().  I believe there's details buried in a
> > > proposed series to remove it[*].  And to not lead things astray, I
> > > verified that invoking a NULL kvm_x86_op with static_call() does no harm
> (well, doesn't explode at least).
> > >
> > > So if we add wrapper macros, I would be in favor of removing all
> > > static_call_cond() as a prep patch so that we can have a single macro.
> >
> > Sounds good. Maybe KVM_X86_OP_OPTIONAL could now also be removed?
> 
> No, KVM_X86_OP_OPTIONAL() is what allows KVM to WARN if a mandatory
> hook isn't defined.  Without the OPTIONAL and OPTIONAL_RET variants, KVM
> would need to assume every hook is optional, and thus couldn't WARN.

Yes, KVM_X86_OP_OPTIONAL is used to enforce the definition of mandatory hooks
with WARN_ON(). But the distinction between mandatory and optional hooks has
now become ambiguous. For example, all the hooks, whether defined or undefined
(NULL), are invoked via static_call() without issue. In some sense, all hooks
could potentially be deemed optional, and the undefined ones just become NOPs
when unconditionally invoked by the kvm/x86 core code
(KVM_X86_OP_OPTIONAL_RET0 would still be needed for hooks that return a value).
Would you see any practical issues without that WARN_ON()?
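
For illustration, the scenario that WARN_ON() guards against would be a vendor
module leaving a mandatory hook NULL, e.g. (hypothetical, not from this patch):

	static struct kvm_x86_ops buggy_ops __initdata = {
		.name = "buggy",
		/* .vcpu_run accidentally left unset */
	};

With the WARN_ON() in kvm_ops_update(), that mistake is flagged once at module
load; without it, every invocation of the kvm_x86_vcpu_run static call would
silently do nothing (and hand back an undefined return value, which is also
why a RET0 variant is still needed for optional hooks that return something).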
Wang, Wei W April 18, 2024, 3:22 p.m. UTC | #5
On Thursday, April 18, 2024 10:20 PM, Wang, Wei W wrote:
> On Thursday, April 18, 2024 9:59 PM, Sean Christopherson wrote:
> > On Thu, Apr 18, 2024, Wei W Wang wrote:
> > > On Thursday, April 18, 2024 12:27 AM, Sean Christopherson wrote:
> > > > On Wed, Apr 17, 2024, Wei Wang wrote:
> > > > > Introduce two new macros, KVM_X86_SC() and KVM_X86_SCC(), to
> > > > > streamline the usage of KVM_X86_OPS static calls. The current
> > > > > implementation of these calls is verbose and can lead to
> > > > > alignment challenges due to the two pairs of parentheses. This
> > > > > makes the code susceptible to exceeding the "80 columns per single
> line of code"
> > > > > limit as defined in the coding-style document. The two macros
> > > > > are added to improve code readability and maintainability, while
> > > > > adhering to
> > > > the coding style guidelines.
> > > >
> > > > Heh, I've considered something similar on multiple occasions.
> > > > Not because the verbosity bothers me, but because I often search
> > > > for exact "word" matches when looking for function usage and the
> > > > kvm_x86_
> > prefix trips me up.
> > >
> > > Yeah, that's another compelling reason for the improvement.
> > >
> > > > IIRC, static_call_cond() is essentially dead code, i.e. it's the
> > > > exact same as static_call().  I believe there's details buried in
> > > > a proposed series to remove it[*].  And to not lead things astray,
> > > > I verified that invoking a NULL kvm_x86_op with static_call() does
> > > > no harm
> > (well, doesn't explode at least).
> > > >
> > > > So if we add wrapper macros, I would be in favor of removing all
> > > > static_call_cond() as a prep patch so that we can have a single macro.
> > >
> > > Sounds good. Maybe KVM_X86_OP_OPTIONAL could now also be removed?
> >
> > No, KVM_X86_OP_OPTIONAL() is what allows KVM to WARN if a mandatory
> > hook isn't defined.  Without the OPTIONAL and OPTIONAL_RET variants,
> > KVM would need to assume every hook is optional, and thus couldn't WARN.
> 
> Yes, KVM_X86_OP_OPTIONAL is used to enforce the definition of mandatory
> hooks with WARN_ON(). 

I meant the KVM_X86_OP in the current implementation you shared.
If we don't need KVM_X86_OP_OPTIONAL(), the WARN_ON() in KVM_X86_OP
will need to be removed so that all hooks can be optional.
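
I.e., roughly this in the kvm_ops_update() you quoted (sketch only):

	/* No load-time check: every hook becomes implicitly optional. */
	#define KVM_X86_OP(func) __KVM_X86_OP(func)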

> But the distinction between mandatory and optional
> hooks has now become ambiguous. For example, all the hooks, whether
> defined or undefined (NULL), are invoked via static_call() without issue.
> In some sense, all hooks could potentially be deemed optional, and the
> undefined ones just become NOPs when unconditionally invoked by the
> kvm/x86 core code
> (KVM_X86_OP_OPTIONAL_RET0 would still be needed for hooks that return a value).
> Would you see any practical issues without that WARN_ON()?

Patch

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6efd1497b026..42f6450c10ec 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1856,6 +1856,9 @@  extern struct kvm_x86_ops kvm_x86_ops;
 	DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
 #define KVM_X86_OP_OPTIONAL KVM_X86_OP
 #define KVM_X86_OP_OPTIONAL_RET0 KVM_X86_OP
+
+#define KVM_X86_SC(func, ...) static_call(kvm_x86_##func)(__VA_ARGS__)
+#define KVM_X86_SCC(func, ...) static_call_cond(kvm_x86_##func)(__VA_ARGS__)
 #include <asm/kvm-x86-ops.h>
 
 int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops);
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index ebf41023be38..e819dbae8d79 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -728,8 +728,8 @@  static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
 	if (unlikely(apic->apicv_active)) {
 		/* need to update RVI */
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
-		static_call_cond(kvm_x86_hwapic_irr_update)(apic->vcpu,
-							    apic_find_highest_irr(apic));
+		KVM_X86_SCC(hwapic_irr_update,
+			    apic->vcpu, apic_find_highest_irr(apic));
 	} else {
 		apic->irr_pending = false;
 		kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
@@ -800,7 +800,7 @@  static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
 	 * and must be left alone.
 	 */
 	if (unlikely(apic->apicv_active))
-		static_call_cond(kvm_x86_hwapic_isr_update)(apic_find_highest_isr(apic));
+		KVM_X86_SCC(hwapic_isr_update, apic_find_highest_isr(apic));
 	else {
 		--apic->isr_count;
 		BUG_ON(apic->isr_count < 0);
@@ -2112,7 +2112,7 @@  static bool start_hv_timer(struct kvm_lapic *apic)
 	if (!ktimer->tscdeadline)
 		return false;
 
-	if (static_call(kvm_x86_set_hv_timer)(vcpu, ktimer->tscdeadline, &expired))
+	if (KVM_X86_SC(set_hv_timer, vcpu, ktimer->tscdeadline, &expired))
 		return false;
 
 	ktimer->hv_timer_in_use = true;
@@ -3041,9 +3041,10 @@  int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
 	kvm_lapic_set_reg(apic, APIC_TMCCT, 0);
 	kvm_apic_update_apicv(vcpu);
 	if (apic->apicv_active) {
-		static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
-		static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
-		static_call_cond(kvm_x86_hwapic_isr_update)(apic_find_highest_isr(apic));
+		KVM_X86_SCC(apicv_post_state_restore, vcpu);
+		KVM_X86_SCC(hwapic_irr_update,
+			    vcpu, apic_find_highest_irr(apic));
+		KVM_X86_SCC(hwapic_isr_update, apic_find_highest_isr(apic));
 	}
 	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	if (ioapic_in_kernel(vcpu->kvm))
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index c6b4b1728006..a51f6c2d43f1 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -828,7 +828,8 @@  TRACE_EVENT(kvm_emulate_insn,
 		),
 
 	TP_fast_assign(
-		__entry->csbase = static_call(kvm_x86_get_segment_base)(vcpu, VCPU_SREG_CS);
+		__entry->csbase = KVM_X86_SC(get_segment_base,
+					     vcpu, VCPU_SREG_CS);
 		__entry->len = vcpu->arch.emulate_ctxt->fetch.ptr
 			       - vcpu->arch.emulate_ctxt->fetch.data;
 		__entry->rip = vcpu->arch.emulate_ctxt->_eip - __entry->len;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ebcc12d1e1de..146b88ded5d1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2008,7 +2008,7 @@  static int complete_emulated_rdmsr(struct kvm_vcpu *vcpu)
 
 static int complete_fast_msr_access(struct kvm_vcpu *vcpu)
 {
-	return static_call(kvm_x86_complete_emulated_msr)(vcpu, vcpu->run->msr.error);
+	return KVM_X86_SC(complete_emulated_msr, vcpu, vcpu->run->msr.error);
 }
 
 static int complete_fast_rdmsr(struct kvm_vcpu *vcpu)
@@ -7452,7 +7452,7 @@  static void kvm_init_msr_lists(void)
 	}
 
 	for (i = 0; i < ARRAY_SIZE(emulated_msrs_all); i++) {
-		if (!static_call(kvm_x86_has_emulated_msr)(NULL, emulated_msrs_all[i]))
+		if (!KVM_X86_SC(has_emulated_msr, NULL, emulated_msrs_all[i]))
 			continue;
 
 		emulated_msrs[num_emulated_msrs++] = emulated_msrs_all[i];
@@ -10703,7 +10703,7 @@  static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu)
 		bitmap_or((ulong *)eoi_exit_bitmap,
 			  vcpu->arch.ioapic_handled_vectors,
 			  to_hv_synic(vcpu)->vec_bitmap, 256);
-		static_call_cond(kvm_x86_load_eoi_exitmap)(vcpu, eoi_exit_bitmap);
+		KVM_X86_SCC(load_eoi_exitmap, vcpu, eoi_exit_bitmap);
 		return;
 	}
 #endif
@@ -13497,7 +13497,7 @@  void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
 	 * when the irq is masked/disabled or the consumer side (KVM
 	 * int this case doesn't want to receive the interrupts.
 	*/
-	ret = static_call(kvm_x86_pi_update_irte)(irqfd->kvm, prod->irq, irqfd->gsi, 0);
+	ret = KVM_X86_SC(pi_update_irte, irqfd->kvm, prod->irq, irqfd->gsi, 0);
 	if (ret)
 		printk(KERN_INFO "irq bypass consumer (token %p) unregistration"
 		       " fails: %d\n", irqfd->consumer.token, ret);