
[RFC,v2,1/6] KVM: arm64: Add a helper to check if a VM has ran once

Message ID 20230211013759.3556016-2-oliver.upton@linux.dev (mailing list archive)
State New, archived
Series KVM: arm64: Userspace SMCCC call filtering

Commit Message

Oliver Upton Feb. 11, 2023, 1:37 a.m. UTC
The test_bit(...) pattern is quite a lot of keystrokes. Replace
existing callsites with a helper.

No functional change intended.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 arch/arm64/include/asm/kvm_host.h | 3 +++
 arch/arm64/kvm/pmu-emul.c         | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)

Comments

Sean Christopherson Feb. 13, 2023, 3:36 p.m. UTC | #1
On Sat, Feb 11, 2023, Oliver Upton wrote:
> The test_bit(...) pattern is quite a lot of keystrokes. Replace
> existing callsites with a helper.
> 
> No functional change intended.
> 
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
>  arch/arm64/include/asm/kvm_host.h | 3 +++
>  arch/arm64/kvm/pmu-emul.c         | 4 ++--
>  2 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 35a159d131b5..012e94bc9e4a 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1019,6 +1019,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
>  	(system_supports_32bit_el0() &&				\
>  	 !static_branch_unlikely(&arm64_mismatched_32bit_el0))
>  
> +#define kvm_vm_has_ran_once(kvm)					\

From the peanut gallery...

The ONCE part of the flag+API is unnecessary and flawed from a pedantic point of
view, e.g. if a VM has ran twice...

What about kvm_vm_has_run() to align with a similar proposed x86 API for individual
vCPUs[*], if either one ever gets moved to common code?

[*] https://lore.kernel.org/all/20230210003148.2646712-3-seanjc@google.com
Marc Zyngier Feb. 13, 2023, 3:49 p.m. UTC | #2
On Mon, 13 Feb 2023 15:36:52 +0000,
Sean Christopherson <seanjc@google.com> wrote:
> 
> On Sat, Feb 11, 2023, Oliver Upton wrote:
> > The test_bit(...) pattern is quite a lot of keystrokes. Replace
> > existing callsites with a helper.
> > 
> > No functional change intended.
> > 
> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 3 +++
> >  arch/arm64/kvm/pmu-emul.c         | 4 ++--
> >  2 files changed, 5 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 35a159d131b5..012e94bc9e4a 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1019,6 +1019,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
> >  	(system_supports_32bit_el0() &&				\
> >  	 !static_branch_unlikely(&arm64_mismatched_32bit_el0))
> >  
> > +#define kvm_vm_has_ran_once(kvm)					\
> 
> From the peanut gallery...
> 
> The ONCE part of the flag+API is unnecessary and flawed from a pedantic point of
> view, e.g. if a VM has ran twice...

Well, what I really wanted was:

kvm_vm_has_run_at_least_once_on_this_side_of_the_multiverse()

> What about kvm_vm_has_run() to align with a similar proposed x86 API for individual
> vCPUs[*], if either one ever gets moved to common code?

I think the original wording is understood by the very people who mess
with this code, most of whom are not even native English speakers. It
may not be pretty to your eyes, but hey, I'm not pretty either.

	M.

Patch

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 35a159d131b5..012e94bc9e4a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1019,6 +1019,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 	(system_supports_32bit_el0() &&				\
 	 !static_branch_unlikely(&arm64_mismatched_32bit_el0))
 
+#define kvm_vm_has_ran_once(kvm)					\
+	(test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &(kvm)->arch.flags))
+
 int kvm_trng_call(struct kvm_vcpu *vcpu);
 #ifdef CONFIG_KVM
 extern phys_addr_t hyp_mem_base;
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 24908400e190..a0fc569fdbca 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -880,7 +880,7 @@ static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id)
 	list_for_each_entry(entry, &arm_pmus, entry) {
 		arm_pmu = entry->arm_pmu;
 		if (arm_pmu->pmu.type == pmu_id) {
-			if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags) ||
+			if (kvm_vm_has_ran_once(kvm) ||
 			    (kvm->arch.pmu_filter && kvm->arch.arm_pmu != arm_pmu)) {
 				ret = -EBUSY;
 				break;
@@ -963,7 +963,7 @@ int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
 
 		mutex_lock(&kvm->lock);
 
-		if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags)) {
+		if (kvm_vm_has_ran_once(kvm)) {
 			mutex_unlock(&kvm->lock);
 			return -EBUSY;
 		}