
[v4,1/2] x86/paravirt: Change vcpu_is_preempted() arg type to long

Message ID 1487194670-6319-2-git-send-email-longman@redhat.com (mailing list archive)
State New, archived

Commit Message

Waiman Long Feb. 15, 2017, 9:37 p.m. UTC
The cpu argument in the function prototype of vcpu_is_preempted()
is changed from int to long. That makes it easier to provide a better
optimized assembly version of that function.

For Xen, vcpu_is_preempted(long) calls xen_vcpu_stolen(int), the
downcast from long to int is not a problem as vCPU number won't exceed
32 bits.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 arch/x86/include/asm/paravirt.h      | 2 +-
 arch/x86/include/asm/qspinlock.h     | 2 +-
 arch/x86/kernel/kvm.c                | 2 +-
 arch/x86/kernel/paravirt-spinlocks.c | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)
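As a rough illustration of why the wider argument helps (this sketch is not part of the patch, and per_cpu_offsets is a hypothetical stand-in for the kernel's __per_cpu_offset table): under the x86-64 SysV calling convention an 'int' argument only defines the low 32 bits of the register (%edi), while a 'long' argument defines the full %rdi, which is what hand-written assembly wants when it indexes with the incoming register directly.

#include <stdbool.h>

extern unsigned long per_cpu_offsets[];	/* hypothetical stand-in */

/* 'int' cpu: only %edi is defined; compiled code must sign-extend first. */
bool preempted_int(int cpu)
{
	return per_cpu_offsets[cpu] != 0;
}

/* 'long' cpu: the full %rdi already holds a usable 64-bit index. */
bool preempted_long(long cpu)
{
	return per_cpu_offsets[cpu] != 0;
}
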

Comments

Peter Zijlstra Feb. 16, 2017, 4:09 p.m. UTC | #1
On Wed, Feb 15, 2017 at 04:37:49PM -0500, Waiman Long wrote:
> The cpu argument in the function prototype of vcpu_is_preempted()
> is changed from int to long. That makes it easier to provide a better
> optimized assembly version of that function.
> 
> For Xen, vcpu_is_preempted(long) calls xen_vcpu_stolen(int), the
> downcast from long to int is not a problem as vCPU number won't exceed
> 32 bits.
> 

Note that because of the cast in PVOP_CALL_ARG1() this patch is
pointless.

Then again, it doesn't seem to affect code generation, so why not. Takes
away the reliance on that weird cast.
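For reference, the cast Peter refers to lives in the PVOP_CALL_ARG*() macros in arch/x86/include/asm/paravirt_types.h, which around this series look roughly like the following (approximate excerpt):

/* Every paravirt call argument is force-cast to unsigned long. */
#define PVOP_CALL_ARG1(x)		((unsigned long)(x))
#define PVOP_CALL_ARG2(x)		((unsigned long)(x))
#define PVOP_CALL_ARG3(x)		((unsigned long)(x))
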
Waiman Long Feb. 16, 2017, 9:02 p.m. UTC | #2
On 02/16/2017 11:09 AM, Peter Zijlstra wrote:
> On Wed, Feb 15, 2017 at 04:37:49PM -0500, Waiman Long wrote:
>> The cpu argument in the function prototype of vcpu_is_preempted()
>> is changed from int to long. That makes it easier to provide a better
>> optimized assembly version of that function.
>>
>> For Xen, vcpu_is_preempted(long) calls xen_vcpu_stolen(int), the
>> downcast from long to int is not a problem as vCPU number won't exceed
>> 32 bits.
>>
> Note that because of the cast in PVOP_CALL_ARG1() this patch is
> pointless.
>
> Then again, it doesn't seem to affect code generation, so why not. Takes
> away the reliance on that weird cast.

I added this patch because I am a bit uneasy about clearing the upper 32
bits of rdi and assuming that the compiler won't have a previous use of
those bits. It gives me peace of mind.

Cheers,
Longman
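For context, the concern is about the hand-written callee-save assembly added in patch 2/2 of this series, which looks roughly like the sketch below (approximate; it relies on kernel headers for __stringify() and the KVM_STEAL_TIME_preempted asm-offset):

/*
 * Uses %rdi as a full 64-bit index into __per_cpu_offset, hence the
 * desire for a well-defined 64-bit 'cpu' argument.
 */
asm(
".pushsection .text;"
".global __raw_callee_save___kvm_vcpu_is_preempted;"
".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
"__raw_callee_save___kvm_vcpu_is_preempted:"
"movq	__per_cpu_offset(,%rdi,8), %rax;"
"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
"setne	%al;"
"ret;"
".popsection");

With an 'int' prototype only %edi would be architecturally defined at the call site, which is the reliance on the PVOP_CALL_ARG1() cast discussed above.
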
Peter Zijlstra Feb. 17, 2017, 9:42 a.m. UTC | #3
On Thu, Feb 16, 2017 at 04:02:57PM -0500, Waiman Long wrote:
> On 02/16/2017 11:09 AM, Peter Zijlstra wrote:
> > On Wed, Feb 15, 2017 at 04:37:49PM -0500, Waiman Long wrote:
> >> The cpu argument in the function prototype of vcpu_is_preempted()
> >> is changed from int to long. That makes it easier to provide a better
> >> optimized assembly version of that function.
> >>
> >> For Xen, vcpu_is_preempted(long) calls xen_vcpu_stolen(int), the
> >> downcast from long to int is not a problem as vCPU number won't exceed
> >> 32 bits.
> >>
> > Note that because of the cast in PVOP_CALL_ARG1() this patch is
> > pointless.
> >
> > Then again, it doesn't seem to affect code generation, so why not. Takes
> > away the reliance on that weird cast.
> 
> I added this patch because I am a bit uneasy about clearing the upper 32
> bits of rdi and assuming that the compiler won't have a previous use of
> those bits. It gives me peace of mind.

So currently the PVOP_CALL_ARG#() macros force cast everything to
(unsigned long) anyway, but it would be good not to rely on that I
think, so yes.

Patch

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 1eea6ca..f75fbfe 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -673,7 +673,7 @@  static __always_inline void pv_kick(int cpu)
 	PVOP_VCALL1(pv_lock_ops.kick, cpu);
 }
 
-static __always_inline bool pv_vcpu_is_preempted(int cpu)
+static __always_inline bool pv_vcpu_is_preempted(long cpu)
 {
 	return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
 }
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index c343ab5..48a706f 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -34,7 +34,7 @@  static inline void queued_spin_unlock(struct qspinlock *lock)
 }
 
 #define vcpu_is_preempted vcpu_is_preempted
-static inline bool vcpu_is_preempted(int cpu)
+static inline bool vcpu_is_preempted(long cpu)
 {
 	return pv_vcpu_is_preempted(cpu);
 }
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 099fcba..85ed343 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -589,7 +589,7 @@  static void kvm_wait(u8 *ptr, u8 val)
 	local_irq_restore(flags);
 }
 
-__visible bool __kvm_vcpu_is_preempted(int cpu)
+__visible bool __kvm_vcpu_is_preempted(long cpu)
 {
 	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 6259327..8f2d1c9 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -20,7 +20,7 @@  bool pv_is_native_spin_unlock(void)
 		__raw_callee_save___native_queued_spin_unlock;
 }
 
-__visible bool __native_vcpu_is_preempted(int cpu)
+__visible bool __native_vcpu_is_preempted(long cpu)
 {
 	return false;
 }