
[delta,V13,14/14] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

Message ID 20130813200211.GA27811@linux.vnet.ibm.com (mailing list archive)
State New, archived

Commit Message

Raghavendra K T Aug. 13, 2013, 8:02 p.m. UTC
* Ingo Molnar <mingo@kernel.org> [2013-08-13 18:55:52]:

> Would be nice to have a delta fix patch against tip:x86/spinlocks, which 
> I'll then backmerge into that series via rebasing it.
> 

There was a namespace collision on the per-CPU lock_waiting variable when
both Xen and KVM are enabled. 

Perhaps this week wasn't for me. I had run randconfig 100 times in a loop
for the fix sent earlier :(. 

Ingo, below delta patch should fix it, IIRC, I hope you will be folding this
back to patch 14/14 itself. Else please let me.
I have already run allnoconfig, allyesconfig, and randconfig with the patch
below, but will test again. It should apply on top of tip:x86/spinlocks.

---8<---
From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Fix Namespace collision for lock_waiting

Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Jeremy Fitzhardinge Aug. 13, 2013, 8 p.m. UTC | #1
On 08/13/2013 01:02 PM, Raghavendra K T wrote:
> * Ingo Molnar <mingo@kernel.org> [2013-08-13 18:55:52]:
>
>> Would be nice to have a delta fix patch against tip:x86/spinlocks, which 
>> I'll then backmerge into that series via rebasing it.
>>
> There was a namespace collision of PER_CPU lock_waiting variable when
> we have both Xen and KVM enabled. 
>
> Perhaps this week wasn't for me. Had run 100 times randconfig in a loop
> for the fix sent earlier :(. 
>
> Ingo, below delta patch should fix it, IIRC, I hope you will be folding this
> back to patch 14/14 itself. Else please let me.
> I have already run allnoconfig, allyesconfig, randconfig with below patch. But will
> test again. This should apply on top of tip:x86/spinlocks.
>
> ---8<---
> From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
>
> Fix Namespace collision for lock_waiting
>
> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index d442471..b8ef630 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -673,7 +673,7 @@ struct kvm_lock_waiting {
>  static cpumask_t waiting_cpus;
>  
>  /* Track spinlock on which a cpu is waiting */
> -static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
> +static DEFINE_PER_CPU(struct kvm_lock_waiting, klock_waiting);

Has static stopped meaning static?

    J

>  
>  static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>  {
> @@ -685,7 +685,7 @@ static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>  	if (in_nmi())
>  		return;
>  
> -	w = &__get_cpu_var(lock_waiting);
> +	w = &__get_cpu_var(klock_waiting);
>  	cpu = smp_processor_id();
>  	start = spin_time_start();
>  
> @@ -756,7 +756,7 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
>  
>  	add_stats(RELEASED_SLOW, 1);
>  	for_each_cpu(cpu, &waiting_cpus) {
> -		const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
> +		const struct kvm_lock_waiting *w = &per_cpu(klock_waiting, cpu);
>  		if (ACCESS_ONCE(w->lock) == lock &&
>  		    ACCESS_ONCE(w->want) == ticket) {
>  			add_stats(RELEASED_SLOW_KICKED, 1);
>
>

Raghavendra K T Aug. 13, 2013, 8:27 p.m. UTC | #2
On 08/14/2013 01:30 AM, Jeremy Fitzhardinge wrote:
> On 08/13/2013 01:02 PM, Raghavendra K T wrote:
[...]
>> Ingo, below delta patch should fix it, IIRC, I hope you will be folding this
>> back to patch 14/14 itself. Else please let me.

It was a typo: s/Else please let me/Else please let me know/

[...]
>> -static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
>> +static DEFINE_PER_CPU(struct kvm_lock_waiting, klock_waiting);
>
> Has static stopped meaning static?
>

I see the static definition is expanded into a weak global one, since we have
CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y for allyesconfig
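
The effect can be sketched outside the kernel. This is a loose illustration,
not the kernel's exact macro (which lives in include/linux/percpu-defs.h and
is more involved): under CONFIG_DEBUG_FORCE_WEAK_PER_CPU, DEFINE_PER_CPU
emits the variable as a weak global plus a strong "unique" guard symbol, so
two files reusing the same name fail at link time instead of silently sharing
storage. File names below are hypothetical stand-ins for arch/x86/kernel/kvm.c
and arch/x86/xen/spinlock.c:

```shell
# Roughly what DEFINE_PER_CPU(..., lock_waiting) expands to in each file
# when DEBUG_FORCE_WEAK_PER_CPU is set: a strong guard symbol plus a weak
# definition of the variable itself.
cat > kvm_side.c <<'EOF'
char __pcpu_unique_lock_waiting = 0;    /* strong guard symbol */
__attribute__((weak)) int lock_waiting; /* the per-cpu variable, now weak */
EOF
cat > xen_side.c <<'EOF'
char __pcpu_unique_lock_waiting = 0;    /* second strong definition... */
__attribute__((weak)) int lock_waiting; /* weak duplicates alone would merge */
EOF
echo 'int main(void) { return 0; }' > main.c

# The duplicate strong guard makes the link fail -- this is the build
# breakage the rename to klock_waiting avoids.
if cc -o demo kvm_side.c xen_side.c main.c 2>/dev/null; then
    echo "link ok"
else
    echo "multiple definition"
fi
```

Without the guard symbol the two weak `lock_waiting` definitions would link
silently into a single shared variable, which is why the config option exists:
it turns a silent cross-file collision into a hard link error.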

Raghavendra K T Aug. 14, 2013, 9:50 a.m. UTC | #3
On 08/14/2013 01:32 AM, Raghavendra K T wrote:
>
> Ingo, below delta patch should fix it, IIRC, I hope you will be folding this
> back to patch 14/14 itself. Else please let me.
> I have already run allnoconfig, allyesconfig, randconfig with below patch. But will
> test again.

I did 2 more runs of allyesconfig and allnoconfig, and 40 runs of
randconfig. The patchset with the fix is looking good now.


Patch

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index d442471..b8ef630 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -673,7 +673,7 @@  struct kvm_lock_waiting {
 static cpumask_t waiting_cpus;
 
 /* Track spinlock on which a cpu is waiting */
-static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
+static DEFINE_PER_CPU(struct kvm_lock_waiting, klock_waiting);
 
 static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
@@ -685,7 +685,7 @@  static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	if (in_nmi())
 		return;
 
-	w = &__get_cpu_var(lock_waiting);
+	w = &__get_cpu_var(klock_waiting);
 	cpu = smp_processor_id();
 	start = spin_time_start();
 
@@ -756,7 +756,7 @@  static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 
 	add_stats(RELEASED_SLOW, 1);
 	for_each_cpu(cpu, &waiting_cpus) {
-		const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
+		const struct kvm_lock_waiting *w = &per_cpu(klock_waiting, cpu);
 		if (ACCESS_ONCE(w->lock) == lock &&
 		    ACCESS_ONCE(w->want) == ticket) {
 			add_stats(RELEASED_SLOW_KICKED, 1);