
[v2,02/14] KVM: x86: simplify atomics in kvm_pit_ack_irq

Message ID 1455736496-374-3-git-send-email-rkrcmar@redhat.com (mailing list archive)
State New, archived

Commit Message

Radim Krčmář Feb. 17, 2016, 7:14 p.m. UTC
We already have a helper that does the same thing.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
---
 v2: pending == 1 means ours was the only outstanding interrupt => don't reinject.

 arch/x86/kvm/i8254.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)
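
For reference, atomic_dec_if_positive() stores the decrement only when the
result stays non-negative and returns the decremented value either way, so a
spurious ack (pending == 0) leaves the counter untouched and returns -1.  A
sketch of its semantics, close to the generic cmpxchg loop in
include/linux/atomic.h (architectures may provide an optimized version):

	static inline int atomic_dec_if_positive(atomic_t *v)
	{
		int c, old, dec;

		c = atomic_read(v);
		for (;;) {
			dec = c - 1;
			if (dec < 0)		/* would go negative: don't store */
				break;
			old = atomic_cmpxchg(v, c, dec);
			if (old == c)		/* decrement stored */
				break;
			c = old;		/* lost a race: retry */
		}
		return dec;
	}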

Comments

Paolo Bonzini Feb. 18, 2016, 6:04 p.m. UTC | #1
On 17/02/2016 20:14, Radim Krčmář wrote:
> -	value = atomic_dec_return(&ps->pending);
> -	if (value < 0)
> -		/* spurious acks can be generated if, for example, the
> -		 * PIC is being reset.  Handle it gracefully here
> -		 */
> -		atomic_inc(&ps->pending);
> -	else if (value > 0 && ps->reinject)
> -		/* in this case, we had multiple outstanding pit interrupts
> -		 * that we needed to inject.  Reinject
> -		 */
> +	if (atomic_dec_if_positive(&ps->pending) > 0 && ps->reinject)
>  		queue_kthread_work(&ps->pit->worker, &ps->pit->expired);

Here it would have made sense to do already

	if (!ps->reinject) {
		WARN_ON_ONCE(ps->pending || !ps->irq_ack);
		return;
	}
	spin_lock(...)
	if (atomic_dec_if_positive(&ps->pending) > 0)
		queue_kthread_work(...);
	ps->irq_ack = 1;
	spin_unlock(...)
	
because ps->pending is only ever nonzero, and irq_ack is only ever zero,
if ps->reinject.  Not a big deal since the ack notifier is going to
disappear altogether for the discard policy, but the nice thing is that
it lets you remove the ack notifier earlier and disentangle discard
mode a bit more.

So if you want for v3 you can reorder the patches like this:

- patch 1, same

- patch 2, what is outlined above

- patch 3, remove ack notifier for discard

- patch 4..14 the rest

Paolo
Radim Krčmář Feb. 19, 2016, 3:51 p.m. UTC | #2
2016-02-18 19:04+0100, Paolo Bonzini:
> On 17/02/2016 20:14, Radim Krčmář wrote:
>> -	value = atomic_dec_return(&ps->pending);
>> -	if (value < 0)
>> -		/* spurious acks can be generated if, for example, the
>> -		 * PIC is being reset.  Handle it gracefully here
>> -		 */
>> -		atomic_inc(&ps->pending);
>> -	else if (value > 0 && ps->reinject)
>> -		/* in this case, we had multiple outstanding pit interrupts
>> -		 * that we needed to inject.  Reinject
>> -		 */
>> +	if (atomic_dec_if_positive(&ps->pending) > 0 && ps->reinject)
>>  		queue_kthread_work(&ps->pit->worker, &ps->pit->expired);
> 
> Here it would have made sense to do already
> 
> 	if (!ps->reinject) {
> 		WARN_ON_ONCE(ps->pending || !ps->irq_ack);
> 		return;
> 	}

I will add the WARN_ON when removing discard notifiers.

> 	spin_lock(...)
> 	if (atomic_dec_if_positive(&ps->pending) > 0)
> 		queue_kthread_work(...);
> 	ps->irq_ack = 1;
> 	spin_unlock(...)
> 	
> because ps->pending is only ever nonzero, and irq_ack is only ever zero,
> if ps->reinject.

(Well, userspace can switch between policies at runtime.)

>                   Not a big deal since the ack notifier is going to
> disappear altogether for the discard policy, but the nice thing is that
> it lets you remove the ack notifier earlier and disentangle discard
> mode a bit more.
> 
> So if you want for v3 you can reorder the patches like this:

The end result is going to be identical.  I had a version that did
something similar and it was pretty tangled as well -- I wanted to
remove useless locks before re-using one for the ioctls.
(We need the protection earlier, because userspace can control notifiers
 while PIT is still being initialized.  And removing the lock had
 dependencies.)

> 
> - patch 1, same
> 
> - patch 2, what is outlined above
> 
> - patch 3, remove ack notifier for discard

I agree that the current ordering looks weird.  The dependency tree looked
like this in my mind:

-[1/14]
 ,-[2/14]
-[4/14]
 ,-[3/14]
 |   ,-[5/14]
 | ,-[6/14]
 +-[7/14]
-[8/14]
-[9-14/14]

I added [2-4/14] early (and a bit out of order), because it made diffs
shorter.  The dependency on [7/14] can be dropped with correct mutexing inside
initialization, so the v3 order would be:

-[1/14]
 ,-[3/14]
-[8/14]
 ,-[2/14]
-[4/14]
   ,-[5/14]
 ,-[6/14]
-[7/14]
-[9-14/14]

With [8/14] (remove ack notifier for discard) as third.

Would that be ok?

> - patch 4..14 the rest
> 
> Paolo
Paolo Bonzini Feb. 19, 2016, 3:56 p.m. UTC | #3
On 19/02/2016 16:51, Radim Krčmář wrote:
> The end result is going to be identical.  I had a version that did
> something similar and it was pretty tangled as well -- I wanted to
> remove useless locks before re-using one for the ioctls.
> (We need the protection earlier, because userspace can control notifiers
>  while PIT is still being initialized.  And removing the lock had
>  dependencies.)

Yeah, I eventually figured that cleaning up the locks helps with the
patch that adds/removes the notifiers dynamically.  Then I guess your
current ordering of the patches is good!

Paolo

Patch

diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index ab5318727579..7d694ac7f4a4 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -236,19 +236,9 @@  static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
 {
 	struct kvm_kpit_state *ps = container_of(kian, struct kvm_kpit_state,
 						 irq_ack_notifier);
-	int value;
 
 	spin_lock(&ps->inject_lock);
-	value = atomic_dec_return(&ps->pending);
-	if (value < 0)
-		/* spurious acks can be generated if, for example, the
-		 * PIC is being reset.  Handle it gracefully here
-		 */
-		atomic_inc(&ps->pending);
-	else if (value > 0 && ps->reinject)
-		/* in this case, we had multiple outstanding pit interrupts
-		 * that we needed to inject.  Reinject
-		 */
+	if (atomic_dec_if_positive(&ps->pending) > 0 && ps->reinject)
 		queue_kthread_work(&ps->pit->worker, &ps->pit->expired);
 	ps->irq_ack = 1;
 	spin_unlock(&ps->inject_lock);
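
For reference, kvm_pit_ack_irq() after this patch, reassembled from the hunk
above:

	static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
	{
		struct kvm_kpit_state *ps = container_of(kian, struct kvm_kpit_state,
							 irq_ack_notifier);

		spin_lock(&ps->inject_lock);
		/* Skips the decrement on a spurious ack (pending == 0);
		 * returns > 0 iff more interrupts are still outstanding. */
		if (atomic_dec_if_positive(&ps->pending) > 0 && ps->reinject)
			queue_kthread_work(&ps->pit->worker, &ps->pit->expired);
		ps->irq_ack = 1;
		spin_unlock(&ps->inject_lock);
	}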