
[v10,06/19] qspinlock: prolong the stay in the pending bit path

Message ID 1399474907-22206-7-git-send-email-Waiman.Long@hp.com (mailing list archive)
State New, archived

Commit Message

Waiman Long May 7, 2014, 3:01 p.m. UTC
There is a problem in the current trylock_pending() function.  When the
lock is free, but the pending bit holder hasn't grabbed the lock and
cleared the pending bit yet, the trylock_pending() function will fail.
As a result, the regular queuing code path will be used most of the
time even when only 2 tasks are contending for the lock.  Assuming
that the pending bit holder is going to get the lock and clear the
pending bit soon, it is actually better to wait than to be queued up,
which has higher overhead.
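To make the state concrete, here is a small userspace sketch (not kernel
code; the field names follow the patch below, but the exact bit positions
are assumptions) showing why the old check queues in the pending-only
state while the new check does not:

  #include <stdio.h>

  #define _Q_LOCKED_VAL   (1U << 0)   /* lock byte held             */
  #define _Q_PENDING_VAL  (1U << 8)   /* pending bit set            */
  #define _Q_LOCKED_MASK  0xffU       /* low byte = lock            */
  #define _Q_TAIL_MASK    (~0U << 16) /* MCS queue tail (CPU index) */

  int main(void)
  {
  	/* Lock is free, but the pending holder hasn't taken it yet. */
  	unsigned int val = _Q_PENDING_VAL;

  	/* Old check: any bit outside the lock byte forces queuing... */
  	printf("old check queues: %d\n", !!(val & ~_Q_LOCKED_MASK)); /* 1 */
  	/* ...even though nobody is actually queued behind us. */
  	printf("new check queues: %d\n", !!(val & _Q_TAIL_MASK));    /* 0 */
  	return 0;
  }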

This patch modifies the trylock_pending() function to wait until the
pending bit holder gets the lock and clears the pending bit. If both
the lock and pending bits are set, the new code will also wait a bit
to see if either one is cleared. If neither is cleared, it quits and
gets queued.
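Condensed into a per-iteration decision, the new behavior looks roughly
like the following (a simplified userspace model of the hunk at the
bottom of this page; decide() and the enum are made-up names used only
for illustration):

  #define _Q_LOCKED_VAL   (1U << 0)
  #define _Q_PENDING_VAL  (1U << 8)
  #define _Q_TAIL_MASK    (~0U << 16)

  enum action { QUEUE, WAIT_BOUNDED, WAIT, TRY_ACQUIRE };

  /* What the patched trylock_pending() does with one observed lock value;
   * 'retry' is the remaining wait budget (initialized to 1 in the patch). */
  static enum action decide(unsigned int val, int retry)
  {
  	if (val & _Q_TAIL_MASK)
  		return QUEUE;                        /* queue not empty: go queue */
  	if (val == (_Q_LOCKED_VAL | _Q_PENDING_VAL))
  		return retry ? WAIT_BOUNDED : QUEUE; /* both set: wait briefly    */
  	if (val == _Q_PENDING_VAL)
  		return WAIT;                         /* lock free, pending set    */
  	return TRY_ACQUIRE;                          /* try lock/pending bit      */
  }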

The following tables show the before-patch execution times (in ms)
of a micro-benchmark where 5M iterations of the lock/unlock cycle
were run on a 10-core Westmere-EX x86-64 CPU with 2 different types of
loads - standalone (lock and protected data in different cachelines)
and embedded (lock and protected data in the same cacheline).

		  [Standalone/Embedded - same node]
  # of tasks	Ticket lock	Queue lock	 %Change
  ----------	-----------	----------	 -------
       1	  135/ 111	 135/ 101	   0%/  -9%
       2	  890/ 779	1885/1990	+112%/+156%
       3	 1932/1859	2333/2341	 +21%/ +26%
       4	 2829/2726	2900/2923	  +3%/  +7%
       5	 3834/3761	3655/3648	  -5%/  -3%
       6	 4963/4976	4336/4326	 -13%/ -13%
       7	 6299/6269	5057/5064	 -20%/ -19%
       8	 7691/7569	5786/5798	 -25%/ -23%

With 1 task per NUMA node, the execution times are:

		[Standalone - different nodes]
  # of nodes	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	   135		  135		  0%
       2	  4604		 5087		+10%
       3	 10940		12224		+12%
       4	 21555		10555		-51%

It can be seen that the queue spinlock is slower than the ticket
spinlock when there are 2 or 3 contending tasks. In all the other
cases, the queue spinlock is either equal to or faster than the ticket
spinlock.
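For reference, the "standalone" and "embedded" loads described above
correspond roughly to layouts like the following (a userspace sketch
with made-up names and an assumed 64-byte cacheline; the actual
benchmark source is not part of this patch):

  #include <pthread.h>

  /* Embedded: lock and protected data share one cacheline. */
  struct embedded {
  	pthread_spinlock_t lock;
  	unsigned long data;
  } __attribute__((aligned(64)));

  /* Standalone: padding pushes the data into a different cacheline. */
  struct standalone {
  	pthread_spinlock_t lock;
  	char pad[64 - sizeof(pthread_spinlock_t)];
  	unsigned long data;	/* starts at offset 64 */
  } __attribute__((aligned(64)));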

With this patch, the performance data for 2 contending tasks are:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       2	  890/779	 984/871	+11%/+12%

		[Standalone - different nodes]
  # of nodes	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       2	  4604		   1364		  -70%

It can be seen that the queue spinlock performance for 2 contending
tasks is now comparable to the ticket spinlock on the same node, but
much faster when the tasks are on different nodes. With 3 contending
tasks, however, the ticket spinlock is still quite a bit faster.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 kernel/locking/qspinlock.c |   31 +++++++++++++++++++++++++++++--
 1 files changed, 29 insertions(+), 2 deletions(-)

Comments

Peter Zijlstra May 8, 2014, 6:58 p.m. UTC | #1
On Wed, May 07, 2014 at 11:01:34AM -0400, Waiman Long wrote:
> @@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>  	 */
>  	for (;;) {
>  		/*
> -		 * If we observe any contention; queue.
> +		 * If we observe that the queue is not empty,
> +		 * return and be queued.
>  		 */
> -		if (val & ~_Q_LOCKED_MASK)
> +		if (val & _Q_TAIL_MASK)
>  			return 0;
>  
> +		if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
> +			/*
> +			 * If both the lock and pending bits are set, we wait
> +			 * a while to see if either bit will be cleared.
> +			 * If there is no change, we return and get queued.
> +			 */
> +			if (!retry)
> +				return 0;
> +			retry--;
> +			cpu_relax();
> +			cpu_relax();
> +			*pval = val = atomic_read(&lock->val);
> +			continue;
> +		} else if (val == _Q_PENDING_VAL) {
> +			/*
> +			 * Pending bit is set, but not the lock bit.
> +			 * Assuming that the pending bit holder is going to
> +			 * set the lock bit and clear the pending bit soon,
> +			 * it is better to wait than to exit at this point.
> +			 */
> +			cpu_relax();
> +			*pval = val = atomic_read(&lock->val);
> +			continue;
> +		}

Didn't I give a much saner alternative to this mess last time?
Waiman Long May 10, 2014, 12:58 a.m. UTC | #2
On 05/08/2014 02:58 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:34AM -0400, Waiman Long wrote:
>> @@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>>   	 */
>>   	for (;;) {
>>   		/*
>> -		 * If we observe any contention; queue.
>> +		 * If we observe that the queue is not empty,
>> +		 * return and be queued.
>>   		 */
>> -		if (val&  ~_Q_LOCKED_MASK)
>> +		if (val&  _Q_TAIL_MASK)
>>   			return 0;
>>
>> +		if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
>> +			/*
>> +			 * If both the lock and pending bits are set, we wait
>> +			 * a while to see if either bit will be cleared.
>> +			 * If there is no change, we return and get queued.
>> +			 */
>> +			if (!retry)
>> +				return 0;
>> +			retry--;
>> +			cpu_relax();
>> +			cpu_relax();
>> +			*pval = val = atomic_read(&lock->val);
>> +			continue;
>> +		} else if (val == _Q_PENDING_VAL) {
>> +			/*
>> +			 * Pending bit is set, but not the lock bit.
>> +			 * Assuming that the pending bit holder is going to
>> +			 * set the lock bit and clear the pending bit soon,
>> +			 * it is better to wait than to exit at this point.
>> +			 */
>> +			cpu_relax();
>> +			*pval = val = atomic_read(&lock->val);
>> +			continue;
>> +		}
> Didn't I give a much saner alternative to this mess last time?

I don't recall you giving any suggestion last time. Anyway, if you think 
the code is too messy, I can give up the first if statement, which is 
more of an optimistic-spinning kind of code for short critical sections. 
The 2nd if statement is still needed to improve the chance of using this 
code path, due to timing reasons. I will rerun my performance tests to 
make sure it won't have too much performance impact.

-Longman
Peter Zijlstra May 10, 2014, 1:38 p.m. UTC | #3
On Fri, May 09, 2014 at 08:58:47PM -0400, Waiman Long wrote:
> On 05/08/2014 02:58 PM, Peter Zijlstra wrote:
> >On Wed, May 07, 2014 at 11:01:34AM -0400, Waiman Long wrote:
> >>@@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> >>  	 */
> >>  	for (;;) {
> >>  		/*
> >>-		 * If we observe any contention; queue.
> >>+		 * If we observe that the queue is not empty,
> >>+		 * return and be queued.
> >>  		 */
> >>-		if (val&  ~_Q_LOCKED_MASK)
> >>+		if (val&  _Q_TAIL_MASK)
> >>  			return 0;
> >>
> >>+		if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
> >>+			/*
> >>+			 * If both the lock and pending bits are set, we wait
> >>+			 * a while to see if either bit will be cleared.
> >>+			 * If there is no change, we return and get queued.
> >>+			 */
> >>+			if (!retry)
> >>+				return 0;
> >>+			retry--;
> >>+			cpu_relax();
> >>+			cpu_relax();
> >>+			*pval = val = atomic_read(&lock->val);
> >>+			continue;
> >>+		} else if (val == _Q_PENDING_VAL) {
> >>+			/*
> >>+			 * Pending bit is set, but not the lock bit.
> >>+			 * Assuming that the pending bit holder is going to
> >>+			 * set the lock bit and clear the pending bit soon,
> >>+			 * it is better to wait than to exit at this point.
> >>+			 */
> >>+			cpu_relax();
> >>+			*pval = val = atomic_read(&lock->val);
> >>+			continue;
> >>+		}
> >Didn't I give a much saner alternative to this mess last time?
> 
> I don't recall you giving any suggestion last time. Anyway, if you think the
> code is too messy, I can give up the first if statement, which is more of an
> optimistic-spinning kind of code for short critical sections. The 2nd if
> statement is still needed to improve the chance of using this code path, due
> to timing reasons. I will rerun my performance tests to make sure it won't
> have too much performance impact.

lkml.kernel.org/r/20140417163640.GT11096@twins.programming.kicks-ass.net

Patch

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 3e908f7..e734acb 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -212,6 +212,7 @@  xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
 static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 {
 	u32 old, new, val = *pval;
+	int retry = 1;
 
 	/*
 	 * trylock || pending
@@ -221,11 +222,37 @@  static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 	 */
 	for (;;) {
 		/*
-		 * If we observe any contention; queue.
+		 * If we observe that the queue is not empty,
+		 * return and be queued.
 		 */
-		if (val & ~_Q_LOCKED_MASK)
+		if (val & _Q_TAIL_MASK)
 			return 0;
 
+		if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) {
+			/*
+			 * If both the lock and pending bits are set, we wait
+			 * a while to see if either bit will be cleared.
+			 * If there is no change, we return and get queued.
+			 */
+			if (!retry)
+				return 0;
+			retry--;
+			cpu_relax();
+			cpu_relax();
+			*pval = val = atomic_read(&lock->val);
+			continue;
+		} else if (val == _Q_PENDING_VAL) {
+			/*
+			 * Pending bit is set, but not the lock bit.
+			 * Assuming that the pending bit holder is going to
+			 * set the lock bit and clear the pending bit soon,
+			 * it is better to wait than to exit at this point.
+			 */
+			cpu_relax();
+			*pval = val = atomic_read(&lock->val);
+			continue;
+		}
+
 		new = _Q_LOCKED_VAL;
 		if (val == new)
 			new |= _Q_PENDING_VAL;