
[03/11] locking, rwsem: introduce basis for down_write_killable

Message ID 20160331083336.GA27831@dhcp22.suse.cz (mailing list archive)
State New, archived

Commit Message

Michal Hocko March 31, 2016, 8:33 a.m. UTC
On Wed 30-03-16 15:25:49, Peter Zijlstra wrote:
[...]
> Why is the signal_pending_state() test _after_ the call to schedule()
> and before the 'trylock'.

No special reason. I guess I was too focused on the wake_by_signal
path and overlooked the trylock path as well.

> __mutex_lock_common() has it before the call to schedule and after the
> 'trylock'.
> 
> The difference is that rwsem will now respond to the KILL and return
> -EINTR even if the lock is available, whereas mutex will acquire it and
> ignore the signal (for a little while longer).
> 
> Neither is wrong per se, but I feel all the locking primitives should
> behave in a consistent manner in this regard.

Agreed! What about the following on top? I will repost the full patch
if it looks OK.

Thanks!
---

Comments

Peter Zijlstra March 31, 2016, 8:44 a.m. UTC | #1
On Thu, Mar 31, 2016 at 10:33:36AM +0200, Michal Hocko wrote:
> > __mutex_lock_common() has it before the call to schedule and after the
> > 'trylock'.
> > 
> > The difference is that rwsem will now respond to the KILL and return
> > -EINTR even if the lock is available, whereas mutex will acquire it and
> > ignore the signal (for a little while longer).
> > 
> > Neither is wrong per se, but I feel all the locking primitives should
> > behave in a consistent manner in this regard.
> 
> Agreed! What about the following on top? I will repost the full patch
> if it looks OK.

Yep, that seems to have the right shape to it.

Thanks!

Patch

diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c
index d1d04ca10d0e..fb2db7b408f0 100644
--- a/kernel/locking/rwsem-spinlock.c
+++ b/kernel/locking/rwsem-spinlock.c
@@ -216,14 +216,13 @@  int __sched __down_write_state(struct rw_semaphore *sem, int state)
 		 */
 		if (sem->count == 0)
 			break;
-		set_task_state(tsk, state);
-		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
-		schedule();
 		if (signal_pending_state(state, current)) {
 			ret = -EINTR;
-			raw_spin_lock_irqsave(&sem->wait_lock, flags);
 			goto out;
 		}
+		set_task_state(tsk, state);
+		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
+		schedule();
 		raw_spin_lock_irqsave(&sem->wait_lock, flags);
 	}
 	/* got the lock */
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 5cec34f1ad6f..781b2628e41b 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -487,19 +487,19 @@  __rwsem_down_write_failed_state(struct rw_semaphore *sem, int state)
 
 		/* Block until there are no active lockers. */
 		do {
-			schedule();
 			if (signal_pending_state(state, current)) {
 				raw_spin_lock_irq(&sem->wait_lock);
 				ret = ERR_PTR(-EINTR);
 				goto out;
 			}
+			schedule();
 			set_current_state(state);
 		} while ((count = sem->count) & RWSEM_ACTIVE_MASK);
 
 		raw_spin_lock_irq(&sem->wait_lock);
 	}
-	__set_current_state(TASK_RUNNING);
 out:
+	__set_current_state(TASK_RUNNING);
 	list_del(&waiter.list);
 	raw_spin_unlock_irq(&sem->wait_lock);