[2/8] ocfs2: fix deadlock when two nodes are converting same lock from PR to EX and idletimeout closes conn

Message ID 20140609200401.3D4AB5A4A6F@corp2gmr1-2.hot.corp.google.com
State New, archived

Commit Message

Andrew Morton June 9, 2014, 8:04 p.m. UTC
From: Tariq Saeed <tariq.x.saeed@oracle.com>
Subject: ocfs2: fix deadlock when two nodes are converting same lock from PR to EX and idletimeout closes conn

Orabug: 18639535

Two-node cluster; both nodes hold a lock at PR level and both want to
convert to EX at the same time.  Master node 1 has sent a BAST and then
closes the connection due to idle timeout.  Node 0 receives the BAST and
sends an unlock request with the cancel flag, but gets error -ENOTCONN.
The problem is that this error is ignored in
dlm_send_remote_unlock_request() on the **incorrect** assumption that the
master is dead.  See the NOTE in the comment for why it returns
DLM_NORMAL.  Upon getting DLM_NORMAL, node 0 proceeds to send the convert
(without the cancel flag), which fails with -ENOTCONN; it waits 5 sec and
resends.  This time it gets DLM_IVLOCKID from the master, since the lock
is not found on the grant queue: it had been moved to the converting
queue in response to the convert PR->EX request.  No way out.

Node 1 (master)				Node 0
==============				======

lock mode PR				PR

convert PR -> EX
mv grant -> convert and queue BAST
...
                     <-------- convert PR -> EX
convert queue looks like this: ((node 1, PR -> EX) (node 0, PR -> EX))
...
			BAST (want PR -> NL)
                     ------------------>
...
idle timeout, conn closed
                                ...
				In response to BAST,
				sends unlock with cancel convert flag
				gets -ENOTCONN. Ignores and
                                sends remote convert request
                                gets -ENOTCONN, waits 5 sec, retries
...
reconnects
                   <----------------- convert req goes through on next try
does not find lock on grant queue
                   status DLM_IVLOCKID
                   ------------------>
...

No way out.  The fix is to keep retrying the unlock with the cancel flag
until it succeeds or the master dies.

Comments

Mark Fasheh June 12, 2014, 11:05 p.m. UTC | #1
On Mon, Jun 09, 2014 at 01:04:00PM -0700, Andrew Morton wrote:
> From: Tariq Saeed <tariq.x.saeed@oracle.com>
> Subject: ocfs2: fix deadlock when two nodes are converting same lock from PR to EX and idletimeout closes conn
> 
> Orabug: 18639535
> 
> Two node cluster and both nodes hold a lock at PR level and both want to
> convert to EX at the same time.  Master node 1 has sent BAST and then
> closes the connection due to idletime out.  Node 0 receives BAST, sends
> unlock req with cancel flag but gets error -ENOTCONN.  The problem is this
> error is ignored in dlm_send_remote_unlock_request() on the **incorrect**
> assumption that the master is dead.  See NOTE in comment why it returns
> DLM_NORMAL.  Upon getting DLM_NORMAL, node 0 proceeds to sends convert
> (without cancel flg) which fails with -ENOTCONN.  waits 5 sec and resends.
>  This time gets DLM_IVLOCKID from the master since lock not found in grant
> , it had been moved to converting queue in response to conv PR->EX req. 
> No way out.
> 
> Node 1 (master)				Node 0
> ==============				======
> 
> lock mode PR				PR
> 
> convert PR -> EX
> mv grant -> convert and que BAST
> ...
>                      <-------- convert PR -> EX
> convert que looks like this: ((node 1, PR -> EX) (node 0, PR -> EX))
> ...
> 			BAST (want PR -> NL)
>                      ------------------>
> ...
> idle timout, conn closed
>                                 ...
> 				In response to BAST,
> 				sends unlock with cancel convert flag
> 				gets -ENOTCONN. Ignores and
>                                 sends remote convert request
>                                 gets -ENOTCONN, waits 5 Sec, retries
> ...
> reconnects
>                    <----------------- convert req goes through on next try
> does not find lock on grant que
>                    status DLM_IVLOCKID
>                    ------------------>
> ...
> 
> No way out.  Fix is to keep retrying unlock with cancel flag until it
> succeeds or the master dies.
> 
> Signed-off-by: Tariq Saeed <tariq.x.saeed@oracle.com>
> Cc: Mark Fasheh <mfasheh@suse.com>
> Cc: Joel Becker <jlbec@evilplan.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

That looks good, thanks Tariq, in particular for the detailed explanation.

Reviewed-by: Mark Fasheh <mfasheh@suse.de>
	--Mark

--
Mark Fasheh

Patch

==============				======

lock mode PR				PR

convert PR -> EX
mv grant -> convert and queue BAST
...
                     <-------- convert PR -> EX
convert queue looks like this: ((node 1, PR -> EX) (node 0, PR -> EX))
...
			BAST (want PR -> NL)
                     ------------------>
...
idle timeout, conn closed
                                ...
				In response to BAST,
				sends unlock with cancel convert flag
				gets -ENOTCONN. Ignores and
                                sends remote convert request
                                gets -ENOTCONN, waits 5 sec, retries
...
reconnects
                   <----------------- convert req goes through on next try
does not find lock on grant queue
                   status DLM_IVLOCKID
                   ------------------>
...

No way out.  The fix is to keep retrying the unlock with the cancel flag
until it succeeds or the master dies.

Signed-off-by: Tariq Saeed <tariq.x.saeed@oracle.com>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 fs/ocfs2/dlm/dlmunlock.c |   18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff -puN fs/ocfs2/dlm/dlmunlock.c~deadlock-when-two-nodes-are-converting-same-lock-from-pr-to-ex-and-idletimeout-closes-conn fs/ocfs2/dlm/dlmunlock.c
--- a/fs/ocfs2/dlm/dlmunlock.c~deadlock-when-two-nodes-are-converting-same-lock-from-pr-to-ex-and-idletimeout-closes-conn
+++ a/fs/ocfs2/dlm/dlmunlock.c
@@ -191,7 +191,9 @@  static enum dlm_status dlmunlock_common(
 				     DLM_UNLOCK_CLEAR_CONVERT_TYPE);
 		} else if (status == DLM_RECOVERING ||
 			   status == DLM_MIGRATING ||
-			   status == DLM_FORWARD) {
+			   status == DLM_FORWARD ||
+			   status == DLM_NOLOCKMGR
+			   ) {
 			/* must clear the actions because this unlock
 			 * is about to be retried.  cannot free or do
 			 * any list manipulation. */
@@ -200,7 +202,8 @@  static enum dlm_status dlmunlock_common(
 			     res->lockname.name,
 			     status==DLM_RECOVERING?"recovering":
 			     (status==DLM_MIGRATING?"migrating":
-			      "forward"));
+				(status == DLM_FORWARD ? "forward" :
+						"nolockmanager")));
 			actions = 0;
 		}
 		if (flags & LKM_CANCEL)
@@ -364,7 +367,10 @@  static enum dlm_status dlm_send_remote_u
 			 * updated state to the recovery master.  this thread
 			 * just needs to finish out the operation and call
 			 * the unlockast. */
-			ret = DLM_NORMAL;
+			if (dlm_is_node_dead(dlm, owner))
+				ret = DLM_NORMAL;
+			else
+				ret = DLM_NOLOCKMGR;
 		} else {
 			/* something bad.  this will BUG in ocfs2 */
 			ret = dlm_err_to_dlm_status(tmpret);
@@ -638,7 +644,9 @@  retry:
 
 	if (status == DLM_RECOVERING ||
 	    status == DLM_MIGRATING ||
-	    status == DLM_FORWARD) {
+	    status == DLM_FORWARD ||
+	    status == DLM_NOLOCKMGR) {
+
 		/* We want to go away for a tiny bit to allow recovery
 		 * / migration to complete on this resource. I don't
 		 * know of any wait queue we could sleep on as this
@@ -650,7 +658,7 @@  retry:
 		msleep(50);
 
 		mlog(0, "retrying unlock due to pending recovery/"
-		     "migration/in-progress\n");
+		     "migration/in-progress/reconnect\n");
 		goto retry;
 	}