Message ID | 55FABD65.4010107@huawei.com (mailing list archive)
---|---
State | New, archived
Hi Joseph,

On 09/17/2015 09:17 PM, Joseph Qi wrote:
> There is a race window between dlmconvert_remote and
> dlm_move_lockres_to_recovery_list, which will cause a lock with
> OCFS2_LOCK_BUSY in grant list, thus system hangs.
>
> dlmconvert_remote
> {
>     spin_lock(&res->spinlock);
>     list_move_tail(&lock->list, &res->converting);
>     lock->convert_pending = 1;
>     spin_unlock(&res->spinlock);
>
>     status = dlm_send_remote_convert_request();
>     >>>>>> race window, master has queued ast and return DLM_NORMAL,
>            and then down before sending ast.
>            this node detects master down and call
>            dlm_move_lockres_to_recovery_list, which will revert the
>            lock to grant list.
>            Then OCFS2_LOCK_BUSY won't be cleared as new master won't
>            send ast any more because it thinks already be authorized.
>
>     spin_lock(&res->spinlock);
>     lock->convert_pending = 0;
>     if (status != DLM_NORMAL)
>         dlm_revert_pending_convert(res, lock);
>     spin_unlock(&res->spinlock);
> }
>
> In this case, just leave it in convert list and new master will take
> care of it after recovery. And if convert request returns other than
> DLM_NORMAL, convert thread will do the revert itself.
> So remove the revert logic in dlm_move_lockres_to_recovery_list.

Yes, looks good. The lock was already in the convert list, and the
recovery process will shuffle the list and send the ast again. So why
not clean up convert_pending? It is useless now.

The same thing happens for lock_pending: the lock is already in the
blocked list. I think it can also be removed.

Thanks,
Junxiao.
> Reported-by: Yiwen Jiang <jiangyiwen@huawei.com>
> Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
> Cc: <stable@vger.kernel.org>
> ---
>  fs/ocfs2/dlm/dlmrecovery.c | 10 +---------
>  1 file changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
> index ce12e0b..d797d49 100644
> --- a/fs/ocfs2/dlm/dlmrecovery.c
> +++ b/fs/ocfs2/dlm/dlmrecovery.c
> @@ -2058,15 +2058,7 @@ void dlm_move_lockres_to_recovery_list(struct dlm_ctxt *dlm,
>  		queue = dlm_list_idx_to_ptr(res, i);
>  		list_for_each_entry_safe(lock, next, queue, list) {
>  			dlm_lock_get(lock);
> -			if (lock->convert_pending) {
> -				/* move converting lock back to granted */
> -				BUG_ON(i != DLM_CONVERTING_LIST);
> -				mlog(0, "node died with convert pending "
> -				     "on %.*s. move back to granted list.\n",
> -				     res->lockname.len, res->lockname.name);
> -				dlm_revert_pending_convert(res, lock);
> -				lock->convert_pending = 0;
> -			} else if (lock->lock_pending) {
> +			if (lock->lock_pending) {
>  				/* remove pending lock requests completely */
>  				BUG_ON(i != DLM_BLOCKED_LIST);
>  				mlog(0, "node died with lock pending "
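The race in the patch description can be sketched as a tiny userspace model. This is not the ocfs2 kernel code: `toy_lock`, the `busy` flag standing in for OCFS2_LOCK_BUSY, and both recovery helpers are illustrative assumptions only.

```c
#include <assert.h>

/* Toy model of the race: the remote convert returns DLM_NORMAL
 * (the master has queued the ast), the master dies before the ast
 * is actually sent, and recovery then decides what to do with the
 * half-finished convert. */
struct toy_lock {
	int busy;          /* stands in for OCFS2_LOCK_BUSY */
	int on_converting; /* 1 = still on res->converting */
};

/* Old behaviour: dlm_move_lockres_to_recovery_list reverts the
 * pending convert back to the granted list, so the new master
 * never learns a convert was in flight. */
static void recover_with_revert(struct toy_lock *l)
{
	l->on_converting = 0;
}

/* Patched behaviour: the lock stays on the converting list, so
 * the new master re-drives the convert and the re-sent ast
 * finally clears the busy flag. */
static void recover_without_revert(struct toy_lock *l)
{
	if (l->on_converting)
		l->busy = 0; /* re-sent ast completes the convert */
}
```

With the old revert, `busy` is never cleared and the waiter hangs; with the patch, the re-driven ast completes the convert.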
On 09/17/2015 06:17 AM, Joseph Qi wrote:
> There is a race window between dlmconvert_remote and
> dlm_move_lockres_to_recovery_list, which will cause a lock with
> OCFS2_LOCK_BUSY in grant list, thus system hangs.
>
> dlmconvert_remote
> {
>     spin_lock(&res->spinlock);
>     list_move_tail(&lock->list, &res->converting);
>     lock->convert_pending = 1;
>     spin_unlock(&res->spinlock);
>
>     status = dlm_send_remote_convert_request();
>     >>>>>> race window, master has queued ast and return DLM_NORMAL,
>            and then down before sending ast.
>            this node detects master down and call
>            dlm_move_lockres_to_recovery_list, which will revert the
>            lock to grant list.
>            Then OCFS2_LOCK_BUSY won't be cleared as new master won't
>            send ast any more because it thinks already be authorized.
>
>     spin_lock(&res->spinlock);
>     lock->convert_pending = 0;
>     if (status != DLM_NORMAL)
>         dlm_revert_pending_convert(res, lock);
>     spin_unlock(&res->spinlock);
> }
>
> In this case, just leave it in convert list and new master will take
> care of it after recovery. And if convert request returns other than
> DLM_NORMAL, convert thread will do the revert itself.
> So remove the revert logic in dlm_move_lockres_to_recovery_list.

First, I have a question about the old code you are removing: how did
this scenario ever work? The convert request has been sent to the
master, and the caller of __ocfs2_cluster_lock() is waiting in
ocfs2_wait_for_mask(&mw) when the master dies. The new master receives
the res with our lock in the granted state (since we move it there in
dlm_move_lockres_to_recovery_list), so it has no idea that this lock
needs up-conversion. How will it get converted?

Now, back to your fix. This is my understanding: if we leave the lock
in the converting list as you propose, the new master will also have it
in its converting list, which is its correct place. dlm_shuffle_lists
runs after recovery and the new master will behave just like the
original master would have: if the lock can be converted, it will send
an AST; otherwise the lock will wait until it can be, just as under the
old master. Am I correct?

Thanks,
-Tariq
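The "send an AST if convertible, otherwise wait" decision Tariq describes can be sketched with the usual NL/PR/EX compatibility rules. This is a toy model, not the kernel's dlm_shuffle_lists; `can_grant` and its signature are illustrative assumptions (only the mode values mirror LKM_NLMODE/LKM_PRMODE/LKM_EXMODE).

```c
#include <assert.h>

/* Subset of DLM lock modes (values mirror the LKM_* constants). */
enum mode { NL = 0, PR = 3, EX = 5 };

/* Standard compatibility for the NL/PR/EX subset: NL is compatible
 * with everything, PR only with NL and PR, EX only with NL. */
static int compatible(enum mode a, enum mode b)
{
	if (a == NL || b == NL)
		return 1;
	return a == PR && b == PR;
}

/* Toy version of the post-recovery decision: grant the convert
 * (i.e. send the ast) only if the requested mode is compatible
 * with every other holder; otherwise the lock keeps waiting on
 * the converting list. */
static int can_grant(enum mode want, const enum mode *others, int n)
{
	for (int i = 0; i < n; i++)
		if (!compatible(want, others[i]))
			return 0; /* keep waiting on res->converting */
	return 1; /* master sends the ast */
}
```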
On 2015/9/18 10:41, Junxiao Bi wrote:
> Hi Joseph,
>
> On 09/17/2015 09:17 PM, Joseph Qi wrote:
>> There is a race window between dlmconvert_remote and
>> dlm_move_lockres_to_recovery_list, which will cause a lock with
>> OCFS2_LOCK_BUSY in grant list, thus system hangs.
>>
>> [...]
>>
>> In this case, just leave it in convert list and new master will take
>> care of it after recovery. And if convert request returns other than
>> DLM_NORMAL, convert thread will do the revert itself.
>> So remove the revert logic in dlm_move_lockres_to_recovery_list.
>
> Yes, looks good. The lock was already in the convert list, and the
> recovery process will shuffle the list and send the ast again. So why
> not clean up convert_pending? It is useless now.

You are right, convert_pending is now useless. I will send a new
version later.
One more concern: does this have any relation to the LVB?

> The same thing happens for lock_pending: the lock is already in the
> blocked list. I think it can also be removed.

I'll investigate it.

> Thanks,
> Junxiao.
On 2015/9/18 12:26, Tariq Saeed wrote:
> On 09/17/2015 06:17 AM, Joseph Qi wrote:
>> There is a race window between dlmconvert_remote and
>> dlm_move_lockres_to_recovery_list, which will cause a lock with
>> OCFS2_LOCK_BUSY in grant list, thus system hangs.
>>
>> [...]
>>
>> In this case, just leave it in convert list and new master will take
>> care of it after recovery. And if convert request returns other than
>> DLM_NORMAL, convert thread will do the revert itself.
>> So remove the revert logic in dlm_move_lockres_to_recovery_list.
>
> First, I have a question about the old code you are removing: how did
> this scenario ever work? The convert request has been sent to the
> master, and the caller of __ocfs2_cluster_lock() is waiting in
> ocfs2_wait_for_mask(&mw) when the master dies. The new master receives
> the res with our lock in the granted state (since we move it there in
> dlm_move_lockres_to_recovery_list), so it has no idea that this lock
> needs up-conversion. How will it get converted?

As you can see, dlm_shuffle_lists won't handle this kind of lock, so
the convert would never finish.

> Now, back to your fix. This is my understanding: if we leave the lock
> in the converting list as you propose, the new master will also have
> it in its converting list, which is its correct place.
> dlm_shuffle_lists runs after recovery and the new master will behave
> just like the original master would have: if the lock can be
> converted, it will send an AST; otherwise the lock will wait until it
> can be, just as under the old master. Am I correct?

Yes.

> Thanks,
> -Tariq
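Joseph's point that dlm_shuffle_lists never handles a convert stranded on the granted list can be made concrete with a toy shuffle pass. This is illustrative only: the real function walks res->converting and res->blocked and does far more than set a flag.

```c
#include <assert.h>

enum queue { GRANTED, CONVERTING, BLOCKED };

struct toy_lock {
	enum queue q;
	int redriven; /* did the post-recovery pass pick this lock up? */
};

/* Toy shuffle: like dlm_shuffle_lists, it only considers locks
 * waiting on the converting and blocked queues. A pending convert
 * that recovery moved back to the granted queue is never looked
 * at again, which is exactly the hang in the patch description. */
static void toy_shuffle(struct toy_lock *locks, int n)
{
	for (int i = 0; i < n; i++)
		if (locks[i].q == CONVERTING || locks[i].q == BLOCKED)
			locks[i].redriven = 1;
}
```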
On 09/18/2015 03:25 PM, Joseph Qi wrote:
> On 2015/9/18 10:41, Junxiao Bi wrote:
>> Hi Joseph,
>>
>> On 09/17/2015 09:17 PM, Joseph Qi wrote:
>>> There is a race window between dlmconvert_remote and
>>> dlm_move_lockres_to_recovery_list, which will cause a lock with
>>> OCFS2_LOCK_BUSY in grant list, thus system hangs.
>>>
>>> [...]
>>
>> Yes, looks good. The lock was already in the convert list, and the
>> recovery process will shuffle the list and send the ast again. So why
>> not clean up convert_pending? It is useless now.
>
> You are right, convert_pending is now useless. I will send a new
> version later.
> One more concern: does this have any relation to the LVB?

I can't see how this affects the LVB. The LVB takes effect after the
convert is done, but here the convert is still in progress.

Thanks,
Junxiao.

>
>> The same thing happens for lock_pending: the lock is already in the
>> blocked list. I think it can also be removed.
>
> I'll investigate it.
diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index ce12e0b..d797d49 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -2058,15 +2058,7 @@ void dlm_move_lockres_to_recovery_list(struct dlm_ctxt *dlm,
 		queue = dlm_list_idx_to_ptr(res, i);
 		list_for_each_entry_safe(lock, next, queue, list) {
 			dlm_lock_get(lock);
-			if (lock->convert_pending) {
-				/* move converting lock back to granted */
-				BUG_ON(i != DLM_CONVERTING_LIST);
-				mlog(0, "node died with convert pending "
-				     "on %.*s. move back to granted list.\n",
-				     res->lockname.len, res->lockname.name);
-				dlm_revert_pending_convert(res, lock);
-				lock->convert_pending = 0;
-			} else if (lock->lock_pending) {
+			if (lock->lock_pending) {
 				/* remove pending lock requests completely */
 				BUG_ON(i != DLM_BLOCKED_LIST);
 				mlog(0, "node died with lock pending "