From patchwork Fri Sep 18 11:25:46 2015
X-Patchwork-Submitter: Joseph Qi
X-Patchwork-Id: 7215391
Message-ID: <55FBF4BA.30000@huawei.com>
Date: Fri, 18 Sep 2015 19:25:46 +0800
From: Joseph Qi
To: Andrew Morton
Cc: Mark Fasheh, "ocfs2-devel@oss.oracle.com"
Subject: [Ocfs2-devel] [PATCH v2] ocfs2/dlm: fix race between convert and recovery

There is a race window between dlmconvert_remote and
dlm_move_lockres_to_recovery_list that can leave a lock with
OCFS2_LOCK_BUSY set on the granted list, so the system hangs.

dlmconvert_remote
{
        spin_lock(&res->spinlock);
        list_move_tail(&lock->list, &res->converting);
        lock->convert_pending = 1;
        spin_unlock(&res->spinlock);

        status = dlm_send_remote_convert_request();
        >>>>>> race window: the master has queued the ast and returned
        DLM_NORMAL, then goes down before sending the ast.
        This node detects that the master is down and calls
        dlm_move_lockres_to_recovery_list, which reverts the lock to the
        granted list. OCFS2_LOCK_BUSY is then never cleared, because the
        new master will not send the ast again; it believes the convert
        has already been granted.

        spin_lock(&res->spinlock);
        lock->convert_pending = 0;
        if (status != DLM_NORMAL)
                dlm_revert_pending_convert(res, lock);
        spin_unlock(&res->spinlock);
}

In this case, just leave the lock on the converting list; the new master
will take care of it after recovery. If the convert request returns
anything other than DLM_NORMAL, the convert path reverts the lock itself.
So remove the revert logic from dlm_move_lockres_to_recovery_list.

Changelog since v1:
Clean up convert_pending, since it is now unused.

Signed-off-by: Joseph Qi
Reported-by: Yiwen Jiang
Cc:
Reviewed-by: Junxiao Bi
---
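To summarize for reviewers, the sketch below shows where a pending convert
is reverted after this change. It is illustrative only, condensed from the
description above rather than copied from the tree; the function and field
names are the ones touched by the diff, but the bodies are paraphrased.

/* Case 1: dlm_send_remote_convert_request() fails outright.
 * dlmconvert_remote() keeps handling this itself, as before: */
	spin_lock(&res->spinlock);
	res->state &= ~DLM_LOCK_RES_IN_PROGRESS;
	if (status != DLM_NORMAL)
		/* move the lock back to the granted queue */
		dlm_revert_pending_convert(res, lock);
	spin_unlock(&res->spinlock);

/* Case 2: the master answered DLM_NORMAL and then died before the ast
 * arrived.  dlm_move_lockres_to_recovery_list() now leaves the lock on
 * res->converting instead of reverting it, so the new master re-drives
 * the convert and sends the ast during recovery. */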
 fs/ocfs2/dlm/dlmcommon.h   |  1 -
 fs/ocfs2/dlm/dlmconvert.c  |  2 --
 fs/ocfs2/dlm/dlmdebug.c    |  7 +++----
 fs/ocfs2/dlm/dlmlock.c     |  1 -
 fs/ocfs2/dlm/dlmrecovery.c | 10 +---------
 5 files changed, 4 insertions(+), 17 deletions(-)

diff --git a/fs/ocfs2/dlm/dlmcommon.h b/fs/ocfs2/dlm/dlmcommon.h
index e88ccf8..25853d7 100644
--- a/fs/ocfs2/dlm/dlmcommon.h
+++ b/fs/ocfs2/dlm/dlmcommon.h
@@ -369,7 +369,6 @@ struct dlm_lock
 	struct dlm_lockstatus *lksb;
 	unsigned ast_pending:1,
 		 bast_pending:1,
-		 convert_pending:1,
 		 lock_pending:1,
 		 cancel_pending:1,
 		 unlock_pending:1,
diff --git a/fs/ocfs2/dlm/dlmconvert.c b/fs/ocfs2/dlm/dlmconvert.c
index e36d63f..82ba8d5 100644
--- a/fs/ocfs2/dlm/dlmconvert.c
+++ b/fs/ocfs2/dlm/dlmconvert.c
@@ -291,7 +291,6 @@ enum dlm_status dlmconvert_remote(struct dlm_ctxt *dlm,
 	/* move lock to local convert queue */
 	/* do not alter lock refcount.  switching lists. */
 	list_move_tail(&lock->list, &res->converting);
-	lock->convert_pending = 1;
 	lock->ml.convert_type = type;
 
 	if (flags & LKM_VALBLK) {
@@ -315,7 +314,6 @@ enum dlm_status dlmconvert_remote(struct dlm_ctxt *dlm,
 
 	spin_lock(&res->spinlock);
 	res->state &= ~DLM_LOCK_RES_IN_PROGRESS;
-	lock->convert_pending = 0;
 	/* if it failed, move it back to granted queue */
 	if (status != DLM_NORMAL) {
 		if (status != DLM_NOTQUEUED)
diff --git a/fs/ocfs2/dlm/dlmdebug.c b/fs/ocfs2/dlm/dlmdebug.c
index 8251360..710e94c 100644
--- a/fs/ocfs2/dlm/dlmdebug.c
+++ b/fs/ocfs2/dlm/dlmdebug.c
@@ -77,7 +77,7 @@ static void __dlm_print_lock(struct dlm_lock *lock)
 
 	printk("    type=%d, conv=%d, node=%u, cookie=%u:%llu, "
 	       "ref=%u, ast=(empty=%c,pend=%c), bast=(empty=%c,pend=%c), "
-	       "pending=(conv=%c,lock=%c,cancel=%c,unlock=%c)\n",
+	       "pending=(lock=%c,cancel=%c,unlock=%c)\n",
 	       lock->ml.type, lock->ml.convert_type, lock->ml.node,
 	       dlm_get_lock_cookie_node(be64_to_cpu(lock->ml.cookie)),
 	       dlm_get_lock_cookie_seq(be64_to_cpu(lock->ml.cookie)),
@@ -86,7 +86,6 @@ static void __dlm_print_lock(struct dlm_lock *lock)
 	       (lock->ast_pending ? 'y' : 'n'),
 	       (list_empty(&lock->bast_list) ? 'y' : 'n'),
 	       (lock->bast_pending ? 'y' : 'n'),
-	       (lock->convert_pending ? 'y' : 'n'),
 	       (lock->lock_pending ? 'y' : 'n'),
 	       (lock->cancel_pending ? 'y' : 'n'),
 	       (lock->unlock_pending ? 'y' : 'n'));
@@ -502,7 +501,7 @@ static int dump_lock(struct dlm_lock *lock, int list_type, char *buf, int len)
 
 #define DEBUG_LOCK_VERSION	1
 	spin_lock(&lock->spinlock);
-	out = snprintf(buf, len, "LOCK:%d,%d,%d,%d,%d,%d:%lld,%d,%d,%d,%d,%d,"
+	out = snprintf(buf, len, "LOCK:%d,%d,%d,%d,%d,%d:%lld,%d,%d,%d,%d,"
 		       "%d,%d,%d,%d\n",
 		       DEBUG_LOCK_VERSION,
 		       list_type, lock->ml.type, lock->ml.convert_type,
@@ -512,7 +511,7 @@ static int dump_lock(struct dlm_lock *lock, int list_type, char *buf, int len)
 		       !list_empty(&lock->ast_list),
 		       !list_empty(&lock->bast_list),
 		       lock->ast_pending, lock->bast_pending,
-		       lock->convert_pending, lock->lock_pending,
+		       lock->lock_pending,
 		       lock->cancel_pending, lock->unlock_pending,
 		       atomic_read(&lock->lock_refs.refcount));
 	spin_unlock(&lock->spinlock);
diff --git a/fs/ocfs2/dlm/dlmlock.c b/fs/ocfs2/dlm/dlmlock.c
index 66c2a49..857dd15 100644
--- a/fs/ocfs2/dlm/dlmlock.c
+++ b/fs/ocfs2/dlm/dlmlock.c
@@ -411,7 +411,6 @@ static void dlm_init_lock(struct dlm_lock *newlock, int type,
 	newlock->ml.cookie = cpu_to_be64(cookie);
 	newlock->ast_pending = 0;
 	newlock->bast_pending = 0;
-	newlock->convert_pending = 0;
 	newlock->lock_pending = 0;
 	newlock->unlock_pending = 0;
 	newlock->cancel_pending = 0;
diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index 3d90ad7..2402ef7 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -2062,15 +2062,7 @@ void dlm_move_lockres_to_recovery_list(struct dlm_ctxt *dlm,
 		queue = dlm_list_idx_to_ptr(res, i);
 		list_for_each_entry_safe(lock, next, queue, list) {
 			dlm_lock_get(lock);
-			if (lock->convert_pending) {
-				/* move converting lock back to granted */
-				BUG_ON(i != DLM_CONVERTING_LIST);
-				mlog(0, "node died with convert pending "
-				     "on %.*s. move back to granted list.\n",
-				     res->lockname.len, res->lockname.name);
-				dlm_revert_pending_convert(res, lock);
-				lock->convert_pending = 0;
-			} else if (lock->lock_pending) {
+			if (lock->lock_pending) {
 				/* remove pending lock requests completely */
 				BUG_ON(i != DLM_BLOCKED_LIST);
 				mlog(0, "node died with lock pending "