From patchwork Thu Aug 1 12:05:03 2013
X-Patchwork-Submitter: Xue jiufei
X-Patchwork-Id: 2836954
Message-ID: <51FA4EEF.9050102@huawei.com>
Date: Thu, 1 Aug 2013 20:05:03 +0800
From: Xue jiufei
To: Andrew Morton
Cc: Mark Fasheh, ocfs2-devel@oss.oracle.com
Subject: [Ocfs2-devel] [PATCH] ocfs2: force clean refmap when doing local recovery cleanup
Reply-To: xuejiufei@huawei.com

Function dlm_do_local_recovery_cleanup() should force clean the refmap
if the owner of the lockres is UNKNOWN; otherwise the node may hang
when umounting filesystems. Here's the situation:

   Node 1                                  Node 2
dlmlock()
-> dlm_get_lock_resource()
send DLM_MASTER_REQUEST_MSG to
other nodes

                                  tries to master this lockres,
                                  returns MAYBE

selected as the master of lockresA,
set mle->master to Node 1, and do
assert_master, sending
DLM_ASSERT_MASTER_MSG to Node 2

                                  Node 2 has an interest in lockresA
                                  and returns
                                  DLM_ASSERT_RESPONSE_MASTERY_REF;
                                  then something happens and
                                  Node 2 crashes

on receiving
DLM_ASSERT_RESPONSE_MASTERY_REF,
set Node 2 in the refmap, and keep
sending DLM_ASSERT_MASTER_MSG to
other nodes

o2hb finds Node 2 down, calling
dlm_hb_node_down()
--> dlm_do_local_recovery_cleanup();
the master of lockresA is still
UNKNOWN, so dlm_free_dead_locks()
is not called

set the master of lockresA to
Node 1, but Node 2 still remains
in the refmap

When Node 1 umounts, it finds that the refmap of lockresA is not empty
and attempts to migrate it to Node 2. But Node 2 is already down, so
umount hangs, trying to migrate lockresA again and again.

Signed-off-by: joyce
---
 fs/ocfs2/dlm/dlmrecovery.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
index 773bd32..7b4413d 100644
--- a/fs/ocfs2/dlm/dlmrecovery.c
+++ b/fs/ocfs2/dlm/dlmrecovery.c
@@ -2191,6 +2191,21 @@ static void dlm_revalidate_lvb(struct dlm_ctxt *dlm,
 	}
 }
 
+static void dlm_force_clean_refmap(struct dlm_ctxt *dlm,
+				   struct dlm_lock_resource *res, u8 dead_node)
+{
+	assert_spin_locked(&dlm->spinlock);
+	assert_spin_locked(&res->spinlock);
+
+	if (test_bit(dead_node, res->refmap)) {
+		mlog(0, "%s:%.*s: dead node %u had a ref, but had "
+		     "no locks and had not purged before dying\n",
+		     dlm->name, res->lockname.len,
+		     res->lockname.name, dead_node);
+		dlm_lockres_clear_refmap_bit(dlm, res, dead_node);
+	}
+}
+
 static void dlm_free_dead_locks(struct dlm_ctxt *dlm,
 				struct dlm_lock_resource *res, u8 dead_node)
 {
@@ -2328,7 +2343,8 @@ static void dlm_do_local_recovery_cleanup(struct dlm_ctxt *dlm, u8 dead_node)
 		} else if (res->owner == dlm->node_num) {
 			dlm_free_dead_locks(dlm, res, dead_node);
 			__dlm_lockres_calc_usage(dlm, res);
-		}
+		} else if (res->owner == DLM_LOCK_RES_OWNER_UNKNOWN)
+			dlm_force_clean_refmap(dlm, res, dead_node);
 		spin_unlock(&res->spinlock);
 	}
 }
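
For readers less familiar with the dlm refmap, the following is a
minimal user-space sketch of why a stale refmap bit blocks umount. All
names in it (model_lockres, OWNER_UNKNOWN, recovery_cleanup,
can_purge_on_umount) are hypothetical stand-ins, not the kernel's API;
it only models the bitmap logic described in the scenario above.

/*
 * Minimal model: a lock resource with an owner and a bitmap of nodes
 * holding a reference. These names are illustrative only.
 */
#include <stdio.h>
#include <stdbool.h>

#define OWNER_UNKNOWN 0xff  /* stands in for DLM_LOCK_RES_OWNER_UNKNOWN */

struct model_lockres {
	unsigned char owner;
	unsigned long refmap;  /* bit N set => node N holds a ref */
};

/* Mirrors the patched branch: if the owner is still unknown when the
 * dead node is cleaned up, drop the dead node's ref bit so it cannot
 * linger in the refmap. */
static void recovery_cleanup(struct model_lockres *res,
			     unsigned dead_node, bool force_clean)
{
	if (res->owner == OWNER_UNKNOWN && force_clean)
		res->refmap &= ~(1UL << dead_node);
	res->owner = 1;  /* after recovery, Node 1 becomes the master */
}

/* umount can purge the lockres only when no other node holds a ref;
 * otherwise it keeps trying to migrate the lockres. */
static bool can_purge_on_umount(const struct model_lockres *res,
				unsigned self)
{
	return (res->refmap & ~(1UL << self)) == 0;
}

int main(void)
{
	struct model_lockres res = { .owner = OWNER_UNKNOWN, .refmap = 0 };

	res.refmap |= 1UL << 2;  /* Node 2 answered MASTERY_REF, then died */
	recovery_cleanup(&res, 2, false);
	printf("without force clean: purge ok? %s\n",
	       can_purge_on_umount(&res, 1) ? "yes"
					    : "no (umount would spin)");

	res.owner = OWNER_UNKNOWN;
	res.refmap = 1UL << 2;
	recovery_cleanup(&res, 2, true);
	printf("with force clean:    purge ok? %s\n",
	       can_purge_on_umount(&res, 1) ? "yes" : "no");
	return 0;
}

Without the force clean the stale bit for Node 2 keeps
can_purge_on_umount() false forever, which is the endless-migration
hang the patch fixes.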