From patchwork Thu Apr 30 06:16:36 2015
X-Patchwork-Submitter: Junxiao Bi
X-Patchwork-Id: 6299921
Message-ID: <5541C8C4.1070005@oracle.com>
Date: Thu, 30 Apr 2015 14:16:36 +0800
From: Junxiao Bi
To: Joseph Qi, Andrew Morton
Cc: Mark Fasheh, "ocfs2-devel@oss.oracle.com"
References: <553B6FF3.8010805@huawei.com>
In-Reply-To: <553B6FF3.8010805@huawei.com>
Subject: Re: [Ocfs2-devel] [PATCH v2] ocfs2/dlm: fix race between purge and get lock resource

On 04/25/2015 06:44 PM, Joseph Qi wrote:
> There is a race between purge and get lock resource, which will leave the
> ast unfinished and hang the system.
> The case is described below:
>
> mkdir                                    dlm_thread
> -----------------------------------------------------------------------
> o2cb_dlm_lock                           |
> -> dlmlock                              |
>  -> dlm_get_lock_resource               |
>   -> __dlm_lookup_lockres_full          |
>   -> spin_unlock(&dlm->spinlock)        |
>                                         | dlm_run_purge_list
>                                         |  -> dlm_purge_lockres
>                                         |   -> dlm_drop_lockres_ref
>                                         |   -> spin_lock(&dlm->spinlock)
>                                         |   -> spin_lock(&res->spinlock)
>                                         |   -> ~DLM_LOCK_RES_DROPPING_REF
>                                         |   -> spin_unlock(&res->spinlock)
>                                         |   -> spin_unlock(&dlm->spinlock)
>   -> spin_lock(&tmpres->spinlock)       |
>      DLM_LOCK_RES_DROPPING_REF cleared  |
>   -> spin_unlock(&tmpres->spinlock)     |
>      return the purged lockres          |
>
> So after this, once the ast comes, it will be ignored because the
> lockres can no longer be found. Thus OCFS2_LOCK_BUSY won't be
> cleared and the corresponding thread hangs.
> The &dlm->spinlock was held when checking DLM_LOCK_RES_DROPPING_REF at
> the very beginning, and commit 7b791d68562e ("ocfs2/dlm: Fix race during
> lockres mastery") moved it up because of the possible wait.
> So take the &dlm->spinlock and introduce a new wait function to fix the
> race.

This fix seems a little complicated. It is really the same kind of issue as
commit cb79662bc2f83f7b3b60970ad88df43085f96514 ("ocfs2: o2dlm: fix a race
between purge and master query"), so we can follow that fix. How about the
following one (the diff at the end of this mail)?

Thanks,
Junxiao.

>
> Signed-off-by: Joseph Qi
> Reviewed-by: joyce.xue
> Cc:
> ---
>  fs/ocfs2/dlm/dlmcommon.h |  2 ++
>  fs/ocfs2/dlm/dlmmaster.c | 13 +++++++++----
>  fs/ocfs2/dlm/dlmthread.c | 23 +++++++++++++++++++++++
>  3 files changed, 34 insertions(+), 4 deletions(-)
>
> diff --git a/fs/ocfs2/dlm/dlmcommon.h b/fs/ocfs2/dlm/dlmcommon.h
> index e88ccf8..c6b76f4 100644
> --- a/fs/ocfs2/dlm/dlmcommon.h
> +++ b/fs/ocfs2/dlm/dlmcommon.h
> @@ -1014,6 +1014,8 @@ void dlm_move_lockres_to_recovery_list(struct dlm_ctxt *dlm,
>  
>  /* will exit holding res->spinlock, but may drop in function */
>  void __dlm_wait_on_lockres_flags(struct dlm_lock_resource *res, int flags);
> +void __dlm_wait_on_lockres_flags_new(struct dlm_ctxt *dlm,
> +		struct dlm_lock_resource *res, int flags);
>  
>  /* will exit holding res->spinlock, but may drop in function */
>  static inline void __dlm_wait_on_lockres(struct dlm_lock_resource *res)
> diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
> index a6944b2..9a5f45d 100644
> --- a/fs/ocfs2/dlm/dlmmaster.c
> +++ b/fs/ocfs2/dlm/dlmmaster.c
> @@ -755,13 +755,16 @@ lookup:
>  	spin_lock(&dlm->spinlock);
>  	tmpres = __dlm_lookup_lockres_full(dlm, lockid, namelen, hash);
>  	if (tmpres) {
> -		spin_unlock(&dlm->spinlock);
>  		spin_lock(&tmpres->spinlock);
>  		/* Wait on the thread that is mastering the resource */
>  		if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
> -			__dlm_wait_on_lockres(tmpres);
> +			__dlm_wait_on_lockres_flags_new(dlm, tmpres,
> +					(DLM_LOCK_RES_IN_PROGRESS|
> +					DLM_LOCK_RES_RECOVERING|
> +					DLM_LOCK_RES_MIGRATING));
>  			BUG_ON(tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN);
>  			spin_unlock(&tmpres->spinlock);
> +			spin_unlock(&dlm->spinlock);
>  			dlm_lockres_put(tmpres);
>  			tmpres = NULL;
>  			goto lookup;
> @@ -770,9 +773,10 @@
>  		/* Wait on the resource purge to complete before continuing */
>  		if (tmpres->state & DLM_LOCK_RES_DROPPING_REF) {
>  			BUG_ON(tmpres->owner == dlm->node_num);
> -			__dlm_wait_on_lockres_flags(tmpres,
> -					DLM_LOCK_RES_DROPPING_REF);
> +			__dlm_wait_on_lockres_flags_new(dlm, tmpres,
> +					DLM_LOCK_RES_DROPPING_REF);
>  			spin_unlock(&tmpres->spinlock);
> +			spin_unlock(&dlm->spinlock);
>  			dlm_lockres_put(tmpres);
>  			tmpres = NULL;
>  			goto lookup;
> @@ -782,6 +786,7 @@
>  		dlm_lockres_grab_inflight_ref(dlm, tmpres);
>  
>  		spin_unlock(&tmpres->spinlock);
> +		spin_unlock(&dlm->spinlock);
>  		if (res)
>  			dlm_lockres_put(res);
>  		res = tmpres;
> diff --git a/fs/ocfs2/dlm/dlmthread.c b/fs/ocfs2/dlm/dlmthread.c
> index 69aac6f..505730a 100644
> --- a/fs/ocfs2/dlm/dlmthread.c
> +++ b/fs/ocfs2/dlm/dlmthread.c
> @@ -77,6 +77,29 @@ repeat:
>  	__set_current_state(TASK_RUNNING);
>  }
>  
> +void __dlm_wait_on_lockres_flags_new(struct dlm_ctxt *dlm,
> +		struct dlm_lock_resource *res, int flags)
> +{
> +	DECLARE_WAITQUEUE(wait, current);
> +
> +	assert_spin_locked(&dlm->spinlock);
> +	assert_spin_locked(&res->spinlock);
> +
> +	add_wait_queue(&res->wq, &wait);
> +repeat:
> +	set_current_state(TASK_UNINTERRUPTIBLE);
> +	if (res->state & flags) {
> +		spin_unlock(&res->spinlock);
> +		spin_unlock(&dlm->spinlock);
> +		schedule();
> +		spin_lock(&dlm->spinlock);
> +		spin_lock(&res->spinlock);
> +		goto repeat;
> +	}
> +	remove_wait_queue(&res->wq, &wait);
> +	__set_current_state(TASK_RUNNING);
> +}
> +
>  int __dlm_lockres_has_locks(struct dlm_lock_resource *res)
>  {
>  	if (list_empty(&res->granted) &&

diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
index a6944b2..25314d2 100644
--- a/fs/ocfs2/dlm/dlmmaster.c
+++ b/fs/ocfs2/dlm/dlmmaster.c
@@ -757,6 +757,18 @@ lookup:
 	if (tmpres) {
 		spin_unlock(&dlm->spinlock);
 		spin_lock(&tmpres->spinlock);
+
+		/*
+		 * Right after dlm spinlock was released, dlm_thread could have
+		 * purged the lockres. Check if lockres got unhashed. If so
+		 * start over.
+		 */
+		if (hlist_unhashed(&tmpres->hash_node)) {
+			spin_unlock(&tmpres->spinlock);
+			dlm_lockres_put(tmpres);
+			goto lookup;
+		}
+
 		/* Wait on the thread that is mastering the resource */
 		if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) {
 			__dlm_wait_on_lockres(tmpres);
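
For readers less familiar with the o2dlm internals, below is a minimal userspace
sketch (illustration only, not kernel code) of the idiom the hlist_unhashed()
check above relies on: a lookup that drops the table-wide lock before taking the
per-object lock must, after taking the object lock, re-check that a concurrent
purge has not already removed the object, and retry if it has. The names
(struct resource, table_lock, lookup_resource, purge_thread) are made up and
only stand in for the lockres, dlm->spinlock, dlm_get_lock_resource() and
dlm_run_purge_list().

	/* Build with:  cc -pthread -o race_sketch race_sketch.c */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct resource {
		pthread_mutex_t lock;	/* plays the role of res->spinlock */
		bool hashed;		/* false once "purged" (unhashed)  */
	};

	static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;	/* dlm->spinlock */
	static struct resource the_res = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.hashed = true,
	};
	static struct resource *table_entry = &the_res;	/* one-slot "hash table" */

	/* Purge side: remove the resource from the table under both locks. */
	static void *purge_thread(void *unused)
	{
		pthread_mutex_lock(&table_lock);
		pthread_mutex_lock(&the_res.lock);
		the_res.hashed = false;		/* resource is no longer findable */
		table_entry = NULL;
		pthread_mutex_unlock(&the_res.lock);
		pthread_mutex_unlock(&table_lock);
		return NULL;
	}

	/*
	 * Lookup side: mirrors the shape of dlm_get_lock_resource(), which drops
	 * the table lock before taking the resource lock and therefore has to
	 * re-check that the resource is still hashed.
	 */
	static struct resource *lookup_resource(void)
	{
	retry:
		pthread_mutex_lock(&table_lock);
		struct resource *r = table_entry;
		pthread_mutex_unlock(&table_lock);	/* window where purge can run */
		if (!r)
			return NULL;			/* caller would allocate a fresh one */

		pthread_mutex_lock(&r->lock);
		if (!r->hashed) {			/* purged behind our back: start over */
			pthread_mutex_unlock(&r->lock);
			goto retry;
		}
		pthread_mutex_unlock(&r->lock);
		return r;
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, purge_thread, NULL);
		struct resource *r = lookup_resource();
		pthread_join(t, NULL);

		printf("lookup returned %s\n",
		       r ? "a live resource" : "NULL (resource was purged)");
		return 0;
	}

The r->hashed test here corresponds to the hlist_unhashed(&tmpres->hash_node)
check in the proposed diff: the purge path removes the lockres from the hash
table, so "still hashed" is exactly the liveness condition the lookup needs to
re-verify once it holds the resource's own lock.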