From patchwork Thu May  3 18:57:55 2018
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 10378969
From: Jeff Layton <jlayton@kernel.org>
To: devel@lists.nfs-ganesha.org, ceph-devel@vger.kernel.org
Subject: [nfs-ganesha RFC PATCH v2 05/13] SAL: add new try_lift_grace recovery operation
Date: Thu, 3 May 2018 14:57:55 -0400
Message-Id: <20180503185803.25417-6-jlayton@kernel.org>
In-Reply-To: <20180503185803.25417-1-jlayton@kernel.org>
References: <20180503185803.25417-1-jlayton@kernel.org>

When running in a clustered environment, we can't just lift the grace
period once the local machine is ready. We must instead wait until no
other cluster nodes still need it.

Add a new try_lift_grace op, and use that to do extra vetting before
allowing the local grace period to be lifted. If it returns true, then
we can go ahead and lift the grace period.
Change-Id: Ic8060a083ac9d8581d78357ab7f1351793625264
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 src/SAL/nfs4_recovery.c     | 16 +++++++++++-----
 src/include/sal_functions.h |  2 ++
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/src/SAL/nfs4_recovery.c b/src/SAL/nfs4_recovery.c
index f88d1d187f45..120819e621c9 100644
--- a/src/SAL/nfs4_recovery.c
+++ b/src/SAL/nfs4_recovery.c
@@ -204,6 +204,7 @@ void nfs_try_lift_grace(void)
 	int32_t rc_count = 0;
 	time_t current = atomic_fetch_time_t(&current_grace);
 
+	/* Already lifted? Just return */
 	if (!current)
 		return;
 
@@ -221,13 +222,18 @@ void nfs_try_lift_grace(void)
 		 time(NULL));
 
 	/*
-	 * Can we lift the grace period now? If so, take the grace_mutex and
-	 * try to do it.
+	 * Can we lift the grace period now? Clustered backends may need
+	 * extra checks before they can do so. If that is the case, then take
+	 * the grace_mutex and try to do it. If the backend does not implement
+	 * a try_lift_grace operation, then we assume it's always ok.
 	 */
 	if (!in_grace) {
-		PTHREAD_MUTEX_lock(&grace_mutex);
-		nfs_lift_grace_locked(current);
-		PTHREAD_MUTEX_unlock(&grace_mutex);
+		if (!recovery_backend->try_lift_grace ||
+		    recovery_backend->try_lift_grace()) {
+			PTHREAD_MUTEX_lock(&grace_mutex);
+			nfs_lift_grace_locked(current);
+			PTHREAD_MUTEX_unlock(&grace_mutex);
+		}
 	}
 }
 
diff --git a/src/include/sal_functions.h b/src/include/sal_functions.h
index 7563b021af22..7e30e51eeabf 100644
--- a/src/include/sal_functions.h
+++ b/src/include/sal_functions.h
@@ -975,6 +975,7 @@ void blocked_lock_polling(struct fridgethr_context *ctx);
 
 void nfs_start_grace(nfs_grace_start_t *gsp);
 bool nfs_in_grace(void);
+bool simple_try_lift_grace(void);
 void nfs_try_lift_grace(void);
 void nfs4_add_clid(nfs_client_id_t *);
 void nfs4_rm_clid(nfs_client_id_t *);
@@ -1022,6 +1023,7 @@ struct nfs4_recovery_backend {
 	void (*add_clid)(nfs_client_id_t *);
 	void (*rm_clid)(nfs_client_id_t *);
 	void (*add_revoke_fh)(nfs_client_id_t *, nfs_fh4 *);
+	bool (*try_lift_grace)(void);
 };
 
 void fs_backend_init(struct nfs4_recovery_backend **);
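
The nfs_try_lift_grace() change makes the new op strictly optional: a NULL
pointer means "always ok to lift". To make the contract concrete, here is a
minimal sketch (not part of this patch) of the sort of hook a clustered
backend could supply. The helper cluster_nodes_needing_grace() is
hypothetical, standing in for whatever query of shared recovery state the
backend actually performs:

#include <stdbool.h>

/*
 * Hypothetical helper: returns how many cluster nodes still need the
 * grace period, according to some shared recovery database.
 */
extern unsigned int cluster_nodes_needing_grace(void);

static bool cluster_try_lift_grace(void)
{
	/* Allow the lift only once no node in the cluster needs grace */
	return cluster_nodes_needing_grace() == 0;
}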
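
A backend would then opt in by setting the new pointer in its op vector,
roughly like so (the backend itself is hypothetical; fields omitted from the
designated initializer stay NULL, which preserves the old behavior of
lifting as soon as the local node is ready):

static struct nfs4_recovery_backend cluster_recovery_backend = {
	/* ... add_clid, rm_clid, add_revoke_fh, etc. as before ... */
	.try_lift_grace = cluster_try_lift_grace,
};

The simple_try_lift_grace() declaration added to sal_functions.h suggests a
shared trivial implementation for backends that want the hook without a
cross-node check; its body is not part of this patch.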