From patchwork Mon Jul 27 10:59:49 2015
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 6872001
From: Jeff Layton
To: trond.myklebust@primarydata.com
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH] nfs: hold state_lock when updating open stateid
Date: Mon, 27 Jul 2015 06:59:49 -0400
Message-Id: <1437994789-14133-1-git-send-email-jeff.layton@primarydata.com>

Currently, we check to see whether an open stateid needs updating, and then
update the stateid if so. The check and the update, however, are not atomic,
so it's easy to find an old seqid during the check, only to have a newer one
land before we get around to doing the update ourselves.

We could try to play games with atomic ops here, but the simple fix is to
ensure that we hold the per-stateid state_lock when updating an open stateid.
Signed-off-by: Jeff Layton
---
 fs/nfs/nfs4proc.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 780accb962dd..bc6a7b5d81aa 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1234,14 +1234,17 @@ static void nfs_clear_open_stateid_locked(struct nfs4_state *state,
 	if (stateid == NULL)
 		return;
 	/* Handle races with OPEN */
+	spin_lock(&state->state_lock);
 	if (!nfs4_stateid_match_other(stateid, &state->open_stateid) ||
 	    !nfs4_stateid_is_newer(stateid, &state->open_stateid)) {
 		nfs_resync_open_stateid_locked(state);
+		spin_unlock(&state->state_lock);
 		return;
 	}
 	if (test_bit(NFS_DELEGATED_STATE, &state->flags) == 0)
 		nfs4_stateid_copy(&state->stateid, stateid);
 	nfs4_stateid_copy(&state->open_stateid, stateid);
+	spin_unlock(&state->state_lock);
 }

 static void nfs_clear_open_stateid(struct nfs4_state *state, nfs4_stateid *stateid,
 		fmode_t fmode)
@@ -1265,11 +1268,13 @@ static void nfs_set_open_stateid_locked(struct nfs4_state *state, nfs4_stateid *
 	case FMODE_READ|FMODE_WRITE:
 		set_bit(NFS_O_RDWR_STATE, &state->flags);
 	}
-	if (!nfs_need_update_open_stateid(state, stateid))
-		return;
-	if (test_bit(NFS_DELEGATED_STATE, &state->flags) == 0)
-		nfs4_stateid_copy(&state->stateid, stateid);
-	nfs4_stateid_copy(&state->open_stateid, stateid);
+	spin_lock(&state->state_lock);
+	if (nfs_need_update_open_stateid(state, stateid)) {
+		if (test_bit(NFS_DELEGATED_STATE, &state->flags) == 0)
+			nfs4_stateid_copy(&state->stateid, stateid);
+		nfs4_stateid_copy(&state->open_stateid, stateid);
+	}
+	spin_unlock(&state->state_lock);
 }

 static void __update_open_stateid(struct nfs4_state *state, nfs4_stateid *open_stateid,
 		const nfs4_stateid *deleg_stateid, fmode_t fmode)