From patchwork Wed Dec 21 17:03:46 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 9483305
From: Jeff Layton
To: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-btrfs@vger.kernel.org,
    linux-xfs@vger.kernel.org
Subject: [RFC PATCH v1 29/30] fs: track whether the i_version has been
 queried with an i_state flag
Date: Wed, 21 Dec 2016 12:03:46 -0500
Message-Id: <1482339827-7882-30-git-send-email-jlayton@redhat.com>
In-Reply-To: <1482339827-7882-1-git-send-email-jlayton@redhat.com>
References: <1482339827-7882-1-git-send-email-jlayton@redhat.com>

NFSv4 has some pretty relaxed rules for the i_version counter, which we
can exploit for a performance gain. The rules basically boil down to:

1) it must increase steadily, so that a client can discard change
   attributes that are older than ones it has already seen.

2) the value must differ from the last one queried whenever there has
   been a data or metadata change in the interim.

The second rule is the important one: we don't need to bump the counter
when no one is querying it. On a write-intensive workload, this can add
up to the inode metadata being written far less often.

Add a new I_VERS_BUMP i_state flag that we can use to track when the
i_version has been queried. On a query, take the i_lock, fetch the
value, set the flag, then drop the lock and return the value. When we
would go to bump the counter, check the flag and only bump the counter
if it is set (or if we were asked to bump it unconditionally).
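
The query-then-bump rule above can be modeled in userspace. This is a
minimal sketch, not the kernel code: the struct and the model_* names
are hypothetical stand-ins, the bool field stands in for the
I_VERS_BUMP bit in i_state, and the i_lock locking is omitted.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical userspace stand-in for the inode fields involved. */
struct model_inode {
	uint64_t i_version;
	bool queried;	/* stands in for the I_VERS_BUMP i_state bit */
};

/* A query returns the counter and marks it as seen. */
static uint64_t model_get_iversion(struct model_inode *inode)
{
	inode->queried = true;
	return inode->i_version;
}

/*
 * A change bumps the counter only if someone queried it since the
 * last bump, or if the caller forces it. Returns true if it bumped.
 */
static bool model_inc_iversion(struct model_inode *inode, bool force)
{
	if (force || inode->queried) {
		inode->i_version++;
		inode->queried = false;
		return true;
	}
	return false;
}
```

Note that repeated changes with no intervening query collapse into a
single bump, which is where the metadata-writeback savings come from.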
Signed-off-by: Jeff Layton
---
 include/linux/fs.h | 66 +++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 46 insertions(+), 20 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 75323e7b6954..917557faa8e8 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1909,6 +1909,9 @@ static inline bool HAS_UNMAPPED_ID(struct inode *inode)
  * wb stat updates to grab mapping->tree_lock. See
  * inode_switch_wb_work_fn() for details.
  *
+ * I_VERS_BUMP		inode->i_version counter must be bumped on the next
+ *			change. See the inode_*_iversion functions.
+ *
  * Q: What is the difference between I_WILL_FREE and I_FREEING?
  */
 #define I_DIRTY_SYNC		(1 << 0)
@@ -1929,6 +1932,7 @@ static inline bool HAS_UNMAPPED_ID(struct inode *inode)
 #define __I_DIRTY_TIME_EXPIRED	12
 #define I_DIRTY_TIME_EXPIRED	(1 << __I_DIRTY_TIME_EXPIRED)
 #define I_WB_SWITCH		(1 << 13)
+#define I_VERS_BUMP		(1 << 14)
 
 #define I_DIRTY (I_DIRTY_SYNC | I_DIRTY_DATASYNC | I_DIRTY_PAGES)
 #define I_DIRTY_ALL (I_DIRTY | I_DIRTY_TIME)
@@ -1976,20 +1980,6 @@ inode_set_iversion(struct inode *inode, const u64 new)
 }
 
 /**
- * inode_inc_iversion_locked - increment i_version while protected
- * @inode: inode to be updated
- *
- * Increment the i_version field in the inode. This version is usable
- * when there is some other sort of lock in play that would prevent
- * concurrent accessors.
- */
-static inline void
-inode_inc_iversion_locked(struct inode *inode)
-{
-	inode->i_version++;
-}
-
-/**
  * inode_set_iversion_read - set i_version to a particular value and flag
  *			     set flag to indicate that it has been viewed
  * @inode: inode to set
@@ -2002,7 +1992,10 @@ inode_inc_iversion_locked(struct inode *inode)
 static inline void
 inode_set_iversion_read(struct inode *inode, const u64 new)
 {
+	spin_lock(&inode->i_lock);
 	inode_set_iversion(inode, new);
+	inode->i_state |= I_VERS_BUMP;
+	spin_unlock(&inode->i_lock);
 }
 
 /**
@@ -2011,14 +2004,36 @@ inode_set_iversion_read(struct inode *inode, const u64 new)
  *
  * Every time the inode is modified, the i_version field will be incremented.
  * The filesystem has to be mounted with MS_I_VERSION flag.
+ *
+ * Returns true if counter was bumped, and false if it wasn't necessary.
  */
 static inline bool
 inode_inc_iversion(struct inode *inode, bool force)
 {
+	bool ret = false;
+
 	spin_lock(&inode->i_lock);
-	inode_inc_iversion_locked(inode);
+	if (force || (inode->i_state & I_VERS_BUMP)) {
+		inode->i_version++;
+		inode->i_state &= ~I_VERS_BUMP;
+		ret = true;
+	}
 	spin_unlock(&inode->i_lock);
-	return true;
+	return ret;
+}
+
+/**
+ * inode_inc_iversion_locked - increment i_version while protected
+ * @inode: inode to be updated
+ *
+ * Increment the i_version field in the inode. This version is usable
+ * when there is some other sort of lock in play that would prevent
+ * concurrent increments (typically inode->i_rwsem for write).
+ */
+static inline void
+inode_inc_iversion_locked(struct inode *inode)
+{
+	inode_inc_iversion(inode, true);
 }
 
 /**
@@ -2043,9 +2058,15 @@ inode_get_iversion_raw(const struct inode *inode)
  * to store the returned i_version for later comparison.
  */
 static inline u64
-inode_get_iversion(const struct inode *inode)
+inode_get_iversion(struct inode *inode)
 {
-	return inode_get_iversion_raw(inode);
+	u64 ret;
+
+	spin_lock(&inode->i_lock);
+	inode->i_state |= I_VERS_BUMP;
+	ret = inode->i_version;
+	spin_unlock(&inode->i_lock);
+	return ret;
 }
 
 /**
@@ -2054,7 +2075,7 @@ inode_get_iversion(const struct inode *inode)
  * @old: old value to check against its i_version
  *
  * Compare an i_version counter with a previous one. Returns 0 if they are
- * the same or non-zero if they are different.
+ * the same, greater than zero if the inode's is "later" than the old value.
  */
 static inline s64
 inode_cmp_iversion(const struct inode *inode, const u64 old)
@@ -2072,7 +2093,12 @@ inode_cmp_iversion(const struct inode *inode, const u64 old)
 static inline bool
 inode_iversion_need_inc(struct inode *inode)
 {
-	return true;
+	bool ret;
+
+	spin_lock(&inode->i_lock);
+	ret = inode->i_state & I_VERS_BUMP;
+	spin_unlock(&inode->i_lock);
+	return ret;
 }
 
 enum file_time_flags {
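
The signed result of inode_cmp_iversion() is what lets a client discard
change attributes older than ones it has already seen (rule 1 above). A
minimal userspace sketch of that comparison semantics follows; the
model_* name is hypothetical and this is an assumption about the
intended semantics (negative/zero/positive ordering), not the kernel
helper itself, which operates on a struct inode.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the comparison semantics: negative if @old is newer, zero if
 * unchanged, positive if the counter has advanced past @old. Doing the
 * subtraction in unsigned arithmetic and casting to signed keeps the
 * ordering sensible even if the counter ever wraps.
 */
static int64_t model_cmp_iversion(uint64_t cur, uint64_t old)
{
	return (int64_t)(cur - old);
}
```

A caller would treat a positive result as "the inode changed since I
last looked" and refresh its cached attributes.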