From patchwork Wed Aug 5 03:14:25 2015
From: ebiederm@xmission.com (Eric W. Biederman)
To: Linux Containers
Cc: Al Viro, Andy Lutomirski, "Serge E. Hallyn", Richard Weinberger,
	Andrey Vagin, Jann Horn, Willy Tarreau, Omar Sandoval,
	Miklos Szeredi, Linus Torvalds, "J. Bruce Fields"
References: <871tncuaf6.fsf@x220.int.ebiederm.org>
	<87mw5xq7lt.fsf@x220.int.ebiederm.org>
	<87a8yqou41.fsf_-_@x220.int.ebiederm.org>
	<874moq9oyb.fsf_-_@x220.int.ebiederm.org>
	<871tfkawu9.fsf_-_@x220.int.ebiederm.org>
Date: Tue, 04 Aug 2015 22:14:25 -0500
In-Reply-To: <871tfkawu9.fsf_-_@x220.int.ebiederm.org> (Eric W. Biederman's
	message of "Mon, 03 Aug 2015 16:25:18 -0500")
Message-ID: <87y4hqwhny.fsf_-_@x220.int.ebiederm.org>
Subject: [PATCH review 7/6] vfs: Make mnt_escape_count 64bit

The primary way that mnt_escape_count differs from a seqcount is that
its value is cached across operations that sleep.

In empirical testing mnt_escape_count can be made to roll over in 64
minutes on a 2.5GHz Intel Core i5 processor on ramfs.  Meanwhile a
single pathname lookup on an otherwise idle system has been measured
at 2 minutes 9 seconds.  Those numbers are entirely too close for
comfort, especially given that nfs lookups can take indefinitely long.

Extend mnt_escape_count to 64bit to increase the expected time to
rollover from 1 hour to 489,957 years.  Even if rename becomes
efficient enough to rename 2^31 entries in 1 second (instead of the
1 hour that I measured) it will still take 136 years before the
escape count rolls over, which is essentially never.

On 32bit the low 32bit word of the 64bit count is treated as a
sequence count: if you read the low 32bit value, then the high 32bit
value, and then the low 32bit value again, and the low 32bit value is
unchanged, the high 32bit value is guaranteed to be stable whenever
the low 32bit value does not equal -1UL.  Thankfully, in the unlikely
event that the low 32bit value is -1UL, the code does not care about
the high 32bit value, so the values do not need to be reread in that
case.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
---
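Not part of the patch, just a back-of-the-envelope check of the
rollover estimate above, assuming the measured worst case of one full
32-bit wrap (2^32 increments) per hour is sustained indefinitely:

    2^64 increments / (2^32 increments per hour) = 2^32 hours
    2^32 hours / (8766 hours per year)          ~= 489,957 years

which matches the figure quoted in the changelog.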
Biederman" --- fs/mount.h | 32 ++++++++++++++++++++++++++++---- fs/namei.c | 6 +++--- fs/namespace.c | 13 ++++++++++++- 3 files changed, 43 insertions(+), 8 deletions(-) diff --git a/fs/mount.h b/fs/mount.h index d32d074cc0d4..cd89e786efa7 100644 --- a/fs/mount.h +++ b/fs/mount.h @@ -38,7 +38,10 @@ struct mount { struct mount *mnt_parent; struct dentry *mnt_mountpoint; struct vfsmount mnt; - unsigned mnt_escape_count; + unsigned long mnt_escape_count; +#if BITS_PER_LONG < 64 + unsigned long mnt_escape_count_high; +#endif union { struct rcu_head mnt_rcu; struct llist_node mnt_llist; @@ -111,15 +114,36 @@ static inline void detach_mounts(struct dentry *dentry) extern const struct dentry *lock_namespace_rename(struct dentry *, struct dentry *, bool); extern void unlock_namespace_rename(const struct dentry *, struct dentry *, struct dentry *, bool); -static inline unsigned read_mnt_escape_count(struct vfsmount *vfsmount) +static inline u64 read_mnt_escape_count(struct vfsmount *vfsmount) { struct mount *mnt = real_mount(vfsmount); - unsigned ret = READ_ONCE(mnt->mnt_escape_count); +#if BITS_PER_LONG >= 64 + u64 ret = READ_ONCE(mnt->mnt_escape_count); +#else + u64 ret; + unsigned long low0, low, high; + /* In the unlikely event that low0 == low and low == -1 + * mnt_escape_count_high may or not be incremented yet. In + * that event the odd value of low will not match the anything + * cached, will signal that the validity of is_subdir is in + * flux and will not be cached. Therefore when low == -1 the + * value of high does not matter. + */ + low0 = READ_ONCE(mnt->mnt_escape_count); + do { + low = low0; + smp_rmb(); + high = READ_ONCE(mnt->mnt_escape_count_high); + smp_rmb(); + low0 = READ_ONCE(mnt->mnt_escape_count); + } while (low != low0); + ret = (((u64)high) << 32) | low; +#endif smp_rmb(); return ret; } -static inline void cache_mnt_escape_count(unsigned *cache, unsigned escape_count) +static inline void cache_mnt_escape_count(u64 *cache, u64 escape_count) { if (likely(escape_count & 1) == 0) *cache = escape_count; diff --git a/fs/namei.c b/fs/namei.c index 79a5dca073f5..ef1463c0b96a 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -514,7 +514,7 @@ struct nameidata { struct nameidata *saved; unsigned root_seq; int dfd; - unsigned mnt_escape_count; + u64 mnt_escape_count; }; static void set_nameidata(struct nameidata *p, int dfd, struct filename *name) @@ -571,7 +571,7 @@ static int __nd_alloc_stack(struct nameidata *nd) static bool path_connected(struct nameidata *nd) { struct vfsmount *mnt = nd->path.mnt; - unsigned escape_count = read_mnt_escape_count(mnt); + u64 escape_count = read_mnt_escape_count(mnt); if (likely(escape_count == nd->mnt_escape_count)) return true; @@ -3041,7 +3041,7 @@ static int do_last(struct nameidata *nd, unsigned seq; struct inode *inode; struct path save_parent = { .dentry = NULL, .mnt = NULL }; - unsigned save_parent_escape_count = 0; + u64 save_parent_escape_count = 0; struct path path; bool retried = false; int error; diff --git a/fs/namespace.c b/fs/namespace.c index 9faec24f3f23..98596c4b992a 100644 --- a/fs/namespace.c +++ b/fs/namespace.c @@ -1692,10 +1692,21 @@ static void lock_escaped_mounts_begin(struct dentry *root) */ hlist_for_each_entry(mnt, &mr->r_list, mnt_mr_list) { /* Don't return to 0 if the couunt wraps */ - if (unlikely(mnt->mnt_escape_count == (0U - 2))) +#if BITS_PER_LONG >= 64 + if (unlikely(mnt->mnt_escape_count == (0UL - 2))) mnt->mnt_escape_count = 1; else mnt->mnt_escape_count++; +#else + if (unlikely(mnt->mnt_escape_count == (0UL - 2))) { 
+			WRITE_ONCE(mnt->mnt_escape_count, (0UL - 1));
+			smp_wmb();
+			mnt->mnt_escape_count_high++;
+			smp_wmb();
+			WRITE_ONCE(mnt->mnt_escape_count, 1);
+		} else
+			mnt->mnt_escape_count++;
+#endif
 		smp_wmb();
 	}
 }
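Not part of the patch: below is a minimal, self-contained user-space
sketch of the same split 64-bit counter protocol, for anyone who wants
to experiment with it outside the kernel.  C11 atomics and explicit
fences stand in for READ_ONCE/WRITE_ONCE, smp_rmb() and smp_wmb(), and
the names (struct escape_counter, bump_escape_count, read_escape_count)
are invented for illustration only.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct escape_counter {
	_Atomic uint32_t low;	/* low 32 bits, doubles as a sequence word */
	_Atomic uint32_t high;	/* high 32 bits */
};

/* Writer side: models the bump done in lock_escaped_mounts_begin()
 * above (the trailing smp_wmb() after each bump is omitted here). */
static void bump_escape_count(struct escape_counter *c)
{
	uint32_t low = atomic_load_explicit(&c->low, memory_order_relaxed);

	if (low == UINT32_MAX - 1) {
		/* Park low at -1 (odd, never cached) while high changes. */
		atomic_store_explicit(&c->low, UINT32_MAX, memory_order_relaxed);
		atomic_thread_fence(memory_order_release);	/* smp_wmb() */
		atomic_store_explicit(&c->high,
			atomic_load_explicit(&c->high, memory_order_relaxed) + 1,
			memory_order_relaxed);
		atomic_thread_fence(memory_order_release);	/* smp_wmb() */
		atomic_store_explicit(&c->low, 1, memory_order_relaxed);
	} else {
		atomic_store_explicit(&c->low, low + 1, memory_order_relaxed);
	}
}

/* Reader side: models read_mnt_escape_count() above.  high is only
 * trusted when low was stable across the two reads; if low == -1 the
 * caller never caches the result, so high does not matter. */
static uint64_t read_escape_count(const struct escape_counter *c)
{
	uint32_t low0, low, high;

	low0 = atomic_load_explicit(&c->low, memory_order_relaxed);
	do {
		low = low0;
		atomic_thread_fence(memory_order_acquire);	/* smp_rmb() */
		high = atomic_load_explicit(&c->high, memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);	/* smp_rmb() */
		low0 = atomic_load_explicit(&c->low, memory_order_relaxed);
	} while (low != low0);

	return ((uint64_t)high << 32) | low;
}

int main(void)
{
	struct escape_counter c = { UINT32_MAX - 1, 0 };

	bump_escape_count(&c);	/* wraps: high becomes 1, low restarts at 1 */
	printf("escape count = %llu\n",
	       (unsigned long long)read_escape_count(&c));
	return 0;
}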