From patchwork Thu Oct 6 16:49:24 2016
X-Patchwork-Submitter: Andrey Vagin
X-Patchwork-Id: 9365157
From: Andrei Vagin
To: Alexander Viro
Cc: "Eric W. Biederman", containers@lists.linux-foundation.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    Andrei Vagin
Subject: [PATCH v2] mount: dont execute propagate_umount() many times for
 same mounts
Date: Thu, 6 Oct 2016 09:49:24 -0700
Message-Id: <1475772564-25627-1-git-send-email-avagin@openvz.org>
X-Mailer: git-send-email 2.5.5
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

The reason for this optimization is that umount() can hold namespace_sem
for a long time. This semaphore is global, so it affects all users.
Recently Eric W. Biederman added a per mount namespace limit on the
number of mounts. The default number of mounts allowed per mount
namespace is 100,000.
Currently this value still allows constructing a tree which requires
hours to be unmounted. In the worst case the current complexity of
umount_tree() is O(n^3):

* Enumerate all mounts in the target tree (propagate_umount)
* Enumerate mounts to find where these changes have to be propagated
  (mark_umount_candidates)
* Enumerate mounts to find the required mount by parent and dentry
  (__lookup_mnt_last)

The worst case is when all mounts from the tree live in the same shared
group. In this case we have to enumerate all mounts on each step.

Here we can optimize the second step. We don't need to do it for mounts
which we have already seen when we did this step for previous mounts.
This reduces the complexity of umount_tree() to O(n^2).

Here is a script to generate such a mount tree:

$ cat run.sh
mount -t tmpfs xxx /mnt
mount --make-shared /mnt
for i in `seq $1`; do
	mount --bind /mnt `mktemp -d /mnt/test.XXXXXX`
done
time umount -l /mnt
$ for i in `seq 10 16`; do echo $i; unshare -Urm bash ./run.sh $i; done

Here are performance measurements with and without this patch:

mounts | after | before (sec)
-----------------------------
  1024 | 0.024 |   0.084
  2048 | 0.041 |   0.39
  4096 | 0.059 |   3.198
  8192 | 0.227 |  50.794
 16384 | 1.015 | 810

This patch is a second step to fix CVE-2016-6213.

v2: fix mark_umount_candidates() to not change the existing behaviour.
Signed-off-by: Andrei Vagin
---
 fs/mount.h     |  2 ++
 fs/namespace.c | 19 ++++++++++++++++---
 fs/pnode.c     | 25 ++++++++++++++++++++++---
 3 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/fs/mount.h b/fs/mount.h
index 14db05d..b5631bd 100644
--- a/fs/mount.h
+++ b/fs/mount.h
@@ -87,6 +87,8 @@ static inline int is_mounted(struct vfsmount *mnt)
 
 extern struct mount *__lookup_mnt(struct vfsmount *, struct dentry *);
 extern struct mount *__lookup_mnt_last(struct vfsmount *, struct dentry *);
+extern struct mount *__lookup_mnt_cont(struct mount *,
+				struct vfsmount *, struct dentry *);
 
 extern int __legitimize_mnt(struct vfsmount *, unsigned);
 extern bool legitimize_mnt(struct vfsmount *, unsigned);
diff --git a/fs/namespace.c b/fs/namespace.c
index dcd9afe..0af8d01 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -649,9 +649,7 @@ struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
 		goto out;
 	if (!(p->mnt.mnt_flags & MNT_UMOUNT))
 		res = p;
-	hlist_for_each_entry_continue(p, mnt_hash) {
-		if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
-			break;
+	for (; p != NULL; p = __lookup_mnt_cont(p, mnt, dentry)) {
 		if (!(p->mnt.mnt_flags & MNT_UMOUNT))
 			res = p;
 	}
@@ -659,6 +657,21 @@ out:
 	return res;
 }
 
+struct mount *__lookup_mnt_cont(struct mount *p,
+		struct vfsmount *mnt, struct dentry *dentry)
+{
+	struct hlist_node *node = p->mnt_hash.next;
+
+	if (!node)
+		return NULL;
+
+	p = hlist_entry(node, struct mount, mnt_hash);
+	if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
+		return NULL;
+
+	return p;
+}
+
 /*
  * lookup_mnt - Return the first child mount mounted at path
  *
diff --git a/fs/pnode.c b/fs/pnode.c
index 9989970..8b3c1be 100644
--- a/fs/pnode.c
+++ b/fs/pnode.c
@@ -399,10 +399,24 @@ static void mark_umount_candidates(struct mount *mnt)
 
 	BUG_ON(parent == mnt);
 
+	if (IS_MNT_MARKED(mnt))
+		return;
+
 	for (m = propagation_next(parent, parent); m;
 			m = propagation_next(m, parent)) {
-		struct mount *child = __lookup_mnt_last(&m->mnt,
-						mnt->mnt_mountpoint);
+		struct mount *child = NULL, *p;
+
+		for (p = __lookup_mnt(&m->mnt, mnt->mnt_mountpoint); p;
+		     p = __lookup_mnt_cont(p, &m->mnt, mnt->mnt_mountpoint)) {
+			/*
+			 * Mark umounted mounts to not call
+			 * __propagate_umount for them again.
+			 */
+			SET_MNT_MARK(p);
+			if (!(p->mnt.mnt_flags & MNT_UMOUNT))
+				child = p;
+		}
+
 		if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
 			SET_MNT_MARK(child);
 		}
@@ -420,6 +434,9 @@ static void __propagate_umount(struct mount *mnt)
 
 	BUG_ON(parent == mnt);
 
+	if (IS_MNT_MARKED(mnt))
+		return;
+
 	for (m = propagation_next(parent, parent); m;
 			m = propagation_next(m, parent)) {
 
@@ -431,6 +448,8 @@ static void __propagate_umount(struct mount *mnt)
 		 */
 		if (!child || !IS_MNT_MARKED(child))
 			continue;
+		if (child->mnt.mnt_flags & MNT_UMOUNT)
+			continue;
 		CLEAR_MNT_MARK(child);
 		if (list_empty(&child->mnt_mounts)) {
 			list_del_init(&child->mnt_child);
@@ -454,7 +473,7 @@ int propagate_umount(struct list_head *list)
 	list_for_each_entry_reverse(mnt, list, mnt_list)
 		mark_umount_candidates(mnt);
 
-	list_for_each_entry(mnt, list, mnt_list)
+	list_for_each_entry_reverse(mnt, list, mnt_list)
 		__propagate_umount(mnt);
 	return 0;
 }