
[v2] mount: don't execute propagate_umount() many times for same mounts

Message ID 1475772564-25627-1-git-send-email-avagin@openvz.org (mailing list archive)
State New, archived

Commit Message

Andrey Vagin Oct. 6, 2016, 4:49 p.m. UTC
The reason for this optimization is that umount() can hold namespace_sem
for a long time; this semaphore is global, so it affects all users.
Recently Eric W. Biederman added a per mount namespace limit on the
number of mounts. The default limit is 100,000 mounts per mount
namespace. This limit is still large enough to construct a tree which
takes hours to unmount.

In the worst case the current complexity of umount_tree() is O(n^3):
* Enumerate all mounts in the target tree (propagate_umount)
* Enumerate mounts to find where these changes have to
  be propagated (mark_umount_candidates)
* Enumerate mounts to find the required mount by parent and dentry
  (__lookup_mnt_last)

The worst case is when all mounts from the tree live in the same shared
group. In this case we have to enumerate all mounts on each step.
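
Schematically, the three steps nest like this (heavily simplified;
locking and the rest of the logic are omitted, see fs/pnode.c and
fs/namespace.c for the real code):

int propagate_umount(struct list_head *list)
{
	struct mount *mnt;

	/* 1) for every mount in the target tree ... */
	list_for_each_entry_reverse(mnt, list, mnt_list)
		mark_umount_candidates(mnt);
	/* ... */
	return 0;
}

static void mark_umount_candidates(struct mount *mnt)
{
	struct mount *parent = mnt->mnt_parent;
	struct mount *m;

	/* 2) ... walk every mount the umount propagates to ... */
	for (m = propagation_next(parent, parent); m;
			m = propagation_next(m, parent)) {
		/* 3) ... and scan a hash chain for the child mounted there */
		struct mount *child = __lookup_mnt_last(&m->mnt,
						mnt->mnt_mountpoint);
		/* ... */
	}
}

When all n mounts belong to one shared group, step 1 runs n times, step 2
visits on the order of n mounts for each of them, and step 3 walks a hash
chain whose length also grows with n.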

Here we can optimize the second step: we don't need to repeat it for
mounts that have already been visited while handling previous mounts in
the list. This reduces the complexity of umount_tree() to O(n^2).
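
In a stripped-down form the idea is (the real change is in the patch
below; IS_MNT_MARKED()/SET_MNT_MARK() already exist in fs/pnode.h, and
__propagate_umount() gets the same early return):

static void mark_umount_candidates(struct mount *mnt)
{
	/* already visited while handling a previous mount from the
	 * list -- don't walk its propagation targets again */
	if (IS_MNT_MARKED(mnt))
		return;

	/* ... and below, SET_MNT_MARK() every mount found at this
	 * mountpoint, so that the walk is skipped for it later */
}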

Here is a script to generate such a mount tree:
$ cat run.sh
mount -t tmpfs xxx /mnt
mount --make-shared /mnt
for i in `seq $1`; do
	mount --bind /mnt `mktemp -d /mnt/test.XXXXXX`
done
time umount -l /mnt
$ for i in `seq 10 16`; do echo $i; unshare -Urm bash ./run.sh $i; done

Here are performance measurements with and without this patch:

mounts | after (sec) | before (sec)
-------+-------------+-------------
1024   | 0.024       | 0.084
2048   | 0.041       | 0.39
4096   | 0.059       | 3.198
8192   | 0.227       | 50.794
16384  | 1.015       | 810

This patch is a second step to fix CVE-2016-6213.

v2: fix mark_umount_candidates() to not change the existing behaviour.
Signed-off-by: Andrei Vagin <avagin@openvz.org>
---
 fs/mount.h     |  2 ++
 fs/namespace.c | 19 ++++++++++++++++---
 fs/pnode.c     | 25 ++++++++++++++++++++++---
 3 files changed, 40 insertions(+), 6 deletions(-)

Comments

Eric W. Biederman Oct. 6, 2016, 7:46 p.m. UTC | #1
Andrei Vagin <avagin@openvz.org> writes:

> The reason for this optimization is that umount() can hold namespace_sem
> for a long time; this semaphore is global, so it affects all users.
> Recently Eric W. Biederman added a per mount namespace limit on the
> number of mounts. The default limit is 100,000 mounts per mount
> namespace. This limit is still large enough to construct a tree which
> takes hours to unmount.

I am going to take a hard look at this as this problem sounds very
unfortunate.  My memory of going through this code before strongly
suggests that changing the last list_for_each_entry to
list_for_each_entry_reverse is going to impact the correctness of this
change.

The order of traversal is important if there are several things mounted
one on the other that are all being unmounted.

Now perhaps your other changes have addressed that but I haven't looked
closely enough to see that yet.


> @@ -454,7 +473,7 @@ int propagate_umount(struct list_head *list)
>  	list_for_each_entry_reverse(mnt, list, mnt_list)
>  		mark_umount_candidates(mnt);
>  
> -	list_for_each_entry(mnt, list, mnt_list)
> +	list_for_each_entry_reverse(mnt, list, mnt_list)
>  		__propagate_umount(mnt);
>  	return 0;
>  }

Eric
Andrey Vagin Oct. 6, 2016, 11:06 p.m. UTC | #2
On Thu, Oct 06, 2016 at 02:46:30PM -0500, Eric W. Biederman wrote:
> Andrei Vagin <avagin@openvz.org> writes:
> 
> > The reason for this optimization is that umount() can hold namespace_sem
> > for a long time; this semaphore is global, so it affects all users.
> > Recently Eric W. Biederman added a per mount namespace limit on the
> > number of mounts. The default limit is 100,000 mounts per mount
> > namespace. This limit is still large enough to construct a tree which
> > takes hours to unmount.
> 
> I am going to take a hard look at this as this problem sounds very
> unfortunate.  My memory of going through this code before strongly
> suggests that changing the last list_for_each_entry to
> list_for_each_entry_reverse is going to impact the correctness of this
> change.

I have read this code again and you are right, list_for_each_entry can't
be changed to list_for_each_entry_reverse here.

I tested these changes more carefully and found one more issue, so I am
going to send a new patch and would like to get your comments on it.

Thank you for your time.


> 
> The order of traversal is important if there are several things mounted
> one on the other that are all being unmounted.
> 
> Now perhaps your other changes have addressed that but I haven't looked
> closely enough to see that yet.
> 
> 
> > @@ -454,7 +473,7 @@ int propagate_umount(struct list_head *list)
> >  	list_for_each_entry_reverse(mnt, list, mnt_list)
> >  		mark_umount_candidates(mnt);
> >  
> > -	list_for_each_entry(mnt, list, mnt_list)
> > +	list_for_each_entry_reverse(mnt, list, mnt_list)
> >  		__propagate_umount(mnt);
> >  	return 0;
> >  }
> 
> Eric
Eric W. Biederman Oct. 7, 2016, 4:45 a.m. UTC | #3
Andrei Vagin <avagin@virtuozzo.com> writes:

> On Thu, Oct 06, 2016 at 02:46:30PM -0500, Eric W. Biederman wrote:
>> Andrei Vagin <avagin@openvz.org> writes:
>> 
>> > The reason for this optimization is that umount() can hold namespace_sem
>> > for a long time; this semaphore is global, so it affects all users.
>> > Recently Eric W. Biederman added a per mount namespace limit on the
>> > number of mounts. The default limit is 100,000 mounts per mount
>> > namespace. This limit is still large enough to construct a tree which
>> > takes hours to unmount.
>> 
>> I am going to take a hard look at this as this problem sounds very
>> unfortunate.  My memory of going through this code before strongly
>> suggests that changing the last list_for_each_entry to
>> list_for_each_entry_reverse is going to impact the correctness of this
>> change.
>
> I have read this code again and you are right, list_for_each_entry can't
> be changed to list_for_each_entry_reverse here.
>
> I tested these changes more carefully and found one more issue, so I am
> going to send a new patch and would like to get your comments on it.
>
> Thank you for your time.

No problem.

A quick question.  You have introduced lookup_mnt_cont.  Is that a core
part of your fix or do you truly have problematic long hash chains?

Simply increasing the hash table size should fix problems with long hash
chains (and there are other solutions like rhashtable that may be more
appropriate than pre-allocating a large hash table).

If it is not long hash chains, introducing lookup_mnt_cont in your patch
is a distraction from the core of what is going on.

Perhaps I am blind, but if the hash chains are not long I don't see how
mount propagation could be more than quadratic in the worst case, as
there is only a loop within a loop.  Or is the tree walking in
propagation_next that bad?

Eric
Andrey Vagin Oct. 10, 2016, 8:42 p.m. UTC | #4
On Thu, Oct 06, 2016 at 11:45:48PM -0500, Eric W. Biederman wrote:
> Andrei Vagin <avagin@virtuozzo.com> writes:
> 
> > On Thu, Oct 06, 2016 at 02:46:30PM -0500, Eric W. Biederman wrote:
> >> Andrei Vagin <avagin@openvz.org> writes:
> >> 
> >> > The reason for this optimization is that umount() can hold namespace_sem
> >> > for a long time; this semaphore is global, so it affects all users.
> >> > Recently Eric W. Biederman added a per mount namespace limit on the
> >> > number of mounts. The default limit is 100,000 mounts per mount
> >> > namespace. This limit is still large enough to construct a tree which
> >> > takes hours to unmount.
> >> 
> >> I am going to take a hard look at this as this problem sounds very
> >> unfortunate.  My memory of going through this code before strongly
> >> suggests that changing the last list_for_each_entry to
> >> list_for_each_entry_reverse is going to impact the correctness of this
> >> change.
> >
> > I have read this code again and you are right, list_for_each_entry can't
> > be changed to list_for_each_entry_reverse here.
> >
> > I tested these changes more carefully and found one more issue, so I am
> > going to send a new patch and would like to get your comments on it.
> >
> > Thank you for your time.
> 
> No problem.
> 
> A quick question.  You have introduced lookup_mnt_cont.  Is that a core
> part of your fix or do you truly have problematic long hash chains?

Actually, I need a slightly modified copy of __lookup_mnt_last() to set
the mark for mounts with the MNT_UMOUNT flag.

> 
> Simply increasing the hash table size should fix problems with long hash
> chains (and there are other solutions like rhashtable that may be more
> appropriate than pre-allocating a large hash table).
> 
> If it is not long hash chains, introducing lookup_mnt_cont in your patch
> is a distraction from the core of what is going on.
> 
> Perhaps I am blind, but if the hash chains are not long I don't see how
> mount propagation could be more than quadratic in the worst case, as
> there is only a loop within a loop.  Or is the tree walking in
> propagation_next that bad?

I don't think that the hash chains are long here.

The complexity of __lookup_mnt() is O(n). In our case the hash table
size (4096) is smaller than the number of elements. If we increase the
hash size, it will be faster, but the complexity in terms of O() will
be the same, will it not?
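
For reference, the lookup is a linear scan of one hash chain (roughly,
from fs/namespace.c):

struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
{
	struct hlist_head *head = m_hash(mnt, dentry);
	struct mount *p;

	/* walks the whole chain in the worst case */
	hlist_for_each_entry_rcu(p, head, mnt_hash)
		if (&p->mnt_parent->mnt == mnt && p->mnt_mountpoint == dentry)
			return p;
	return NULL;
}

So every lookup costs O(chain length), and with more mounts than hash
buckets the chains grow with the number of mounts.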

Here are all nested steps where we enumerate mounts:
    * Enumerate all mounts in the target tree (propagate_umount)
    * Enumerate mounts to find where these changes have to
      be propagated (mark_umount_candidates)
    * Enumerate mounts to find the required mount by parent and dentry
      (__lookup_mnt_last)

And my measurements show that it is worse than O(n^2): each doubling of
the number of mounts increases the time far more than the 4x that
quadratic behaviour would give:

    mounts | time (sec)
    -------+-----------
    1024   | 0.08
    2048   | 0.4
    4096   | 3.2
    8192   | 50.8
    16384  | 810

> 
> Eric

Patch

diff --git a/fs/mount.h b/fs/mount.h
index 14db05d..b5631bd 100644
--- a/fs/mount.h
+++ b/fs/mount.h
@@ -87,6 +87,8 @@  static inline int is_mounted(struct vfsmount *mnt)
 
 extern struct mount *__lookup_mnt(struct vfsmount *, struct dentry *);
 extern struct mount *__lookup_mnt_last(struct vfsmount *, struct dentry *);
+extern struct mount *__lookup_mnt_cont(struct mount *,
+					struct vfsmount *, struct dentry *);
 
 extern int __legitimize_mnt(struct vfsmount *, unsigned);
 extern bool legitimize_mnt(struct vfsmount *, unsigned);
diff --git a/fs/namespace.c b/fs/namespace.c
index dcd9afe..0af8d01 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -649,9 +649,7 @@  struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
 		goto out;
 	if (!(p->mnt.mnt_flags & MNT_UMOUNT))
 		res = p;
-	hlist_for_each_entry_continue(p, mnt_hash) {
-		if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
-			break;
+	for (; p != NULL; p = __lookup_mnt_cont(p, mnt, dentry)) {
 		if (!(p->mnt.mnt_flags & MNT_UMOUNT))
 			res = p;
 	}
@@ -659,6 +657,21 @@  out:
 	return res;
 }
 
+struct mount *__lookup_mnt_cont(struct mount *p,
+				struct vfsmount *mnt, struct dentry *dentry)
+{
+	struct hlist_node *node = p->mnt_hash.next;
+
+	if (!node)
+		return NULL;
+
+	p = hlist_entry(node, struct mount, mnt_hash);
+	if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
+		return NULL;
+
+	return p;
+}
+
 /*
  * lookup_mnt - Return the first child mount mounted at path
  *
diff --git a/fs/pnode.c b/fs/pnode.c
index 9989970..8b3c1be 100644
--- a/fs/pnode.c
+++ b/fs/pnode.c
@@ -399,10 +399,24 @@  static void mark_umount_candidates(struct mount *mnt)
 
 	BUG_ON(parent == mnt);
 
+	if (IS_MNT_MARKED(mnt))
+		return;
+
 	for (m = propagation_next(parent, parent); m;
 			m = propagation_next(m, parent)) {
-		struct mount *child = __lookup_mnt_last(&m->mnt,
-						mnt->mnt_mountpoint);
+		struct mount *child = NULL, *p;
+
+		for (p = __lookup_mnt(&m->mnt, mnt->mnt_mountpoint); p;
+		     p = __lookup_mnt_cont(p, &m->mnt, mnt->mnt_mountpoint)) {
+			/*
+			 * Mark umounted mounts to not call
+			 * __propagate_umount for them again.
+			 */
+			SET_MNT_MARK(p);
+			if (!(p->mnt.mnt_flags & MNT_UMOUNT))
+				child = p;
+		}
+
 		if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
 			SET_MNT_MARK(child);
 		}
@@ -420,6 +434,9 @@  static void __propagate_umount(struct mount *mnt)
 
 	BUG_ON(parent == mnt);
 
+	if (IS_MNT_MARKED(mnt))
+		return;
+
 	for (m = propagation_next(parent, parent); m;
 			m = propagation_next(m, parent)) {
 
@@ -431,6 +448,8 @@  static void __propagate_umount(struct mount *mnt)
 		 */
 		if (!child || !IS_MNT_MARKED(child))
 			continue;
+		if (child->mnt.mnt_flags & MNT_UMOUNT)
+			continue;
 		CLEAR_MNT_MARK(child);
 		if (list_empty(&child->mnt_mounts)) {
 			list_del_init(&child->mnt_child);
@@ -454,7 +473,7 @@  int propagate_umount(struct list_head *list)
 	list_for_each_entry_reverse(mnt, list, mnt_list)
 		mark_umount_candidates(mnt);
 
-	list_for_each_entry(mnt, list, mnt_list)
+	list_for_each_entry_reverse(mnt, list, mnt_list)
 		__propagate_umount(mnt);
 	return 0;
 }