From patchwork Mon May 15 18:27:05 2017
X-Patchwork-Submitter: Andrey Vagin
X-Patchwork-Id: 9727757
Date: Mon, 15 May 2017 11:27:05 -0700
From: Andrei Vagin
To: "Eric W. Biederman"
CC: Al Viro, linux-fsdevel@vger.kernel.org, Ram Pai
Subject: Re: [REVIEW][PATCH] mnt: Tuck mounts under others instead of
 creating shadow/side mounts.
Message-ID: <20170515182704.GA15539@outlook.office365.com>
In-Reply-To: <87y3tzoklh.fsf@xmission.com>
References: <874m1hdkyv.fsf@xmission.com>
 <20170103014806.GA1555@ZenIV.linux.org.uk>
 <87ful07ryd.fsf@xmission.com>
 <20170103040052.GB1555@ZenIV.linux.org.uk>
 <87y3yr32ig.fsf@xmission.com>
 <87shoz32g8.fsf_-_@xmission.com>
 <87a8b6r0z5.fsf_-_@xmission.com>
 <20170514021504.GA19495@outlook.office365.com>
 <87inl4ozg2.fsf@xmission.com>
 <87y3tzoklh.fsf@xmission.com>
User-Agent: Mutt/1.8.0 (2017-02-23)
On Sun, May 14, 2017 at 04:26:18AM -0500, Eric W. Biederman wrote:
> ebiederm@xmission.com (Eric W. Biederman) writes:
>
> > Andrei Vagin writes:
> >
> >> Hi Eric,
> >>
> >> I found one issue with this patch. Take a look at the script below. The
> >> idea is simple: we call mount twice and then we call umount twice, but
> >> the second umount fails.
> >
> > Definitely odd.  I will take a look.
>
> After a little more looking I have to say the only thing I can see wrong
> in the behavior is that the first umount doesn't unmount everything.
>
> Changing the mount propagation tree while it is being traversed
> apparently prevents a complete traversal of the propagation tree.

Would it be enough to find the first topper that will not be unmounted?
I have attached a patch which does this, and I can't find a reason why
it would not work. The idea of the patch is that we find a topper, check
whether it is marked, and if it is marked, we look for the next one. All
marked toppers are unmounted and the first unmarked topper is attached
to the parent mount.

And I think we need to add tests for this in tools/testing/selftests/...;
a rough sketch follows below the quoted text.

>
> Last time I was looking at this I was playing with a multipass algorithm
> because of these peculiarities that happen with propagation trees.
>
> I suspect we are going to need to fix umount to behave correctly before
> we tune it for performance, so that we actually know what correct
> behavior is.
>
> Eric
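Something along these lines could become that selftest. This is only a
rough sketch based on the reproducer quoted below; the file name and its
location under tools/testing/selftests/ are just a suggestion:

	#!/bin/sh
	# tools/testing/selftests/mount/propagate-umount.sh (hypothetical)
	# Needs to run in its own user and mount namespace, e.g.
	# "unshare -Urm ./propagate-umount.sh".
	set -e

	mount -t tmpfs xxx /mnt
	mkdir -p /mnt/1 /mnt/2
	mount --bind /mnt/1 /mnt/1
	mount --make-shared /mnt/1
	mount --bind /mnt/1 /mnt/2
	mkdir -p /mnt/1/1

	# Stack two bind mounts on /mnt/1/1, then unmount both.  With the
	# bug, the first umount does not unmount everything and the second
	# umount fails with "not mounted".
	mount --bind /mnt/1/1 /mnt/1/1
	mount --bind /mnt/1/1 /mnt/1/1
	umount /mnt/1/1
	umount /mnt/1/1

	# Nothing propagated from the tmpfs may remain mounted under /mnt/2.
	if grep -q ' /mnt/2/' /proc/self/mountinfo; then
		echo "FAIL: a stale mount is left under /mnt/2"
		exit 1
	fi
	echo "PASS"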
> >>
> >> [root@fc24 ~]# cat m.sh
> >> #!/bin/sh
> >>
> >> set -x -e
> >> mount -t tmpfs xxx /mnt
> >> mkdir -p /mnt/1
> >> mkdir -p /mnt/2
> >> mount --bind /mnt/1 /mnt/1
> >> mount --make-shared /mnt/1
> >> mount --bind /mnt/1 /mnt/2
> >> mkdir -p /mnt/1/1
> >> for i in `seq 2`; do
> >> 	mount --bind /mnt/1/1 /mnt/1/1
> >> done
> >> for i in `seq 2`; do
> >> 	umount /mnt/1/1 || {
> >> 		cat /proc/self/mountinfo | grep xxx
> >> 		exit 1
> >> 	}
> >> done
> >>
> >> [root@fc24 ~]# unshare -Urm ./m.sh
> >> + mount -t tmpfs xxx /mnt
> >> + mkdir -p /mnt/1
> >> + mkdir -p /mnt/2
> >> + mount --bind /mnt/1 /mnt/1
> >> + mount --make-shared /mnt/1
> >> + mount --bind /mnt/1 /mnt/2
> >> + mkdir -p /mnt/1/1
> >> ++ seq 2
> >> + for i in '`seq 2`'
> >> + mount --bind /mnt/1/1 /mnt/1/1
> >> + for i in '`seq 2`'
> >> + mount --bind /mnt/1/1 /mnt/1/1
> >> ++ seq 2
> >> + for i in '`seq 2`'
> >> + umount /mnt/1/1
> >> + for i in '`seq 2`'
> >> + umount /mnt/1/1
> >> umount: /mnt/1/1: not mounted
> >> + cat /proc/self/mountinfo
> >> + grep xxx
> >> 147 116 0:42 / /mnt rw,relatime - tmpfs xxx rw
> >> 148 147 0:42 /1 /mnt/1 rw,relatime shared:65 - tmpfs xxx rw
> >> 149 147 0:42 /1 /mnt/2 rw,relatime shared:65 - tmpfs xxx rw
> >> 157 149 0:42 /1/1 /mnt/2/1 rw,relatime shared:65 - tmpfs xxx rw
> >> + exit 1
> >>
> >> And you can see that /mnt/2 contains a mount, but it should not.
> >>
> >> Thanks,
> >> Andrei
> >>
> >> On Thu, Jan 05, 2017 at 10:04:14AM +1300, Eric W. Biederman wrote:
> >>>
> >>> Ever since mount propagation was introduced, in cases where a mount is
> >>> propagated to a parent mount and mountpoint pair that is already in use,
> >>> the code has placed the new mount behind the old mount in the mount hash
> >>> table.
> >>>
> >>> This implementation detail is problematic as it allows creating
> >>> arbitrary length mount hash chains.
> >>>
> >>> Furthermore it invalidates the constraint maintained elsewhere in the
> >>> mount code that a parent mount and a mountpoint pair will have exactly
> >>> one mount upon them, making it hard to deal with and to talk about
> >>> this special case in the mount code.
> >>>
> >>> Modify mount propagation to notice when there is already a mount at
> >>> the parent mount and mountpoint where a new mount is propagating to
> >>> and place that preexisting mount on top of the new mount.
> >>>
> >>> Modify unmount propagation to notice when a mount that is being
> >>> unmounted has another mount on top of it (and no other children), and
> >>> to replace the unmounted mount with the mount on top of it.
> >>>
> >>> Move the MNT_UMOUNT test from __lookup_mnt_last into
> >>> __propagate_umount as that is the only call of __lookup_mnt_last where
> >>> MNT_UMOUNT may be set on any mount visible in the mount hash table.
> >>>
> >>> These modifications allow:
> >>>  - __lookup_mnt_last to be removed.
> >>>  - attach_shadows to be renamed __attach_mnt and its shadow
> >>>    handling to be removed.
> >>>  - commit_tree to be simplified
> >>>  - copy_tree to be simplified
> >>>
> >>> The result is an easier to understand tree of mounts that does not
> >>> allow creation of arbitrary length hash chains in the mount hash table.
> >>>
> >>> v2: Updated mnt_change_mountpoint to not call dput or mntput
> >>> and instead to decrement the counts directly.  It is guaranteed
> >>> that there will be other references when mnt_change_mountpoint is
> >>> called so this is safe.
> >>>
> >>> Cc: stable@vger.kernel.org
> >>> Fixes: b90fa9ae8f51 ("[PATCH] shared mount handling: bind and rbind")
> >>> Tested-by: Andrei Vagin
> >>> Signed-off-by: "Eric W. Biederman"
> >>> ---
> >>>
> >>> Since the last version some of you may have seen I have modified
> >>> my implementation of mnt_change_mountpoint so that it no longer calls
> >>> mntput or dput but instead relies on the knowledge that it can not
> >>> possibly have the last reference to the mnt and dentry of interest.
> >>> This avoids code checking tools from complaining bitterly.
> >>>
> >>> This is on top of my previous patch that sorts out locking of the
> >>> mountpoint hash table.  After giving ample time for review I intend
> >>> to push this and the previous bug fix to Linus.
> >>>
> >>>  fs/mount.h     |   1 -
> >>>  fs/namespace.c | 110 +++++++++++++++++++++++++++++++--------------------------
> >>>  fs/pnode.c     |  27 ++++++++++----
> >>>  fs/pnode.h     |   2 ++
> >>>  4 files changed, 82 insertions(+), 58 deletions(-)
> >>>
> >>> diff --git a/fs/mount.h b/fs/mount.h
> >>> index 2c856fc47ae3..2826543a131d 100644
> >>> --- a/fs/mount.h
> >>> +++ b/fs/mount.h
> >>> @@ -89,7 +89,6 @@ static inline int is_mounted(struct vfsmount *mnt)
> >>>  }
> >>>
> >>>  extern struct mount *__lookup_mnt(struct vfsmount *, struct dentry *);
> >>> -extern struct mount *__lookup_mnt_last(struct vfsmount *, struct dentry *);
> >>>
> >>>  extern int __legitimize_mnt(struct vfsmount *, unsigned);
> >>>  extern bool legitimize_mnt(struct vfsmount *, unsigned);
> >>> diff --git a/fs/namespace.c b/fs/namespace.c
> >>> index 487ba30bb5c6..91ccfb73f0e0 100644
> >>> --- a/fs/namespace.c
> >>> +++ b/fs/namespace.c
> >>> @@ -637,28 +637,6 @@ struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
> >>>  }
> >>>
> >>>  /*
> >>> - * find the last mount at @dentry on vfsmount @mnt.
> >>> - * mount_lock must be held.
> >>> - */
> >>> -struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
> >>> -{
> >>> -	struct mount *p, *res = NULL;
> >>> -	p = __lookup_mnt(mnt, dentry);
> >>> -	if (!p)
> >>> -		goto out;
> >>> -	if (!(p->mnt.mnt_flags & MNT_UMOUNT))
> >>> -		res = p;
> >>> -	hlist_for_each_entry_continue(p, mnt_hash) {
> >>> -		if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
> >>> -			break;
> >>> -		if (!(p->mnt.mnt_flags & MNT_UMOUNT))
> >>> -			res = p;
> >>> -	}
> >>> -out:
> >>> -	return res;
> >>> -}
> >>> -
> >>> -/*
> >>>   * lookup_mnt - Return the first child mount mounted at path
> >>>   *
> >>>   * "First" means first mounted chronologically.  If you create the
> >>> @@ -878,6 +856,13 @@ void mnt_set_mountpoint(struct mount *mnt,
> >>>  	hlist_add_head(&child_mnt->mnt_mp_list, &mp->m_list);
> >>>  }
> >>>
> >>> +static void __attach_mnt(struct mount *mnt, struct mount *parent)
> >>> +{
> >>> +	hlist_add_head_rcu(&mnt->mnt_hash,
> >>> +			   m_hash(&parent->mnt, mnt->mnt_mountpoint));
> >>> +	list_add_tail(&mnt->mnt_child, &parent->mnt_mounts);
> >>> +}
> >>> +
> >>>  /*
> >>>   * vfsmount lock must be held for write
> >>>   */
> >>> @@ -886,28 +871,45 @@ static void attach_mnt(struct mount *mnt,
> >>>  			struct mountpoint *mp)
> >>>  {
> >>>  	mnt_set_mountpoint(parent, mp, mnt);
> >>> -	hlist_add_head_rcu(&mnt->mnt_hash, m_hash(&parent->mnt, mp->m_dentry));
> >>> -	list_add_tail(&mnt->mnt_child, &parent->mnt_mounts);
> >>> +	__attach_mnt(mnt, parent);
> >>>  }
> >>>
> >>> -static void attach_shadowed(struct mount *mnt,
> >>> -			struct mount *parent,
> >>> -			struct mount *shadows)
> >>> +void mnt_change_mountpoint(struct mount *parent, struct mountpoint *mp, struct mount *mnt)
> >>>  {
> >>> -	if (shadows) {
> >>> -		hlist_add_behind_rcu(&mnt->mnt_hash, &shadows->mnt_hash);
> >>> -		list_add(&mnt->mnt_child, &shadows->mnt_child);
> >>> -	} else {
> >>> -		hlist_add_head_rcu(&mnt->mnt_hash,
> >>> -				m_hash(&parent->mnt, mnt->mnt_mountpoint));
> >>> -		list_add_tail(&mnt->mnt_child, &parent->mnt_mounts);
> >>> -	}
> >>> +	struct mountpoint *old_mp = mnt->mnt_mp;
> >>> +	struct dentry *old_mountpoint = mnt->mnt_mountpoint;
> >>> +	struct mount *old_parent = mnt->mnt_parent;
> >>> +
> >>> +	list_del_init(&mnt->mnt_child);
> >>> +	hlist_del_init(&mnt->mnt_mp_list);
> >>> +	hlist_del_init_rcu(&mnt->mnt_hash);
> >>> +
> >>> +	attach_mnt(mnt, parent, mp);
> >>> +
> >>> +	put_mountpoint(old_mp);
> >>> +
> >>> +	/*
> >>> +	 * Safely avoid even the suggestion this code might sleep or
> >>> +	 * lock the mount hash by taking avantage of the knowlege that
> >>> +	 * mnt_change_mounpoint will not release the final reference
> >>> +	 * to a mountpoint.
> >>> +	 *
> >>> +	 * During mounting, another mount will continue to use the old
> >>> +	 * mountpoint and during unmounting, the old mountpoint will
> >>> +	 * continue to exist until namespace_unlock which happens well
> >>> +	 * after mnt_change_mountpoint.
> >>> +	 */
> >>> +	spin_lock(&old_mountpoint->d_lock);
> >>> +	old_mountpoint->d_lockref.count--;
> >>> +	spin_unlock(&old_mountpoint->d_lock);
> >>> +
> >>> +	mnt_add_count(old_parent, -1);
> >>>  }
> >>>
> >>>  /*
> >>>   * vfsmount lock must be held for write
> >>>   */
> >>> -static void commit_tree(struct mount *mnt, struct mount *shadows)
> >>> +static void commit_tree(struct mount *mnt)
> >>>  {
> >>>  	struct mount *parent = mnt->mnt_parent;
> >>>  	struct mount *m;
> >>> @@ -925,7 +927,7 @@ static void commit_tree(struct mount *mnt, struct mount *shadows)
> >>>  	n->mounts += n->pending_mounts;
> >>>  	n->pending_mounts = 0;
> >>>
> >>> -	attach_shadowed(mnt, parent, shadows);
> >>> +	__attach_mnt(mnt, parent);
> >>>  	touch_mnt_namespace(n);
> >>>  }
> >>>
> >>> @@ -1764,7 +1766,6 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
> >>>  			continue;
> >>>
> >>>  		for (s = r; s; s = next_mnt(s, r)) {
> >>> -			struct mount *t = NULL;
> >>>  			if (!(flag & CL_COPY_UNBINDABLE) &&
> >>>  			    IS_MNT_UNBINDABLE(s)) {
> >>>  				s = skip_mnt_tree(s);
> >>> @@ -1786,14 +1787,7 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
> >>>  				goto out;
> >>>  			lock_mount_hash();
> >>>  			list_add_tail(&q->mnt_list, &res->mnt_list);
> >>> -			mnt_set_mountpoint(parent, p->mnt_mp, q);
> >>> -			if (!list_empty(&parent->mnt_mounts)) {
> >>> -				t = list_last_entry(&parent->mnt_mounts,
> >>> -					struct mount, mnt_child);
> >>> -				if (t->mnt_mp != p->mnt_mp)
> >>> -					t = NULL;
> >>> -			}
> >>> -			attach_shadowed(q, parent, t);
> >>> +			attach_mnt(q, parent, p->mnt_mp);
> >>>  			unlock_mount_hash();
> >>>  		}
> >>>  	}
> >>> @@ -1992,10 +1986,18 @@ static int attach_recursive_mnt(struct mount *source_mnt,
> >>>  {
> >>>  	HLIST_HEAD(tree_list);
> >>>  	struct mnt_namespace *ns = dest_mnt->mnt_ns;
> >>> +	struct mountpoint *smp;
> >>>  	struct mount *child, *p;
> >>>  	struct hlist_node *n;
> >>>  	int err;
> >>>
> >>> +	/* Preallocate a mountpoint in case the new mounts need
> >>> +	 * to be tucked under other mounts.
> >>> +	 */
> >>> +	smp = get_mountpoint(source_mnt->mnt.mnt_root);
> >>> +	if (IS_ERR(smp))
> >>> +		return PTR_ERR(smp);
> >>> +
> >>>  	/* Is there space to add these mounts to the mount namespace? */
> >>>  	if (!parent_path) {
> >>>  		err = count_mounts(ns, source_mnt);
> >>> @@ -2022,17 +2024,22 @@ static int attach_recursive_mnt(struct mount *source_mnt,
> >>>  		touch_mnt_namespace(source_mnt->mnt_ns);
> >>>  	} else {
> >>>  		mnt_set_mountpoint(dest_mnt, dest_mp, source_mnt);
> >>> -		commit_tree(source_mnt, NULL);
> >>> +		commit_tree(source_mnt);
> >>>  	}
> >>>
> >>>  	hlist_for_each_entry_safe(child, n, &tree_list, mnt_hash) {
> >>> -		struct mount *q;
> >>>  		hlist_del_init(&child->mnt_hash);
> >>> -		q = __lookup_mnt_last(&child->mnt_parent->mnt,
> >>> -				      child->mnt_mountpoint);
> >>> -		commit_tree(child, q);
> >>> +		if (child->mnt.mnt_root == smp->m_dentry) {
> >>> +			struct mount *q;
> >>> +			q = __lookup_mnt(&child->mnt_parent->mnt,
> >>> +					 child->mnt_mountpoint);
> >>> +			if (q)
> >>> +				mnt_change_mountpoint(child, smp, q);
> >>> +		}
> >>> +		commit_tree(child);
> >>>  	}
> >>>  	unlock_mount_hash();
> >>> +	put_mountpoint(smp);
> >>>
> >>>  	return 0;
> >>>
> >>> @@ -2046,6 +2053,7 @@ static int attach_recursive_mnt(struct mount *source_mnt,
> >>>  	cleanup_group_ids(source_mnt, NULL);
> >>>  out:
> >>>  	ns->pending_mounts = 0;
> >>> +	put_mountpoint(smp);
> >>>  	return err;
> >>>  }
> >>>
> >>> diff --git a/fs/pnode.c b/fs/pnode.c
> >>> index 06a793f4ae38..eb4331240fd1 100644
> >>> --- a/fs/pnode.c
> >>> +++ b/fs/pnode.c
> >>> @@ -327,6 +327,9 @@ int propagate_mnt(struct mount *dest_mnt, struct mountpoint *dest_mp,
> >>>   */
> >>>  static inline int do_refcount_check(struct mount *mnt, int count)
> >>>  {
> >>> +	struct mount *topper = __lookup_mnt(&mnt->mnt, mnt->mnt.mnt_root);
> >>> +	if (topper)
> >>> +		count++;
> >>>  	return mnt_get_count(mnt) > count;
> >>>  }
> >>>
> >>> @@ -359,7 +362,7 @@ int propagate_mount_busy(struct mount *mnt, int refcnt)
> >>>
> >>>  	for (m = propagation_next(parent, parent); m;
> >>>  			m = propagation_next(m, parent)) {
> >>> -		child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
> >>> +		child = __lookup_mnt(&m->mnt, mnt->mnt_mountpoint);
> >>>  		if (child && list_empty(&child->mnt_mounts) &&
> >>>  		    (ret = do_refcount_check(child, 1)))
> >>>  			break;
> >>> @@ -381,7 +384,7 @@ void propagate_mount_unlock(struct mount *mnt)
> >>>
> >>>  	for (m = propagation_next(parent, parent); m;
> >>>  			m = propagation_next(m, parent)) {
> >>> -		child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
> >>> +		child = __lookup_mnt(&m->mnt, mnt->mnt_mountpoint);
> >>>  		if (child)
> >>>  			child->mnt.mnt_flags &= ~MNT_LOCKED;
> >>>  	}
> >>> @@ -399,9 +402,11 @@ static void mark_umount_candidates(struct mount *mnt)
> >>>
> >>>  	for (m = propagation_next(parent, parent); m;
> >>>  			m = propagation_next(m, parent)) {
> >>> -		struct mount *child = __lookup_mnt_last(&m->mnt,
> >>> +		struct mount *child = __lookup_mnt(&m->mnt,
> >>>  						mnt->mnt_mountpoint);
> >>> -		if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
> >>> +		if (!child || (child->mnt.mnt_flags & MNT_UMOUNT))
> >>> +			continue;
> >>> +		if (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m)) {
> >>>  			SET_MNT_MARK(child);
> >>>  		}
> >>>  	}
> >>> @@ -420,8 +425,8 @@ static void __propagate_umount(struct mount *mnt)
> >>>
> >>>  	for (m = propagation_next(parent, parent); m;
> >>>  			m = propagation_next(m, parent)) {
> >>> -
> >>> -		struct mount *child = __lookup_mnt_last(&m->mnt,
> >>> +		struct mount *topper;
> >>> +		struct mount *child = __lookup_mnt(&m->mnt,
> >>>  						mnt->mnt_mountpoint);
> >>>  		/*
> >>>  		 * umount the child only if the child has no children
> >>> @@ -430,6 +435,16 @@ static void __propagate_umount(struct mount *mnt)
> >>>  		if (!child || !IS_MNT_MARKED(child))
> >>>  			continue;
> >>>  		CLEAR_MNT_MARK(child);
> >>> +
> >>> +		/* If there is exactly one mount covering all of child
> >>> +		 * replace child with that mount.
> >>> +		 */
> >>> +		topper = __lookup_mnt(&child->mnt, child->mnt.mnt_root);
> >>> +		if (topper &&
> >>> +		    (child->mnt_mounts.next == &topper->mnt_child) &&
> >>> +		    (topper->mnt_child.next == &child->mnt_mounts))
> >>> +			mnt_change_mountpoint(child->mnt_parent, child->mnt_mp, topper);
> >>> +
> >>>  		if (list_empty(&child->mnt_mounts)) {
> >>>  			list_del_init(&child->mnt_child);
> >>>  			child->mnt.mnt_flags |= MNT_UMOUNT;
> >>> diff --git a/fs/pnode.h b/fs/pnode.h
> >>> index 550f5a8b4fcf..dc87e65becd2 100644
> >>> --- a/fs/pnode.h
> >>> +++ b/fs/pnode.h
> >>> @@ -49,6 +49,8 @@ int get_dominating_id(struct mount *mnt, const struct path *root);
> >>>  unsigned int mnt_get_count(struct mount *mnt);
> >>>  void mnt_set_mountpoint(struct mount *, struct mountpoint *,
> >>>  			struct mount *);
> >>> +void mnt_change_mountpoint(struct mount *parent, struct mountpoint *mp,
> >>> +			   struct mount *mnt);
> >>>  struct mount *copy_tree(struct mount *, struct dentry *, int);
> >>>  bool is_path_reachable(struct mount *, struct dentry *,
> >>>  			const struct path *root);
> >>> --
> >>> 2.10.1
> >>>

diff --git a/fs/pnode.c b/fs/pnode.c
index 5bc7896..7aefd06 100644
--- a/fs/pnode.c
+++ b/fs/pnode.c
@@ -435,6 +435,17 @@ static void mark_umount_candidates(struct mount *mnt)
 	}
 }
 
+static void __marked_umount(struct mount *mnt, struct mount *child)
+{
+	CLEAR_MNT_MARK(child);
+
+	if (list_empty(&child->mnt_mounts)) {
+		list_del_init(&child->mnt_child);
+		child->mnt.mnt_flags |= MNT_UMOUNT;
+		list_move_tail(&child->mnt_list, &mnt->mnt_list);
+	}
+}
+
 /*
  * NOTE: unmounting 'mnt' naturally propagates to all other mounts its
  * parent propagates to.
@@ -448,8 +459,13 @@ static void __propagate_umount(struct mount *mnt)
 
 	for (m = propagation_next(parent, parent); m;
 			m = propagation_next(m, parent)) {
-		struct mount *topper;
-		struct mount *child = __lookup_mnt(&m->mnt,
+		struct mount *topper, *topper_to_umount;
+		struct mount *child;
+
+		if (m->mnt.mnt_flags & MNT_UMOUNT)
+			continue;
+
+		child = __lookup_mnt(&m->mnt,
 						mnt->mnt_mountpoint);
 		/*
 		 * umount the child only if the child has no children
@@ -457,21 +473,28 @@ static void __propagate_umount(struct mount *mnt)
 		 */
 		if (!child || !IS_MNT_MARKED(child))
 			continue;
-		CLEAR_MNT_MARK(child);
-
 		/* If there is exactly one mount covering all of child
 		 * replace child with that mount.
 		 */
-		topper = find_topper(child);
+		topper = topper_to_umount = child;
+		while (1) {
+			topper = find_topper(topper);
+			if (topper == NULL)
+				break;
+			if (!IS_MNT_MARKED(topper))
+				break;
+			topper_to_umount = topper;
+		}
 		if (topper)
-			mnt_change_mountpoint(child->mnt_parent, child->mnt_mp,
-					      topper);
+			mnt_change_mountpoint(child->mnt_parent, child->mnt_mp, topper);
 
-		if (list_empty(&child->mnt_mounts)) {
-			list_del_init(&child->mnt_child);
-			child->mnt.mnt_flags |= MNT_UMOUNT;
-			list_move_tail(&child->mnt_list, &mnt->mnt_list);
+		while (topper_to_umount != child) {
+			__marked_umount(mnt, topper_to_umount);
+			topper_to_umount = topper_to_umount->mnt_parent;
 		}
+
+		__marked_umount(mnt, child);
 	}
 }
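For anyone following along: the patch above calls find_topper(), which is
not in the quoted diff (Eric's v2 above still open-codes that check). In
the tree this applies to, it looks roughly like the sketch below; this is
reconstructed from memory purely for reference and is not part of either
patch:

	/* Return the single mount that completely covers mnt, i.e. the one
	 * child mounted on mnt's own root, or NULL if there is no such
	 * unique child.
	 */
	static struct mount *find_topper(struct mount *mnt)
	{
		struct mount *child;

		if (!list_is_singular(&mnt->mnt_mounts))
			return NULL;

		child = list_first_entry(&mnt->mnt_mounts,
					 struct mount, mnt_child);
		if (child->mnt_mountpoint != mnt->mnt.mnt_root)
			return NULL;

		return child;
	}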