From patchwork Thu Aug 1 14:02:40 2019
X-Patchwork-Submitter: Ondrej Mosnacek
X-Patchwork-Id: 11070777
From: Ondrej Mosnacek
To: selinux@vger.kernel.org, Paul Moore
Cc: Al Viro, linux-fsdevel@vger.kernel.org
Subject: [PATCH v2 1/4] d_walk: optionally lock also parent inode
Date: Thu, 1 Aug 2019 16:02:40 +0200
Message-Id: <20190801140243.24080-2-omosnace@redhat.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190801140243.24080-1-omosnace@redhat.com>
References: <20190801140243.24080-1-omosnace@redhat.com>
X-Mailing-List: selinux@vger.kernel.org

This will be used in a later patch to provide a function to safely
perform d_genocide on live trees.
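
As a rough illustration of the intended use (a minimal sketch only;
d_genocide_locked() and relabel_one() are hypothetical names, not
functions added by this series), a later caller inside fs/dcache.c
could look roughly like this:

	/* Hypothetical example: walk a live tree with each directory's
	 * inode held while its children are visited, in addition to the
	 * usual d_lock taken by d_walk().
	 */
	static enum d_walk_ret relabel_one(void *data, struct dentry *dentry)
	{
		/* called with dentry->d_lock held, as with any d_walk() callback */
		return D_WALK_CONTINUE;
	}

	void d_genocide_locked(struct dentry *parent)
	{
		/* lock_inode == true: d_walk() takes inode_lock() on each
		 * directory it descends into */
		d_walk(parent, true, parent, relabel_one);
	}

All existing callers pass false and keep their current behaviour.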
Signed-off-by: Ondrej Mosnacek
---
 fs/dcache.c | 43 +++++++++++++++++++++++++++++++++----------
 1 file changed, 33 insertions(+), 10 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index e88cf0554e65..9ed4c0f99e57 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -1259,12 +1259,13 @@ enum d_walk_ret {
 /**
  * d_walk - walk the dentry tree
  * @parent:	start of walk
+ * @lock_inode	whether to lock also parent inode
  * @data:	data passed to @enter() and @finish()
  * @enter:	callback when first entering the dentry
  *
  * The @enter() callbacks are called with d_lock held.
  */
-static void d_walk(struct dentry *parent, void *data,
+static void d_walk(struct dentry *parent, bool lock_inode, void *data,
 		   enum d_walk_ret (*enter)(void *, struct dentry *))
 {
 	struct dentry *this_parent;
@@ -1276,6 +1277,8 @@ static void d_walk(struct dentry *parent, void *data,
 again:
 	read_seqbegin_or_lock(&rename_lock, &seq);
 	this_parent = parent;
+	if (lock_inode)
+		inode_lock(this_parent->d_inode);
 	spin_lock(&this_parent->d_lock);
 
 	ret = enter(data, this_parent);
@@ -1319,9 +1322,21 @@ resume:
 
 		if (!list_empty(&dentry->d_subdirs)) {
 			spin_unlock(&this_parent->d_lock);
-			spin_release(&dentry->d_lock.dep_map, 1, _RET_IP_);
+			if (lock_inode) {
+				spin_unlock(&dentry->d_lock);
+				inode_unlock(this_parent->d_inode);
+			} else {
+				spin_release(&dentry->d_lock.dep_map,
+					     1, _RET_IP_);
+			}
 			this_parent = dentry;
-			spin_acquire(&this_parent->d_lock.dep_map, 0, 1, _RET_IP_);
+			if (lock_inode) {
+				inode_lock(this_parent->d_inode);
+				spin_lock(&this_parent->d_lock);
+			} else {
+				spin_acquire(&this_parent->d_lock.dep_map,
+					     0, 1, _RET_IP_);
+			}
 			goto repeat;
 		}
 		spin_unlock(&dentry->d_lock);
@@ -1336,6 +1351,10 @@ ascend:
 		this_parent = child->d_parent;
 
 		spin_unlock(&child->d_lock);
+		if (lock_inode) {
+			inode_unlock(child->d_inode);
+			inode_lock(this_parent->d_inode);
+		}
 		spin_lock(&this_parent->d_lock);
 
 		/* might go back up the wrong parent if we have had a rename. */
@@ -1357,12 +1376,16 @@ ascend:
 
 out_unlock:
 	spin_unlock(&this_parent->d_lock);
+	if (lock_inode)
+		inode_unlock(this_parent->d_inode);
 	done_seqretry(&rename_lock, seq);
 	return;
 
 rename_retry:
-	spin_unlock(&this_parent->d_lock);
 	rcu_read_unlock();
+	spin_unlock(&this_parent->d_lock);
+	if (lock_inode)
+		inode_unlock(this_parent->d_inode);
 	BUG_ON(seq & 1);
 	if (!retry)
 		return;
@@ -1402,7 +1425,7 @@ int path_has_submounts(const struct path *parent)
 	struct check_mount data = { .mnt = parent->mnt, .mounted = 0 };
 
 	read_seqlock_excl(&mount_lock);
-	d_walk(parent->dentry, &data, path_check_mount);
+	d_walk(parent->dentry, false, &data, path_check_mount);
 	read_sequnlock_excl(&mount_lock);
 
 	return data.mounted;
@@ -1541,7 +1564,7 @@ void shrink_dcache_parent(struct dentry *parent)
 		struct select_data data = {.start = parent};
 
 		INIT_LIST_HEAD(&data.dispose);
-		d_walk(parent, &data, select_collect);
+		d_walk(parent, false, &data, select_collect);
 
 		if (!list_empty(&data.dispose)) {
 			shrink_dentry_list(&data.dispose);
@@ -1552,7 +1575,7 @@ void shrink_dcache_parent(struct dentry *parent)
 		if (!data.found)
 			break;
 		data.victim = NULL;
-		d_walk(parent, &data, select_collect2);
+		d_walk(parent, false, &data, select_collect2);
 		if (data.victim) {
 			struct dentry *parent;
 			spin_lock(&data.victim->d_lock);
@@ -1599,7 +1622,7 @@ static enum d_walk_ret umount_check(void *_data, struct dentry *dentry)
 static void do_one_tree(struct dentry *dentry)
 {
 	shrink_dcache_parent(dentry);
-	d_walk(dentry, dentry, umount_check);
+	d_walk(dentry, false, dentry, umount_check);
 	d_drop(dentry);
 	dput(dentry);
 }
@@ -1656,7 +1679,7 @@ void d_invalidate(struct dentry *dentry)
 	shrink_dcache_parent(dentry);
 	for (;;) {
 		struct dentry *victim = NULL;
-		d_walk(dentry, &victim, find_submount);
+		d_walk(dentry, false, &victim, find_submount);
 		if (!victim) {
 			if (had_submounts)
 				shrink_dcache_parent(dentry);
@@ -3106,7 +3129,7 @@ static enum d_walk_ret d_genocide_kill(void *data, struct dentry *dentry)
 
 void d_genocide(struct dentry *parent)
 {
-	d_walk(parent, parent, d_genocide_kill);
+	d_walk(parent, false, parent, d_genocide_kill);
 }
 
 EXPORT_SYMBOL(d_genocide);
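
Note on the lock ordering the new mode follows, as visible in the hunks
above (stated here as a reading of the code, not as part of the patch
description): with lock_inode set, d_walk() always takes a directory's
inode_lock() before its d_lock, and when descending into a child or
ascending back to the parent it releases the held spinlocks before
taking or dropping the sleeping inode lock, so no d_lock is ever held
across inode_lock() or inode_unlock().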