From patchwork Tue Jul 10 10:02:10 2018
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 10516767
From: Jan Kara
To: linux-audit@redhat.com
Cc: Paul Moore, Al Viro, Richard Guy Briggs, Amir Goldstein, Jan Kara
Subject: [PATCH 04/10] audit: Embed key into chunk
Date: Tue, 10 Jul 2018 12:02:10 +0200
Message-Id: <20180710100217.12866-5-jack@suse.cz>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20180710100217.12866-1-jack@suse.cz>
References: <20180710100217.12866-1-jack@suse.cz>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Currently the chunk hash key (which is in fact a pointer to the inode) is
derived as chunk->mark.conn->obj. It is tricky to make this dereference
reliable for hash table lookups done only under RCU, as the mark can get
detached from the connector and the connector gets freed independently of
the running lookup. Thus there is a possible use-after-free / NULL pointer
dereference issue:

CPU1                                    CPU2
untag_chunk()
  ...
                                        audit_tree_lookup()
                                          list_for_each_entry_rcu(p, list, hash) {
  list_del_rcu(&chunk->hash);
  fsnotify_destroy_mark(entry);
  fsnotify_put_mark(entry)
                                            chunk_to_key(p)
                                              if (!chunk->mark.connector)
    ...
    hlist_del_init_rcu(&mark->obj_list);
    if (hlist_empty(&conn->list)) {
      inode = fsnotify_detach_connector_from_object(conn);
    mark->connector = NULL;
    ...
    <frees connector from workqueue>
                                              ...
                                              chunk->mark.connector->obj

This race is probably impossible to hit in practice as the race window on
CPU1 is very narrow and CPU2 has a lot of code to execute. Still, it's
better to have this fixed. Since the inode the chunk is attached to is
constant during the chunk's lifetime, it is easy to cache the key in the
chunk itself and thus avoid these issues.
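For illustration, a minimal userspace sketch of the before/after key
derivation follows. This is not kernel code: the struct names mirror
audit_chunk, fsnotify_mark and its connector, but the layouts are
simplified stand-ins, and the concurrent detach is simulated by a plain
assignment instead of the real fsnotify teardown.

/*
 * Illustrative userspace sketch (not kernel code): why embedding the
 * lookup key avoids the chunk->mark.connector->obj dereference chain.
 * All structures here are simplified stand-ins for the kernel ones.
 */
#include <stdio.h>

struct fsnotify_mark_connector {
	void *obj;			/* object (inode) the mark hangs off */
};

struct fsnotify_mark {
	struct fsnotify_mark_connector *connector;	/* may be detached concurrently */
};

struct audit_chunk {
	unsigned long key;		/* new scheme: cached once, never changes */
	struct fsnotify_mark mark;
};

/* Old scheme: every lookup chases a pointer that can go away under RCU. */
static unsigned long chunk_to_key(struct audit_chunk *chunk)
{
	if (!chunk->mark.connector)	/* mark already detached -> no valid key */
		return 0;
	return (unsigned long)chunk->mark.connector->obj;
}

int main(void)
{
	struct fsnotify_mark_connector conn = { .obj = (void *)0xdeadbeefUL };
	struct audit_chunk chunk = {
		.key = (unsigned long)conn.obj,	/* cached at creation time */
		.mark = { .connector = &conn },
	};

	printf("old key: %#lx\n", chunk_to_key(&chunk));

	/* Simulate what untag_chunk()/fsnotify teardown can do concurrently. */
	chunk.mark.connector = NULL;

	printf("old key after detach: %#lx (lookup would mismatch)\n",
	       chunk_to_key(&chunk));
	printf("embedded key: %#lx (still valid for hash lookup)\n", chunk.key);
	return 0;
}

The point is that chunk.key stays usable for hash comparisons even after
the mark loses its connector, which is exactly the window the race above
exploits.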
Signed-off-by: Jan Kara
---
 kernel/audit_tree.c | 27 ++++++++-------------------
 1 file changed, 8 insertions(+), 19 deletions(-)

diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
index 7f9df8c66227..a210963bae32 100644
--- a/kernel/audit_tree.c
+++ b/kernel/audit_tree.c
@@ -24,6 +24,7 @@ struct audit_tree {
 
 struct audit_chunk {
 	struct list_head hash;
+	unsigned long key;
 	struct fsnotify_mark mark;
 	struct list_head trees;		/* with root here */
 	int dead;
@@ -172,21 +173,6 @@ static unsigned long inode_to_key(const struct inode *inode)
 	return (unsigned long)&inode->i_fsnotify_marks;
 }
 
-/*
- * Function to return search key in our hash from chunk. Key 0 is special and
- * should never be present in the hash.
- */
-static unsigned long chunk_to_key(struct audit_chunk *chunk)
-{
-	/*
-	 * We have a reference to the mark so it should be attached to a
-	 * connector.
-	 */
-	if (WARN_ON_ONCE(!chunk->mark.connector))
-		return 0;
-	return (unsigned long)chunk->mark.connector->obj;
-}
-
 static inline struct list_head *chunk_hash(unsigned long key)
 {
 	unsigned long n = key / L1_CACHE_BYTES;
@@ -196,12 +182,12 @@ static inline struct list_head *chunk_hash(unsigned long key)
 /* hash_lock & entry->group->mark_mutex is held by caller */
 static void insert_hash(struct audit_chunk *chunk)
 {
-	unsigned long key = chunk_to_key(chunk);
 	struct list_head *list;
 
 	if (!(chunk->mark.flags & FSNOTIFY_MARK_FLAG_ATTACHED))
 		return;
-	list = chunk_hash(key);
+	WARN_ON_ONCE(!chunk->key);
+	list = chunk_hash(chunk->key);
 	list_add_rcu(&chunk->hash, list);
 }
 
@@ -213,7 +199,7 @@ struct audit_chunk *audit_tree_lookup(const struct inode *inode)
 	struct audit_chunk *p;
 
 	list_for_each_entry_rcu(p, list, hash) {
-		if (chunk_to_key(p) == key) {
+		if (p->key == key) {
 			atomic_long_inc(&p->refs);
 			return p;
 		}
@@ -295,6 +281,7 @@ static void untag_chunk(struct node *p)
 
 	chunk->dead = 1;
 	spin_lock(&hash_lock);
+	new->key = chunk->key;
 	list_replace_init(&chunk->trees, &new->trees);
 	if (owner->root == chunk) {
 		list_del_init(&owner->same_root);
@@ -380,6 +367,7 @@ static int create_chunk(struct inode *inode, struct audit_tree *tree)
 		tree->root = chunk;
 		list_add(&tree->same_root, &chunk->trees);
 	}
+	chunk->key = inode_to_key(inode);
 	insert_hash(chunk);
 	spin_unlock(&hash_lock);
 	mutex_unlock(&audit_tree_group->mark_mutex);
@@ -456,6 +444,7 @@ static int tag_chunk(struct inode *inode, struct audit_tree *tree)
 		fsnotify_put_mark(old_entry);
 		return 0;
 	}
+	chunk->key = old->key;
 	list_replace_init(&old->trees, &chunk->trees);
 	for (n = 0, p = chunk->owners; n < old->count; n++, p++) {
 		struct audit_tree *s = old->owners[n].owner;
@@ -652,7 +641,7 @@ void audit_trim_trees(void)
 			/* this could be NULL if the watch is dying else where... */
 			node->index |= 1U<<31;
 			if (iterate_mounts(compare_root,
-					   (void *)chunk_to_key(chunk),
+					   (void *)(chunk->key),
 					   root_mnt))
 				node->index &= ~(1U<<31);
 		}
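As a usage-level picture of the resulting lookup path, here is a small
sketch. It is a userspace approximation, not the kernel implementation:
the bucket selection mirrors chunk_hash() (key / L1_CACHE_BYTES modulo the
table size), but the table layout, the singly linked buckets, and the
absence of hash_lock/RCU are simplifying assumptions.

/*
 * Minimal userspace sketch of the lookup path after this change.  Both
 * insert_hash() and lookup() use only chunk->key, which is written once
 * before the chunk is hashed and never changes afterwards.
 */
#include <stdio.h>

#define HASH_SIZE	128		/* stand-in for the kernel's hash table size */
#define CACHE_LINE	64		/* stand-in for L1_CACHE_BYTES */

struct chunk {
	struct chunk *next;		/* stand-in for the hash list linkage */
	unsigned long key;		/* inode-derived key, set before insertion */
};

static struct chunk *chunk_hash_table[HASH_SIZE];

static unsigned int chunk_hash(unsigned long key)
{
	unsigned long n = key / CACHE_LINE;

	return n % HASH_SIZE;
}

static void insert_hash(struct chunk *chunk)
{
	unsigned int bucket = chunk_hash(chunk->key);

	chunk->next = chunk_hash_table[bucket];
	chunk_hash_table[bucket] = chunk;
}

static struct chunk *lookup(unsigned long key)
{
	struct chunk *p;

	for (p = chunk_hash_table[chunk_hash(key)]; p; p = p->next)
		if (p->key == key)	/* no connector dereference needed */
			return p;
	return NULL;
}

int main(void)
{
	struct chunk c = { .key = 0x1000 };	/* key cached at creation time */

	insert_hash(&c);
	printf("found: %s\n", lookup(0x1000) ? "yes" : "no");
	return 0;
}

The same invariant is what the patch enforces in the kernel: every path
that creates or replaces a chunk (create_chunk(), tag_chunk(),
untag_chunk()) fills in ->key before insert_hash() runs, which is what the
WARN_ON_ONCE(!chunk->key) in insert_hash() checks.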