From patchwork Thu Dec 22 09:15:27 2016
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 9484587
X-Mailing-List: linux-fsdevel@vger.kernel.org
From: Jan Kara
Cc: Amir Goldstein, Lino Sanfilippo, Miklos Szeredi, Paul Moore, Jan Kara
Subject: [PATCH 11/22] fsnotify: Remove special handling of mark destruction on group shutdown
Date: Thu, 22 Dec 2016 10:15:27 +0100
Message-Id: <20161222091538.28702-12-jack@suse.cz>
In-Reply-To: <20161222091538.28702-1-jack@suse.cz>
References: <20161222091538.28702-1-jack@suse.cz>

Currently we queue all marks for destruction on group shutdown and then
destroy them from fsnotify_destroy_group() instead of from a worker
thread, which is the usual path. However, the worker may already be
processing some list of marks to destroy, so this does not guarantee
that all marks are really destroyed by the time the group is shut down.
This isn't a big problem, as each mark holds a group reference and thus
the group stays partially alive until all marks are really freed, but
there's no point in complicating our lives - just wait for the delayed
work to finish instead.
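
For context, the pattern in play is the standard workqueue one: marks
queued on a global destroy list are freed by a delayed work item
(reaper_work in mark.c), and shutdown now simply flushes that work item.
Below is a minimal self-contained sketch of the idea - illustrative
only: the obj_* names and the destroy list locking are made up, only the
workqueue and list calls are the real kernel API, and this is not the
fsnotify code itself.

#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct obj {
	struct list_head destroy_entry;
};

static LIST_HEAD(destroy_list);
static DEFINE_SPINLOCK(destroy_lock);

static void obj_destroy_workfn(struct work_struct *work);
static DECLARE_DELAYED_WORK(reaper_work, obj_destroy_workfn);

/* Worker: steal the current batch of queued objects and free them. */
static void obj_destroy_workfn(struct work_struct *work)
{
	struct list_head private_list;
	struct obj *obj, *next;

	spin_lock(&destroy_lock);
	/* Take the whole list; later additions requeue the work. */
	list_replace_init(&destroy_list, &private_list);
	spin_unlock(&destroy_lock);

	list_for_each_entry_safe(obj, next, &private_list, destroy_entry) {
		list_del_init(&obj->destroy_entry);
		kfree(obj);
	}
}

/* Producer: queue an object for destruction and kick the reaper. */
static void obj_queue_destruction(struct obj *obj)
{
	spin_lock(&destroy_lock);
	list_add(&obj->destroy_entry, &destroy_list);
	spin_unlock(&destroy_lock);
	queue_delayed_work(system_unbound_wq, &reaper_work,
			   1 /* short delay, illustrative */);
}

/* Shutdown: what this patch switches fsnotify_destroy_group() to do. */
static void obj_shutdown(void)
{
	/*
	 * flush_delayed_work() kicks the pending timer so the work runs
	 * now, then waits for it to finish - including a run that was
	 * already in flight when we got here, which is exactly the race
	 * that destroying the list by hand could not close.
	 */
	flush_delayed_work(&reaper_work);
}

Note that flush_delayed_work() rather than cancel_delayed_work_sync() is
the right call here: cancelling would skip a not-yet-run work item and
leave queued marks unfreed, while flushing forces any pending or
in-flight run to complete.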
Signed-off-by: Jan Kara
Reviewed-by: Amir Goldstein
---
 fs/notify/fsnotify.h |  6 ++----
 fs/notify/group.c    | 10 ++++++----
 fs/notify/mark.c     |  9 +++++----
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/fs/notify/fsnotify.h b/fs/notify/fsnotify.h
index bdc489af0e5b..670c2bac1342 100644
--- a/fs/notify/fsnotify.h
+++ b/fs/notify/fsnotify.h
@@ -36,10 +36,8 @@ static inline void fsnotify_clear_marks_by_mount(struct vfsmount *mnt)
 }
 /* prepare for freeing all marks associated with given group */
 extern void fsnotify_detach_group_marks(struct fsnotify_group *group);
-/*
- * wait for fsnotify_mark_srcu period to end and free all marks in destroy_list
- */
-extern void fsnotify_mark_destroy_list(void);
+/* Wait until all marks queued for destruction are destroyed */
+extern void fsnotify_wait_marks_destroyed(void);
 
 /*
  * update the dentry->d_flags of all of inode's children to indicate if inode cares
diff --git a/fs/notify/group.c b/fs/notify/group.c
index fbe3cbebec16..0fb4aadcc19f 100644
--- a/fs/notify/group.c
+++ b/fs/notify/group.c
@@ -66,14 +66,16 @@ void fsnotify_destroy_group(struct fsnotify_group *group)
 	 */
 	fsnotify_group_stop_queueing(group);
 
-	/* clear all inode marks for this group, attach them to destroy_list */
+	/* Clear all marks for this group and queue them for destruction */
 	fsnotify_detach_group_marks(group);
 
 	/*
-	 * Wait for fsnotify_mark_srcu period to end and free all marks in
-	 * destroy_list
+	 * Wait until all marks get really destroyed. We could actually destroy
+	 * them ourselves instead of waiting for worker to do it, however that
+	 * would be racy as worker can already be processing some marks before
+	 * we even entered fsnotify_destroy_group().
 	 */
-	fsnotify_mark_destroy_list();
+	fsnotify_wait_marks_destroyed();
 
 	/*
 	 * Since we have waited for fsnotify_mark_srcu in
diff --git a/fs/notify/mark.c b/fs/notify/mark.c
index 55550dad6617..60f5754ce5ed 100644
--- a/fs/notify/mark.c
+++ b/fs/notify/mark.c
@@ -650,7 +650,7 @@ void fsnotify_detach_group_marks(struct fsnotify_group *group)
 		fsnotify_get_mark(mark);
 		fsnotify_detach_mark(mark);
 		mutex_unlock(&group->mark_mutex);
-		__fsnotify_free_mark(mark);
+		fsnotify_free_mark(mark);
 		fsnotify_put_mark(mark);
 	}
 }
@@ -710,7 +710,7 @@ void fsnotify_init_mark(struct fsnotify_mark *mark,
  * Destroy all marks in destroy_list, waits for SRCU period to finish before
  * actually freeing marks.
  */
-void fsnotify_mark_destroy_list(void)
+static void fsnotify_mark_destroy_workfn(struct work_struct *work)
 {
 	struct fsnotify_mark *mark, *next;
 	struct list_head private_destroy_list;
@@ -728,7 +728,8 @@ void fsnotify_mark_destroy_list(void)
 	}
 }
 
-static void fsnotify_mark_destroy_workfn(struct work_struct *work)
+/* Wait for all marks queued for destruction to be actually destroyed */
+void fsnotify_wait_marks_destroyed(void)
 {
-	fsnotify_mark_destroy_list();
+	flush_delayed_work(&reaper_work);
 }