From patchwork Tue Nov 17 11:52:30 2015
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 7636061
From: Jeff Layton
To: bfields@fieldses.org, trond.myklebust@primarydata.com
Cc: linux-nfs@vger.kernel.org, Eric Paris, Alexander Viro, linux-fsdevel@vger.kernel.org
Subject: [PATCH v1 08/38] fsnotify: destroy marks with call_srcu instead of dedicated thread
Date: Tue, 17 Nov 2015 06:52:30 -0500
Message-Id: <1447761180-4250-9-git-send-email-jeff.layton@primarydata.com>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1447761180-4250-1-git-send-email-jeff.layton@primarydata.com>
References: <1447761180-4250-1-git-send-email-jeff.layton@primarydata.com>
X-Mailing-List: linux-nfs@vger.kernel.org

At the time that this code was originally written, call_srcu didn't
exist, so a dedicated thread was required to ensure that we waited for
the SRCU grace period to settle before finally freeing
the object. It does exist now, however, and we can use call_srcu to
handle this much more efficiently. That also allows us to use
srcu_barrier to ensure that all of the callbacks have run before
proceeding.

In order to conserve space, we union the rcu_head with the g_list.

This will be necessary for nfsd, which will allocate marks from a
dedicated slabcache. We have to be able to ensure that all of the
objects are destroyed before destroying the cache, and that's fairly
difficult to ensure with a dedicated thread doing the destruction.

Signed-off-by: Jeff Layton
Reviewed-by: Jan Kara
---
 fs/notify/mark.c                 | 66 +++++++++-------------------------
 include/linux/fsnotify_backend.h | 10 +++---
 2 files changed, 20 insertions(+), 56 deletions(-)

diff --git a/fs/notify/mark.c b/fs/notify/mark.c
index c2bd670d4704..00e7072d3c95 100644
--- a/fs/notify/mark.c
+++ b/fs/notify/mark.c
@@ -92,9 +92,6 @@
 #include "fsnotify.h"

 struct srcu_struct fsnotify_mark_srcu;
-static DEFINE_SPINLOCK(destroy_lock);
-static LIST_HEAD(destroy_list);
-static DECLARE_WAIT_QUEUE_HEAD(destroy_waitq);

 void fsnotify_get_mark(struct fsnotify_mark *mark)
 {
@@ -169,10 +166,19 @@ void fsnotify_detach_mark(struct fsnotify_mark *mark)
 	atomic_dec(&group->num_marks);
 }

+static void
+fsnotify_mark_free_rcu(struct rcu_head *rcu)
+{
+	struct fsnotify_mark *mark;
+
+	mark = container_of(rcu, struct fsnotify_mark, g_rcu);
+	fsnotify_put_mark(mark);
+}
+
 /*
- * Free fsnotify mark. The freeing is actually happening from a kthread which
- * first waits for srcu period end. Caller must have a reference to the mark
- * or be protected by fsnotify_mark_srcu.
+ * Free fsnotify mark. The freeing is actually happening from a call_srcu
+ * callback. Caller must have a reference to the mark or be protected by
+ * fsnotify_mark_srcu.
  */
 void fsnotify_free_mark(struct fsnotify_mark *mark)
 {
@@ -187,10 +193,7 @@ void fsnotify_free_mark(struct fsnotify_mark *mark)
 	mark->flags &= ~FSNOTIFY_MARK_FLAG_ALIVE;
 	spin_unlock(&mark->lock);

-	spin_lock(&destroy_lock);
-	list_add(&mark->g_list, &destroy_list);
-	spin_unlock(&destroy_lock);
-	wake_up(&destroy_waitq);
+	call_srcu(&fsnotify_mark_srcu, &mark->g_rcu, fsnotify_mark_free_rcu);

 	/*
 	 * Some groups like to know that marks are being freed. This is a
@@ -387,11 +390,7 @@ err:
 	spin_unlock(&mark->lock);

-	spin_lock(&destroy_lock);
-	list_add(&mark->g_list, &destroy_list);
-	spin_unlock(&destroy_lock);
-	wake_up(&destroy_waitq);
-
+	call_srcu(&fsnotify_mark_srcu, &mark->g_rcu, fsnotify_mark_free_rcu);
 	return ret;
 }

@@ -496,40 +495,3 @@ void fsnotify_init_mark(struct fsnotify_mark *mark,
 	mark->free_mark = free_mark;
 }
 EXPORT_SYMBOL_GPL(fsnotify_init_mark);
-
-static int fsnotify_mark_destroy(void *ignored)
-{
-	struct fsnotify_mark *mark, *next;
-	struct list_head private_destroy_list;
-
-	for (;;) {
-		spin_lock(&destroy_lock);
-		/* exchange the list head */
-		list_replace_init(&destroy_list, &private_destroy_list);
-		spin_unlock(&destroy_lock);
-
-		synchronize_srcu(&fsnotify_mark_srcu);
-
-		list_for_each_entry_safe(mark, next, &private_destroy_list, g_list) {
-			list_del_init(&mark->g_list);
-			fsnotify_put_mark(mark);
-		}
-
-		wait_event_interruptible(destroy_waitq, !list_empty(&destroy_list));
-	}
-
-	return 0;
-}
-
-static int __init fsnotify_mark_init(void)
-{
-	struct task_struct *thread;
-
-	thread = kthread_run(fsnotify_mark_destroy, NULL,
-			     "fsnotify_mark");
-	if (IS_ERR(thread))
-		panic("unable to start fsnotify mark destruction thread.");
-
-	return 0;
-}
-device_initcall(fsnotify_mark_init);
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 533c4408529a..1c582747b410 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -217,10 +217,12 @@ struct fsnotify_mark {
 	/* Group this mark is for. Set on mark creation, stable until last ref
 	 * is dropped */
 	struct fsnotify_group *group;
-	/* List of marks by group->i_fsnotify_marks. Also reused for queueing
-	 * mark into destroy_list when it's waiting for the end of SRCU period
-	 * before it can be freed. [group->mark_mutex] */
-	struct list_head g_list;
+	union {
+		/* List of marks by group->i_fsnotify_marks. [group->mark_mutex] */
+		struct list_head g_list;
+		/* rcu_head for call_srcu-based destructor */
+		struct rcu_head g_rcu;
+	};
 	/* Protects inode / mnt pointers, flags, masks */
 	spinlock_t lock;
 	/* List of marks for inode / vfsmount [obj_lock] */