From patchwork Mon Aug 21 11:55:10 2017
From: Jan Kara
Cc: Wang Shilong, Andreas Dilger, Jan Kara
Subject: [PATCH 20/27] quota: Remove dq_wait_unused from dquot
Date: Mon, 21 Aug 2017 13:55:10 +0200
Message-Id: <20170821115517.26542-21-jack@suse.cz>
In-Reply-To: <20170821115517.26542-1-jack@suse.cz>
References: <20170821115517.26542-1-jack@suse.cz>

Currently every dquot carries a wait_queue_head_t that is used only when we
are turning quotas off, to wait for the last users to drop their dquot
references. Since such a rare case is not performance sensitive in any way,
just use a global waitqueue for this and save space in struct dquot. Also
convert the logic to use wait_event() instead of open-coding it.

Signed-off-by: Jan Kara
---
 fs/quota/dquot.c      | 17 +++++++----------
 include/linux/quota.h |  1 -
 2 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
index 93adcdd6a260..361a2a6f13e1 100644
--- a/fs/quota/dquot.c
+++ b/fs/quota/dquot.c
@@ -126,6 +126,8 @@ __cacheline_aligned_in_smp DEFINE_SPINLOCK(dq_data_lock);
 EXPORT_SYMBOL(dq_data_lock);
 DEFINE_STATIC_SRCU(dquot_srcu);
 
+static DECLARE_WAIT_QUEUE_HEAD(dquot_ref_wq);
+
 void __quota_error(struct super_block *sb, const char *func,
 		   const char *fmt, ...)
 {
@@ -527,22 +529,18 @@ static void invalidate_dquots(struct super_block *sb, int type)
 			continue;
 		/* Wait for dquot users */
 		if (atomic_read(&dquot->dq_count)) {
-			DEFINE_WAIT(wait);
-
 			dqgrab(dquot);
-			prepare_to_wait(&dquot->dq_wait_unused, &wait,
-					TASK_UNINTERRUPTIBLE);
 			spin_unlock(&dq_list_lock);
-			/* Once dqput() wakes us up, we know it's time to free
+			/*
+			 * Once dqput() wakes us up, we know it's time to free
 			 * the dquot.
 			 * IMPORTANT: we rely on the fact that there is always
 			 * at most one process waiting for dquot to free.
 			 * Otherwise dq_count would be > 1 and we would never
 			 * wake up.
 			 */
-			if (atomic_read(&dquot->dq_count) > 1)
-				schedule();
-			finish_wait(&dquot->dq_wait_unused, &wait);
+			wait_event(dquot_ref_wq,
+				   atomic_read(&dquot->dq_count) == 1);
 			dqput(dquot);
 			/* At this moment dquot() need not exist (it could be
 			 * reclaimed by prune_dqcache(). Hence we must
@@ -754,7 +752,7 @@ void dqput(struct dquot *dquot)
 		/* Releasing dquot during quotaoff phase? */
 		if (!sb_has_quota_active(dquot->dq_sb, dquot->dq_id.type) &&
 		    atomic_read(&dquot->dq_count) == 1)
-			wake_up(&dquot->dq_wait_unused);
+			wake_up(&dquot_ref_wq);
 		spin_unlock(&dq_list_lock);
 		return;
 	}
@@ -809,7 +807,6 @@ static struct dquot *get_empty_dquot(struct super_block *sb, int type)
 	INIT_LIST_HEAD(&dquot->dq_inuse);
 	INIT_HLIST_NODE(&dquot->dq_hash);
 	INIT_LIST_HEAD(&dquot->dq_dirty);
-	init_waitqueue_head(&dquot->dq_wait_unused);
 	dquot->dq_sb = sb;
 	dquot->dq_id = make_kqid_invalid(type);
 	atomic_set(&dquot->dq_count, 1);
diff --git a/include/linux/quota.h b/include/linux/quota.h
index 3a6df7461642..ad6809f099ac 100644
--- a/include/linux/quota.h
+++ b/include/linux/quota.h
@@ -299,7 +299,6 @@ struct dquot {
 	struct list_head dq_dirty;	/* List of dirty dquots */
 	struct mutex dq_lock;		/* dquot IO lock */
 	atomic_t dq_count;		/* Use count */
-	wait_queue_head_t dq_wait_unused;	/* Wait queue for dquot to become unused */
 	struct super_block *dq_sb;	/* superblock this applies to */
 	struct kqid dq_id;		/* ID this applies to (uid, gid, projid) */
 	loff_t dq_off;			/* Offset of dquot on disk */
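
For readers less familiar with the waitqueue API: the conversion above is the
stock wait_event()/wake_up() pairing from <linux/wait.h>. Because wait_event()
re-checks its condition after every wake-up, sharing one global waitqueue only
costs an occasional spurious check rather than a missed wake-up. Below is a
minimal, illustrative sketch of that pattern in kernel-style C; the example_*
names are hypothetical and are not the actual quota code.

#include <linux/atomic.h>
#include <linux/wait.h>

/*
 * One global waitqueue shared by all objects, in the same spirit as
 * dquot_ref_wq in the patch (example_* names are made up).
 */
static DECLARE_WAIT_QUEUE_HEAD(example_ref_wq);

/* The waiter holds one reference of its own, so "unused" means count == 1. */
static void example_wait_for_last_ref(atomic_t *count)
{
	/*
	 * wait_event() re-evaluates the condition whenever the waitqueue is
	 * woken, so waking more waiters than necessary is harmless: they
	 * simply re-check the count and go back to sleep.
	 */
	wait_event(example_ref_wq, atomic_read(count) == 1);
}

/* Called when a user drops its reference. */
static void example_put_ref(atomic_t *count)
{
	/* Dropping to the waiter's last remaining reference wakes it up. */
	if (atomic_dec_return(count) == 1)
		wake_up(&example_ref_wq);
}

Since the condition is re-checked on every wake-up, a per-object waitqueue
would only avoid a few spurious wake-ups during the rare quotaoff path, which
is not worth carrying a wait_queue_head_t in every struct dquot.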