From patchwork Tue Jun 8 23:02:24 2021
X-Patchwork-Submitter: Roman Gushchin <guro@fb.com>
X-Patchwork-Id: 12308397
From: Roman Gushchin <guro@fb.com>
To: Andrew Morton, Tejun Heo
CC: Alexander Viro, Jan Kara, Dennis Zhou, Dave Chinner, Roman Gushchin
Subject: [PATCH v9 7/8] writeback, cgroup: support switching multiple inodes at once
Date: Tue, 8 Jun 2021 16:02:24 -0700
Message-ID: <20210608230225.2078447-8-guro@fb.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210608230225.2078447-1-guro@fb.com>
References: <20210608230225.2078447-1-guro@fb.com>
MIME-Version: 1.0

Currently only a single inode can be switched to another writeback
structure at once. That means that to switch an inode, a separate
inode_switch_wbs_context structure must be allocated, and a separate
rcu callback and work must be scheduled.

That is fine for the existing ad-hoc switching, which does not happen
that often, but it is sub-optimal for the massive switching required
to release a writeback structure. To prepare for that, let's add
support for switching multiple inodes at once.

Instead of containing a single inode pointer, inode_switch_wbs_context
will contain a NULL-terminated array of inode pointers.
inode_do_switch_wbs() will be called for each inode. To optimize the
locking, bdi->wb_switch_rwsem, old_wb's and new_wb's list_locks will
be acquired and released only once for all inodes. wb_wakeup() will
also be called only once. Instead of calling wb_put(old_wb) after each
successful switch, wb_put_many() is introduced and used.

Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Tejun Heo
Reviewed-by: Jan Kara
Acked-by: Dennis Zhou
---
 fs/fs-writeback.c                | 106 +++++++++++++++++++------------
 include/linux/backing-dev-defs.h |  18 +++++-
 2 files changed, 80 insertions(+), 44 deletions(-)
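For readers new to the pattern, here is a minimal userspace sketch (an
editor's illustration, not part of the patch) of the NULL-terminated,
flexible-array inode list that inode_switch_wbs_context adopts below.
All names and types here are stand-ins, not kernel API; the kernel
allocates the same layout with kzalloc(sizeof(*isw) + 2 *
sizeof(struct inode *), GFP_ATOMIC) for one inode plus the terminating
NULL:

#include <stdio.h>
#include <stdlib.h>

struct inode {                  /* stand-in for the kernel's struct inode */
	int ino;
};

struct switch_ctx {             /* stand-in for inode_switch_wbs_context */
	void *new_wb;           /* placeholder for struct bdi_writeback * */
	struct inode *inodes[]; /* flexible array, NULL-terminated */
};

int main(void)
{
	struct inode a = { .ino = 1 }, b = { .ino = 2 };
	/* room for two inodes plus the terminating NULL, zeroed like kzalloc() */
	struct switch_ctx *ctx = calloc(1, sizeof(*ctx) + 3 * sizeof(struct inode *));
	struct inode **inodep;

	if (!ctx)
		return 1;
	ctx->inodes[0] = &a;
	ctx->inodes[1] = &b;
	/* ctx->inodes[2] is already NULL and terminates the walk below */

	for (inodep = ctx->inodes; *inodep; inodep++)
		printf("switching inode %d\n", (*inodep)->ino);

	free(ctx);
	return 0;
}

The terminating NULL is what lets the second (post-RCU) phase walk
exactly the set of inodes that went through the first phase.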
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 8240eb0803dc..2d7c5e1e32e7 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -335,10 +335,18 @@ static struct bdi_writeback *inode_to_wb_and_lock_list(struct inode *inode)
 }
 
 struct inode_switch_wbs_context {
-	struct inode		*inode;
-	struct bdi_writeback	*new_wb;
+	struct rcu_work		work;
 
-	struct rcu_work		work;
+	/*
+	 * Multiple inodes can be switched at once. The switching procedure
+	 * consists of two parts, separated by a RCU grace period. To make
+	 * sure that the second part is executed for each inode gone through
+	 * the first part, all inode pointers are placed into a NULL-terminated
+	 * array embedded into struct inode_switch_wbs_context. Otherwise
+	 * an inode could be left in a non-consistent state.
+	 */
+	struct bdi_writeback	*new_wb;
+	struct inode		*inodes[];
 };
 
 static void bdi_down_write_wb_switch_rwsem(struct backing_dev_info *bdi)
@@ -351,39 +359,15 @@ static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi)
 {
 	up_write(&bdi->wb_switch_rwsem);
 }
 
-static void inode_do_switch_wbs(struct inode *inode,
+static bool inode_do_switch_wbs(struct inode *inode,
+				struct bdi_writeback *old_wb,
 				struct bdi_writeback *new_wb)
 {
-	struct backing_dev_info *bdi = inode_to_bdi(inode);
 	struct address_space *mapping = inode->i_mapping;
-	struct bdi_writeback *old_wb = inode->i_wb;
 	XA_STATE(xas, &mapping->i_pages, 0);
 	struct page *page;
 	bool switched = false;
 
-	/*
-	 * If @inode switches cgwb membership while sync_inodes_sb() is
-	 * being issued, sync_inodes_sb() might miss it. Synchronize.
-	 */
-	down_read(&bdi->wb_switch_rwsem);
-
-	/*
-	 * By the time control reaches here, RCU grace period has passed
-	 * since I_WB_SWITCH assertion and all wb stat update transactions
-	 * between unlocked_inode_to_wb_begin/end() are guaranteed to be
-	 * synchronizing against the i_pages lock.
-	 *
-	 * Grabbing old_wb->list_lock, inode->i_lock and the i_pages lock
-	 * gives us exclusion against all wb related operations on @inode
-	 * including IO list manipulations and stat updates.
-	 */
-	if (old_wb < new_wb) {
-		spin_lock(&old_wb->list_lock);
-		spin_lock_nested(&new_wb->list_lock, SINGLE_DEPTH_NESTING);
-	} else {
-		spin_lock(&new_wb->list_lock);
-		spin_lock_nested(&old_wb->list_lock, SINGLE_DEPTH_NESTING);
-	}
 	spin_lock(&inode->i_lock);
 	xa_lock_irq(&mapping->i_pages);
@@ -458,25 +442,63 @@ static void inode_do_switch_wbs(struct inode *inode,
 	xa_unlock_irq(&mapping->i_pages);
 	spin_unlock(&inode->i_lock);
 
-	spin_unlock(&new_wb->list_lock);
-	spin_unlock(&old_wb->list_lock);
-
-	up_read(&bdi->wb_switch_rwsem);
-
-	if (switched) {
-		wb_wakeup(new_wb);
-		wb_put(old_wb);
-	}
+	return switched;
 }
 
 static void inode_switch_wbs_work_fn(struct work_struct *work)
 {
 	struct inode_switch_wbs_context *isw =
 		container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
+	struct backing_dev_info *bdi = inode_to_bdi(isw->inodes[0]);
+	struct bdi_writeback *old_wb = isw->inodes[0]->i_wb;
+	struct bdi_writeback *new_wb = isw->new_wb;
+	unsigned long nr_switched = 0;
+	struct inode **inodep;
+
+	/*
+	 * If @inode switches cgwb membership while sync_inodes_sb() is
+	 * being issued, sync_inodes_sb() might miss it. Synchronize.
+	 */
+	down_read(&bdi->wb_switch_rwsem);
+
+	/*
+	 * By the time control reaches here, RCU grace period has passed
+	 * since I_WB_SWITCH assertion and all wb stat update transactions
+	 * between unlocked_inode_to_wb_begin/end() are guaranteed to be
+	 * synchronizing against the i_pages lock.
+	 *
+	 * Grabbing old_wb->list_lock, inode->i_lock and the i_pages lock
+	 * gives us exclusion against all wb related operations on @inode
+	 * including IO list manipulations and stat updates.
+	 */
+	if (old_wb < new_wb) {
+		spin_lock(&old_wb->list_lock);
+		spin_lock_nested(&new_wb->list_lock, SINGLE_DEPTH_NESTING);
+	} else {
+		spin_lock(&new_wb->list_lock);
+		spin_lock_nested(&old_wb->list_lock, SINGLE_DEPTH_NESTING);
+	}
+
+	for (inodep = isw->inodes; *inodep; inodep++) {
+		WARN_ON_ONCE((*inodep)->i_wb != old_wb);
+		if (inode_do_switch_wbs(*inodep, old_wb, new_wb))
+			nr_switched++;
+	}
+
+	spin_unlock(&new_wb->list_lock);
+	spin_unlock(&old_wb->list_lock);
+
+	up_read(&bdi->wb_switch_rwsem);
+
+	if (nr_switched) {
+		wb_wakeup(new_wb);
+		wb_put_many(old_wb, nr_switched);
+	}
 
-	inode_do_switch_wbs(isw->inode, isw->new_wb);
-	wb_put(isw->new_wb);
-	iput(isw->inode);
+	for (inodep = isw->inodes; *inodep; inodep++)
+		iput(*inodep);
+
+	wb_put(new_wb);
 	kfree(isw);
 	atomic_dec(&isw_nr_in_flight);
 }
@@ -503,7 +525,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
 	if (atomic_read(&isw_nr_in_flight) > WB_FRN_MAX_IN_FLIGHT)
 		return;
 
-	isw = kzalloc(sizeof(*isw), GFP_ATOMIC);
+	isw = kzalloc(sizeof(*isw) + 2 * sizeof(struct inode *), GFP_ATOMIC);
 	if (!isw)
 		return;
@@ -530,7 +552,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
 	__iget(inode);
 	spin_unlock(&inode->i_lock);
 
-	isw->inode = inode;
+	isw->inodes[0] = inode;
 
 	/*
 	 * In addition to synchronizing among switchers, I_WB_SWITCH tells
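The include/linux/backing-dev-defs.h hunks below add wb_put_many(). As
a rough userspace sketch of the batched-put semantics (an editor's
illustration: a plain C11 atomic stands in for percpu_ref, and all
*_demo names are hypothetical):

#include <stdatomic.h>
#include <stdio.h>

struct wb {
	atomic_ulong refcnt;    /* plain atomic standing in for percpu_ref */
};

/* drop nr references with a single atomic operation,
 * like percpu_ref_put_many() */
static void wb_put_many_demo(struct wb *wb, unsigned long nr)
{
	atomic_fetch_sub(&wb->refcnt, nr);
}

/* wb_put() becomes the nr == 1 special case */
static void wb_put_demo(struct wb *wb)
{
	wb_put_many_demo(wb, 1);
}

int main(void)
{
	struct wb old_wb;
	unsigned long nr_switched = 4;

	atomic_init(&old_wb.refcnt, 10);
	wb_put_many_demo(&old_wb, nr_switched); /* one put for the whole batch */
	wb_put_demo(&old_wb);
	printf("refcnt is now %lu\n", atomic_load(&old_wb.refcnt));
	return 0;
}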
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index e5dc238ebe4f..63f52ad2ce7a 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -240,8 +240,9 @@ static inline void wb_get(struct bdi_writeback *wb)
 /**
  * wb_put - decrement a wb's refcount
  * @wb: bdi_writeback to put
+ * @nr: number of references to put
  */
-static inline void wb_put(struct bdi_writeback *wb)
+static inline void wb_put_many(struct bdi_writeback *wb, unsigned long nr)
 {
 	if (WARN_ON_ONCE(!wb->bdi)) {
 		/*
@@ -252,7 +253,16 @@ static inline void wb_put(struct bdi_writeback *wb)
 	}
 
 	if (wb != &wb->bdi->wb)
-		percpu_ref_put(&wb->refcnt);
+		percpu_ref_put_many(&wb->refcnt, nr);
+}
+
+/**
+ * wb_put - decrement a wb's refcount
+ * @wb: bdi_writeback to put
+ */
+static inline void wb_put(struct bdi_writeback *wb)
+{
+	wb_put_many(wb, 1);
 }
 
 /**
@@ -281,6 +291,10 @@ static inline void wb_put(struct bdi_writeback *wb)
 {
 }
 
+static inline void wb_put_many(struct bdi_writeback *wb, unsigned long nr)
+{
+}
+
 static inline bool wb_dying(struct bdi_writeback *wb)
 {
 	return false;
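A closing note on the locking scheme that inode_switch_wbs_work_fn()
now applies once per batch: taking old_wb's and new_wb's list_locks in
a consistent order, by comparing the pointers, is the classic way to
avoid an ABBA deadlock when two locks must be held together and either
one may come "first". A minimal userspace sketch (an editor's
illustration: pthread mutexes stand in for spinlocks, and
lock_both()/unlock_both() are hypothetical helpers; the kernel freely
orders unrelated pointers this way, while strict ISO C leaves such
comparisons undefined):

#include <pthread.h>
#include <stdio.h>

struct wb {
	pthread_mutex_t list_lock;      /* stand-in for the kernel spinlock */
};

/* take both locks in a globally consistent (address) order */
static void lock_both(struct wb *old_wb, struct wb *new_wb)
{
	if (old_wb < new_wb) {
		pthread_mutex_lock(&old_wb->list_lock);
		pthread_mutex_lock(&new_wb->list_lock);
	} else {
		pthread_mutex_lock(&new_wb->list_lock);
		pthread_mutex_lock(&old_wb->list_lock);
	}
}

static void unlock_both(struct wb *old_wb, struct wb *new_wb)
{
	/* unlock order does not matter for deadlock avoidance */
	pthread_mutex_unlock(&new_wb->list_lock);
	pthread_mutex_unlock(&old_wb->list_lock);
}

int main(void)
{
	struct wb a = { PTHREAD_MUTEX_INITIALIZER };
	struct wb b = { PTHREAD_MUTEX_INITIALIZER };

	/* both argument orders are safe: the helper normalizes by address */
	lock_both(&a, &b);
	/* ... switch the whole inode batch while both locks are held ... */
	unlock_both(&a, &b);

	lock_both(&b, &a);
	unlock_both(&b, &a);

	puts("no deadlock");
	return 0;
}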