From patchwork Mon Mar 18 19:48:08 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858471
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 01/14] blk-cgroup: Convert to XArray
Date: Mon, 18 Mar 2019 12:48:08 -0700
Message-Id: <20190318194821.3470-2-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

At the point of allocation, we're under not only the xarray lock, but also under the queue lock. So we can't drop the lock and retry the allocation with GFP_KERNEL. Use xa_insert() of a NULL pointer to ensure the subsequent store will not need to allocate memory. Now the store cannot fail, so we can remove the error checks.
Signed-off-by: Matthew Wilcox --- block/bfq-cgroup.c | 4 +-- block/blk-cgroup.c | 69 ++++++++++++++++---------------------- include/linux/blk-cgroup.h | 5 ++- 3 files changed, 33 insertions(+), 45 deletions(-) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index c6113af31960..9d25e490f9fa 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -863,7 +863,7 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css, return ret; ret = 0; - spin_lock_irq(&blkcg->lock); + xa_lock_irq(&blkcg->blkg_array); bfqgd->weight = (unsigned short)val; hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) { struct bfq_group *bfqg = blkg_to_bfqg(blkg); @@ -897,7 +897,7 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css, bfqg->entity.prio_changed = 1; } } - spin_unlock_irq(&blkcg->lock); + xa_unlock_irq(&blkcg->blkg_array); return ret; } diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 77f37ef8ef06..90deb5445332 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -169,12 +169,12 @@ struct blkcg_gq *blkg_lookup_slowpath(struct blkcg *blkcg, struct blkcg_gq *blkg; /* - * Hint didn't match. Look up from the radix tree. Note that the + * Hint didn't match. Fetch from the xarray. Note that the * hint can only be updated under queue_lock as otherwise @blkg - * could have already been removed from blkg_tree. The caller is + * could have already been removed from blkg_array. The caller is * responsible for grabbing queue_lock if @update_hint. 
*/ - blkg = radix_tree_lookup(&blkcg->blkg_tree, q->id); + blkg = xa_load(&blkcg->blkg_array, q->id); if (blkg && blkg->q == q) { if (update_hint) { lockdep_assert_held(&q->queue_lock); @@ -256,29 +256,21 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, pol->pd_init_fn(blkg->pd[i]); } - /* insert */ - spin_lock(&blkcg->lock); - ret = radix_tree_insert(&blkcg->blkg_tree, q->id, blkg); - if (likely(!ret)) { - hlist_add_head_rcu(&blkg->blkcg_node, &blkcg->blkg_list); - list_add(&blkg->q_node, &q->blkg_list); + xa_lock(&blkcg->blkg_array); + __xa_store(&blkcg->blkg_array, q->id, blkg, 0); + hlist_add_head_rcu(&blkg->blkcg_node, &blkcg->blkg_list); + list_add(&blkg->q_node, &q->blkg_list); - for (i = 0; i < BLKCG_MAX_POLS; i++) { - struct blkcg_policy *pol = blkcg_policy[i]; + for (i = 0; i < BLKCG_MAX_POLS; i++) { + struct blkcg_policy *pol = blkcg_policy[i]; - if (blkg->pd[i] && pol->pd_online_fn) - pol->pd_online_fn(blkg->pd[i]); - } + if (blkg->pd[i] && pol->pd_online_fn) + pol->pd_online_fn(blkg->pd[i]); } blkg->online = true; - spin_unlock(&blkcg->lock); - - if (!ret) - return blkg; + xa_unlock(&blkcg->blkg_array); - /* @blkg failed fully initialized, use the usual release path */ - blkg_put(blkg); - return ERR_PTR(ret); + return blkg; err_cancel_ref: percpu_ref_exit(&blkg->refcnt); @@ -376,7 +368,7 @@ static void blkg_destroy(struct blkcg_gq *blkg) int i; lockdep_assert_held(&blkg->q->queue_lock); - lockdep_assert_held(&blkcg->lock); + lockdep_assert_held(&blkcg->blkg_array.xa_lock); /* Something wrong if we are trying to remove same group twice */ WARN_ON_ONCE(list_empty(&blkg->q_node)); @@ -396,7 +388,7 @@ static void blkg_destroy(struct blkcg_gq *blkg) blkg->online = false; - radix_tree_delete(&blkcg->blkg_tree, blkg->q->id); + __xa_erase(&blkcg->blkg_array, blkg->q->id); list_del_init(&blkg->q_node); hlist_del_init_rcu(&blkg->blkcg_node); @@ -429,9 +421,9 @@ static void blkg_destroy_all(struct request_queue *q) list_for_each_entry_safe(blkg, n, 
&q->blkg_list, q_node) { struct blkcg *blkcg = blkg->blkcg; - spin_lock(&blkcg->lock); + xa_lock(&blkcg->blkg_array); blkg_destroy(blkg); - spin_unlock(&blkcg->lock); + xa_unlock(&blkcg->blkg_array); } q->root_blkg = NULL; @@ -446,7 +438,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css, int i; mutex_lock(&blkcg_pol_mutex); - spin_lock_irq(&blkcg->lock); + xa_lock_irq(&blkcg->blkg_array); /* * Note that stat reset is racy - it doesn't synchronize against @@ -465,7 +457,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css, } } - spin_unlock_irq(&blkcg->lock); + xa_unlock_irq(&blkcg->blkg_array); mutex_unlock(&blkcg_pol_mutex); return 0; } @@ -1084,7 +1076,7 @@ static void blkcg_css_offline(struct cgroup_subsys_state *css) */ void blkcg_destroy_blkgs(struct blkcg *blkcg) { - spin_lock_irq(&blkcg->lock); + xa_lock_irq(&blkcg->blkg_array); while (!hlist_empty(&blkcg->blkg_list)) { struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first, @@ -1095,13 +1087,13 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg) blkg_destroy(blkg); spin_unlock(&q->queue_lock); } else { - spin_unlock_irq(&blkcg->lock); + xa_unlock_irq(&blkcg->blkg_array); cpu_relax(); - spin_lock_irq(&blkcg->lock); + xa_lock_irq(&blkcg->blkg_array); } } - spin_unlock_irq(&blkcg->lock); + xa_unlock_irq(&blkcg->blkg_array); } static void blkcg_css_free(struct cgroup_subsys_state *css) @@ -1166,8 +1158,7 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css) pol->cpd_init_fn(cpd); } - spin_lock_init(&blkcg->lock); - INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT | __GFP_NOWARN); + xa_init_flags(&blkcg->blkg_array, XA_FLAGS_LOCK_IRQ); INIT_HLIST_HEAD(&blkcg->blkg_list); #ifdef CONFIG_CGROUP_WRITEBACK INIT_LIST_HEAD(&blkcg->cgwb_list); @@ -1203,14 +1194,16 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css) int blkcg_init_queue(struct request_queue *q) { struct blkcg_gq *new_blkg, *blkg; - bool preloaded; int ret; new_blkg = blkg_alloc(&blkcg_root, q, GFP_KERNEL); if 
(!new_blkg) return -ENOMEM; - preloaded = !radix_tree_preload(GFP_KERNEL); + ret = xa_insert_irq(&blkcg_root.blkg_array, q->id, NULL, GFP_KERNEL); + if (ret == -ENOMEM) + return -ENOMEM; + BUG_ON(ret < 0); /* Make sure the root blkg exists. */ rcu_read_lock(); @@ -1222,9 +1215,6 @@ int blkcg_init_queue(struct request_queue *q) spin_unlock_irq(&q->queue_lock); rcu_read_unlock(); - if (preloaded) - radix_tree_preload_end(); - ret = blk_iolatency_init(q); if (ret) goto err_destroy_all; @@ -1238,10 +1228,9 @@ blkg_destroy_all(q); return ret; err_unlock: + xa_erase(&blkcg_root.blkg_array, q->id); spin_unlock_irq(&q->queue_lock); rcu_read_unlock(); - if (preloaded) - radix_tree_preload_end(); return PTR_ERR(blkg); } diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h index 76c61318fda5..51530ac5451f 100644 --- a/include/linux/blk-cgroup.h +++ b/include/linux/blk-cgroup.h @@ -17,7 +17,7 @@ #include #include #include -#include +#include #include #include #include @@ -46,9 +46,8 @@ struct blkcg_gq; struct blkcg { struct cgroup_subsys_state css; - spinlock_t lock; - struct radix_tree_root blkg_tree; + struct xarray blkg_array; struct blkcg_gq __rcu *blkg_hint; struct hlist_head blkg_list;
From patchwork Mon Mar 18 19:48:09 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858501
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 02/14] blk-cgroup: Remove blkg_list hlist
Date: Mon, 18 Mar 2019 12:48:09 -0700
Message-Id: <20190318194821.3470-3-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

We can iterate over all of a blkcg's blkgs using the XArray iterator instead of maintaining a separate hlist. This removes a nasty locking inversion in blkcg_destroy_blkgs().

Signed-off-by: Matthew Wilcox --- block/bfq-cgroup.c | 3 ++- block/blk-cgroup.c | 38 ++++++++++++++------------------ include/linux/blk-cgroup.h | 2 -- 3 files changed, 16 insertions(+), 27 deletions(-) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 9d25e490f9fa..13e5c7edde7a 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -857,6 +857,7 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css, struct blkcg *blkcg = css_to_blkcg(css); struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg); struct blkcg_gq *blkg; + unsigned long index; int ret = -ERANGE; if (val < BFQ_MIN_WEIGHT || val > BFQ_MAX_WEIGHT) @@ -865,7 +866,7 @@ static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css, ret = 0; xa_lock_irq(&blkcg->blkg_array); bfqgd->weight = (unsigned short)val; - hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) { + xa_for_each(&blkcg->blkg_array, index, blkg) { struct bfq_group *bfqg = blkg_to_bfqg(blkg); if (!bfqg) diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 90deb5445332..bd6eea0587fb 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -258,7 +258,6 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, xa_lock(&blkcg->blkg_array); __xa_store(&blkcg->blkg_array, q->id, blkg, 0); - hlist_add_head_rcu(&blkg->blkcg_node, &blkcg->blkg_list); list_add(&blkg->q_node, &q->blkg_list); for (i = 0; i < BLKCG_MAX_POLS; i++) { @@ -372,7 +371,6 @@ static void blkg_destroy(struct blkcg_gq *blkg) /* Something wrong if we are trying to remove same group twice */
WARN_ON_ONCE(list_empty(&blkg->q_node)); - WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node)); for (i = 0; i < BLKCG_MAX_POLS; i++) { struct blkcg_policy *pol = blkcg_policy[i]; @@ -390,7 +388,6 @@ static void blkg_destroy(struct blkcg_gq *blkg) __xa_erase(&blkcg->blkg_array, blkg->q->id); list_del_init(&blkg->q_node); - hlist_del_init_rcu(&blkg->blkcg_node); /* * Both setting lookup hint to and clearing it from @blkg are done @@ -435,6 +432,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css, { struct blkcg *blkcg = css_to_blkcg(css); struct blkcg_gq *blkg; + unsigned long index; int i; mutex_lock(&blkcg_pol_mutex); @@ -445,7 +443,7 @@ static int blkcg_reset_stats(struct cgroup_subsys_state *css, * stat updates. This is a debug feature which shouldn't exist * anyway. If you get hit by a race, retry. */ - hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) { + xa_for_each(&blkcg->blkg_array, index, blkg) { blkg_rwstat_reset(&blkg->stat_bytes); blkg_rwstat_reset(&blkg->stat_ios); @@ -495,16 +493,15 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg, bool show_total) { struct blkcg_gq *blkg; + unsigned long index; u64 total = 0; - rcu_read_lock(); - hlist_for_each_entry_rcu(blkg, &blkcg->blkg_list, blkcg_node) { + xa_for_each(&blkcg->blkg_array, index, blkg) { spin_lock_irq(&blkg->q->queue_lock); if (blkcg_policy_enabled(blkg->q, pol)) total += prfill(sf, blkg->pd[pol->plid], data); spin_unlock_irq(&blkg->q->queue_lock); } - rcu_read_unlock(); if (show_total) seq_printf(sf, "Total %llu\n", (unsigned long long)total); @@ -924,10 +921,10 @@ static int blkcg_print_stat(struct seq_file *sf, void *v) { struct blkcg *blkcg = css_to_blkcg(seq_css(sf)); struct blkcg_gq *blkg; + unsigned long index; rcu_read_lock(); - - hlist_for_each_entry_rcu(blkg, &blkcg->blkg_list, blkcg_node) { + xa_for_each(&blkcg->blkg_array, index, blkg) { const char *dname; char *buf; struct blkg_rwstat rwstat; @@ -1076,24 +1073,18 @@ static void 
blkcg_css_offline(struct cgroup_subsys_state *css) */ void blkcg_destroy_blkgs(struct blkcg *blkcg) { - xa_lock_irq(&blkcg->blkg_array); + struct blkcg_gq *blkg; + unsigned long index; - while (!hlist_empty(&blkcg->blkg_list)) { - struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first, - struct blkcg_gq, blkcg_node); + xa_for_each(&blkcg->blkg_array, index, blkg) { struct request_queue *q = blkg->q; - if (spin_trylock(&q->queue_lock)) { - blkg_destroy(blkg); - spin_unlock(&q->queue_lock); - } else { - xa_unlock_irq(&blkcg->blkg_array); - cpu_relax(); - xa_lock_irq(&blkcg->blkg_array); - } + spin_lock_irq(&q->queue_lock); + xa_lock(&blkcg->blkg_array); + blkg_destroy(blkg); + xa_unlock(&blkcg->blkg_array); + spin_unlock_irq(&q->queue_lock); } - - xa_unlock_irq(&blkcg->blkg_array); } static void blkcg_css_free(struct cgroup_subsys_state *css) @@ -1159,7 +1150,6 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css) } xa_init_flags(&blkcg->blkg_array, XA_FLAGS_LOCK_IRQ); - INIT_HLIST_HEAD(&blkcg->blkg_list); #ifdef CONFIG_CGROUP_WRITEBACK INIT_LIST_HEAD(&blkcg->cgwb_list); refcount_set(&blkcg->cgwb_refcnt, 1); diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h index 51530ac5451f..0b2929195a0f 100644 --- a/include/linux/blk-cgroup.h +++ b/include/linux/blk-cgroup.h @@ -49,7 +49,6 @@ struct blkcg { struct xarray blkg_array; struct blkcg_gq __rcu *blkg_hint; - struct hlist_head blkg_list; struct blkcg_policy_data *cpd[BLKCG_MAX_POLS]; @@ -110,7 +109,6 @@ struct blkcg_gq { /* Pointer to the associated request_queue */ struct request_queue *q; struct list_head q_node; - struct hlist_node blkcg_node; struct blkcg *blkcg; /*
From patchwork Mon Mar 18 19:48:10 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858473
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 03/14] blk-cgroup: Reduce scope of blkg_array lock
Date: Mon, 18 Mar 2019 12:48:10 -0700
Message-Id: <20190318194821.3470-4-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

We can now take and release the blkg_array lock within blkg_destroy() instead of forcing the caller to hold it across the call.

Signed-off-by: Matthew Wilcox
---
 block/blk-cgroup.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index bd6eea0587fb..6962e2fc612d 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -367,7 +367,6 @@ static void blkg_destroy(struct blkcg_gq *blkg)
 	int i;
 
 	lockdep_assert_held(&blkg->q->queue_lock);
-	lockdep_assert_held(&blkcg->blkg_array.xa_lock);
 
 	/* Something wrong if we are trying to remove same group twice */
 	WARN_ON_ONCE(list_empty(&blkg->q_node));
@@ -386,7 +385,7 @@ static void blkg_destroy(struct blkcg_gq *blkg)
 
 	blkg->online = false;
 
-	__xa_erase(&blkcg->blkg_array, blkg->q->id);
+	xa_erase(&blkcg->blkg_array, blkg->q->id);
 	list_del_init(&blkg->q_node);
 
 	/*
@@ -416,11 +415,7 @@ static void blkg_destroy_all(struct request_queue *q)
 	spin_lock_irq(&q->queue_lock);
 
 	list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
-		struct blkcg *blkcg = blkg->blkcg;
-
-		xa_lock(&blkcg->blkg_array);
 		blkg_destroy(blkg);
-		xa_unlock(&blkcg->blkg_array);
 	}
 
 	q->root_blkg = NULL;
@@ -1080,9 +1075,7 @@ void blkcg_destroy_blkgs(struct blkcg *blkcg)
 		struct request_queue *q = blkg->q;
 
 		spin_lock_irq(&q->queue_lock);
-		xa_lock(&blkcg->blkg_array);
 		blkg_destroy(blkg);
-		xa_unlock(&blkcg->blkg_array);
 		spin_unlock_irq(&q->queue_lock);
 	}
 }

From patchwork Mon Mar 18 19:48:11 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858475
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 04/14] blk-ioc: Convert to XArray
Date: Mon, 18 Mar 2019 12:48:11 -0700
Message-Id: <20190318194821.3470-5-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

Use xa_insert_irq() to do the allocation before grabbing the other locks. This user appears to be able to race, so use xa_cmpxchg() to handle the race effectively.
Signed-off-by: Matthew Wilcox --- block/blk-ioc.c | 23 +++++++++++++---------- include/linux/iocontext.h | 6 +++--- 2 files changed, 16 insertions(+), 13 deletions(-) diff --git a/block/blk-ioc.c b/block/blk-ioc.c index 5ed59ac6ae58..1db53c371b14 100644 --- a/block/blk-ioc.c +++ b/block/blk-ioc.c @@ -65,7 +65,7 @@ static void ioc_destroy_icq(struct io_cq *icq) lockdep_assert_held(&ioc->lock); - radix_tree_delete(&ioc->icq_tree, icq->q->id); + xa_erase(&ioc->icq_array, icq->q->id); hlist_del_init(&icq->ioc_node); list_del_init(&icq->q_node); @@ -255,7 +255,7 @@ int create_task_io_context(struct task_struct *task, gfp_t gfp_flags, int node) atomic_set(&ioc->nr_tasks, 1); atomic_set(&ioc->active_ref, 1); spin_lock_init(&ioc->lock); - INIT_RADIX_TREE(&ioc->icq_tree, GFP_ATOMIC); + xa_init_flags(&ioc->icq_array, XA_FLAGS_LOCK_IRQ); INIT_HLIST_HEAD(&ioc->icq_list); INIT_WORK(&ioc->release_work, ioc_release_fn); @@ -339,7 +339,7 @@ struct io_cq *ioc_lookup_icq(struct io_context *ioc, struct request_queue *q) if (icq && icq->q == q) goto out; - icq = radix_tree_lookup(&ioc->icq_tree, q->id); + icq = xa_load(&ioc->icq_array, q->id); if (icq && icq->q == q) rcu_assign_pointer(ioc->icq_hint, icq); /* allowed to race */ else @@ -366,7 +366,7 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q, gfp_t gfp_mask) { struct elevator_type *et = q->elevator->type; - struct io_cq *icq; + struct io_cq *icq, *curr; /* allocate stuff */ icq = kmem_cache_alloc_node(et->icq_cache, gfp_mask | __GFP_ZERO, @@ -374,10 +374,14 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q, if (!icq) return NULL; - if (radix_tree_maybe_preload(gfp_mask) < 0) { + if (xa_insert_irq(&ioc->icq_array, q->id, NULL, gfp_mask) == -ENOMEM) { kmem_cache_free(et->icq_cache, icq); return NULL; } + /* + * If we get -EBUSY, we're racing with another caller; we'll see + * who wins the race below. 
+ */ icq->ioc = ioc; icq->q = q; @@ -388,21 +392,20 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q, spin_lock_irq(&q->queue_lock); spin_lock(&ioc->lock); - if (likely(!radix_tree_insert(&ioc->icq_tree, q->id, icq))) { + curr = xa_cmpxchg(&ioc->icq_array, q->id, XA_ZERO_ENTRY, icq, + GFP_ATOMIC); + if (likely(!curr)) { hlist_add_head(&icq->ioc_node, &ioc->icq_list); list_add(&icq->q_node, &q->icq_list); if (et->ops.init_icq) et->ops.init_icq(icq); } else { kmem_cache_free(et->icq_cache, icq); - icq = ioc_lookup_icq(ioc, q); - if (!icq) - printk(KERN_ERR "cfq: icq link failed!\n"); + icq = curr; } spin_unlock(&ioc->lock); spin_unlock_irq(&q->queue_lock); - radix_tree_preload_end(); return icq; } diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h index dba15ca8e60b..e16224f70084 100644 --- a/include/linux/iocontext.h +++ b/include/linux/iocontext.h @@ -2,9 +2,9 @@ #ifndef IOCONTEXT_H #define IOCONTEXT_H -#include #include #include +#include enum { ICQ_EXITED = 1 << 2, @@ -56,7 +56,7 @@ enum { * - ioc->icq_list and icq->ioc_node are protected by ioc lock. * q->icq_list and icq->q_node by q lock. * - * - ioc->icq_tree and ioc->icq_hint are protected by ioc lock, while icq + * - ioc->icq_array and ioc->icq_hint are protected by ioc lock, while icq * itself is protected by q lock. However, both the indexes and icq * itself are also RCU managed and lookup can be performed holding only * the q lock. 
@@ -111,7 +111,7 @@ struct io_context { int nr_batch_requests; /* Number of requests left in the batch */ unsigned long last_waited; /* Time last woken after wait for request */ - struct radix_tree_root icq_tree; + struct xarray icq_array; struct io_cq __rcu *icq_hint; struct hlist_head icq_list;
From patchwork Mon Mar 18 19:48:12 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858483
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 05/14] blk-ioc: Remove ioc's icq_list
Date: Mon, 18 Mar 2019 12:48:12 -0700
Message-Id: <20190318194821.3470-6-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

Use the XArray's iterator instead of this hlist.
Signed-off-by: Matthew Wilcox --- block/blk-ioc.c | 15 ++++++--------- include/linux/iocontext.h | 16 +++++----------- 2 files changed, 11 insertions(+), 20 deletions(-) diff --git a/block/blk-ioc.c b/block/blk-ioc.c index 1db53c371b14..53da5bf9cdc2 100644 --- a/block/blk-ioc.c +++ b/block/blk-ioc.c @@ -66,7 +66,6 @@ static void ioc_destroy_icq(struct io_cq *icq) lockdep_assert_held(&ioc->lock); xa_erase(&ioc->icq_array, icq->q->id); - hlist_del_init(&icq->ioc_node); list_del_init(&icq->q_node); /* @@ -96,6 +95,8 @@ static void ioc_release_fn(struct work_struct *work) struct io_context *ioc = container_of(work, struct io_context, release_work); unsigned long flags; + unsigned long index; + struct io_cq *icq; /* * Exiting icq may call into put_io_context() through elevator @@ -105,9 +106,7 @@ static void ioc_release_fn(struct work_struct *work) */ spin_lock_irqsave_nested(&ioc->lock, flags, 1); - while (!hlist_empty(&ioc->icq_list)) { - struct io_cq *icq = hlist_entry(ioc->icq_list.first, - struct io_cq, ioc_node); + xa_for_each(&ioc->icq_array, index, icq) { struct request_queue *q = icq->q; if (spin_trylock(&q->queue_lock)) { @@ -148,7 +147,7 @@ void put_io_context(struct io_context *ioc) */ if (atomic_long_dec_and_test(&ioc->refcount)) { spin_lock_irqsave(&ioc->lock, flags); - if (!hlist_empty(&ioc->icq_list)) + if (!xa_empty(&ioc->icq_array)) queue_work(system_power_efficient_wq, &ioc->release_work); else @@ -170,6 +169,7 @@ void put_io_context(struct io_context *ioc) void put_io_context_active(struct io_context *ioc) { unsigned long flags; + unsigned long index; struct io_cq *icq; if (!atomic_dec_and_test(&ioc->active_ref)) { @@ -183,7 +183,7 @@ void put_io_context_active(struct io_context *ioc) * explanation on the nested locking annotation. 
*/ spin_lock_irqsave_nested(&ioc->lock, flags, 1); - hlist_for_each_entry(icq, &ioc->icq_list, ioc_node) { + xa_for_each(&ioc->icq_array, index, icq) { if (icq->flags & ICQ_EXITED) continue; @@ -256,7 +256,6 @@ int create_task_io_context(struct task_struct *task, gfp_t gfp_flags, int node) atomic_set(&ioc->active_ref, 1); spin_lock_init(&ioc->lock); xa_init_flags(&ioc->icq_array, XA_FLAGS_LOCK_IRQ); - INIT_HLIST_HEAD(&ioc->icq_list); INIT_WORK(&ioc->release_work, ioc_release_fn); /* @@ -386,7 +385,6 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q, icq->ioc = ioc; icq->q = q; INIT_LIST_HEAD(&icq->q_node); - INIT_HLIST_NODE(&icq->ioc_node); /* lock both q and ioc and try to link @icq */ spin_lock_irq(&q->queue_lock); @@ -395,7 +393,6 @@ struct io_cq *ioc_create_icq(struct io_context *ioc, struct request_queue *q, curr = xa_cmpxchg(&ioc->icq_array, q->id, XA_ZERO_ENTRY, icq, GFP_ATOMIC); if (likely(!curr)) { - hlist_add_head(&icq->ioc_node, &ioc->icq_list); list_add(&icq->q_node, &q->icq_list); if (et->ops.init_icq) et->ops.init_icq(icq); diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h index e16224f70084..bc054ab9b273 100644 --- a/include/linux/iocontext.h +++ b/include/linux/iocontext.h @@ -53,8 +53,7 @@ enum { * * - ioc lock nests inside q lock. * - * - ioc->icq_list and icq->ioc_node are protected by ioc lock. - * q->icq_list and icq->q_node by q lock. + * - q->icq_list and icq->q_node are protected by q lock. * * - ioc->icq_array and ioc->icq_hint are protected by ioc lock, while icq * itself is protected by q lock. However, both the indexes and icq @@ -74,19 +73,15 @@ struct io_cq { struct io_context *ioc; /* - * q_node and ioc_node link io_cq through icq_list of q and ioc - * respectively. Both fields are unused once ioc_exit_icq() is - * called and shared with __rcu_icq_cache and __rcu_head which are - * used for RCU free of io_cq. + * q_node links io_cq through the icq_list of q. 
+ * It is unused once ioc_exit_icq() is called so it is shared with + * __rcu_icq_cache which is used for RCU free of io_cq. */ union { struct list_head q_node; struct kmem_cache *__rcu_icq_cache; }; - union { - struct hlist_node ioc_node; - struct rcu_head __rcu_head; - }; + struct rcu_head __rcu_head; unsigned int flags; }; @@ -113,7 +108,6 @@ struct io_context { struct xarray icq_array; struct io_cq __rcu *icq_hint; - struct hlist_head icq_list; struct work_struct release_work; };

From patchwork Mon Mar 18 19:48:13 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858479
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 06/14] genhd: Convert to XArray
Date: Mon, 18 Mar 2019 12:48:13 -0700
Message-Id: <20190318194821.3470-7-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>

Replace the IDR with the XArray. Includes converting the lookup from being protected by a spinlock to being protected by RCU.
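The locking change is the interesting part: the IDR needed ext_devt_lock held across idr_find() and every use of the result, whereas an XArray lookup is safe under rcu_read_lock() alone, provided the stored objects are themselves RCU-freed (hence the kfree_rcu() in disk_release() in this patch). A sketch of the two halves of that pattern, assuming an allocating XArray like ext_devt (illustrative kernel code, not compiled; store()/lookup() are made-up names):

```c
#include <linux/xarray.h>

/* Store side: xa_alloc() takes the XArray's internal spinlock itself,
 * so no external lock such as the old ext_devt_lock is needed. */
static int store(struct xarray *xa, struct hd_struct *part, u32 *idx)
{
	return xa_alloc(xa, idx, part, XA_LIMIT(0, NR_EXT_DEVT - 1),
			GFP_KERNEL);
}

/* Lookup side: xa_load() may be called under rcu_read_lock(); because
 * the object is freed with kfree_rcu(), it cannot be freed before the
 * read-side critical section ends. */
static struct hd_struct *lookup(struct xarray *xa, unsigned long idx)
{
	struct hd_struct *part;

	rcu_read_lock();
	part = xa_load(xa, idx);
	/* ... take a reference on part before dropping the RCU lock ... */
	rcu_read_unlock();
	return part;
}
```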
Signed-off-by: Matthew Wilcox --- block/genhd.c | 42 ++++++++++++++++-------------------------- 1 file changed, 16 insertions(+), 26 deletions(-) diff --git a/block/genhd.c b/block/genhd.c index 703267865f14..7bb4d15f7574 100644 --- a/block/genhd.c +++ b/block/genhd.c @@ -17,7 +17,7 @@ #include #include #include -#include +#include #include #include #include @@ -30,11 +30,8 @@ struct kobject *block_depr; /* for extended dynamic devt allocation, currently only one major is used */ #define NR_EXT_DEVT (1 << MINORBITS) -/* For extended devt allocation. ext_devt_lock prevents look up - * results from going away underneath its user. - */ -static DEFINE_SPINLOCK(ext_devt_lock); -static DEFINE_IDR(ext_devt_idr); +/* For extended devt allocation */ +static DEFINE_XARRAY_FLAGS(ext_devt, XA_FLAGS_LOCK_BH | XA_FLAGS_ALLOC); static const struct device_type disk_type; @@ -487,7 +484,8 @@ static int blk_mangle_minor(int minor) int blk_alloc_devt(struct hd_struct *part, dev_t *devt) { struct gendisk *disk = part_to_disk(part); - int idx; + u32 idx; + int err; /* in consecutive minor range? */ if (part->partno < disk->minors) { @@ -495,16 +493,10 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt) return 0; } - /* allocate ext devt */ - idr_preload(GFP_KERNEL); - - spin_lock_bh(&ext_devt_lock); - idx = idr_alloc(&ext_devt_idr, part, 0, NR_EXT_DEVT, GFP_NOWAIT); - spin_unlock_bh(&ext_devt_lock); - - idr_preload_end(); - if (idx < 0) - return idx == -ENOSPC ? -EBUSY : idx; + err = xa_alloc(&ext_devt, &idx, part, XA_LIMIT(0, NR_EXT_DEVT - 1), + GFP_KERNEL); + if (err < 0) + return err; *devt = MKDEV(BLOCK_EXT_MAJOR, blk_mangle_minor(idx)); return 0; @@ -516,8 +508,7 @@ int blk_alloc_devt(struct hd_struct *part, dev_t *devt) * * Free @devt which was allocated using blk_alloc_devt(). * - * CONTEXT: - * Might sleep. + * Context: Might sleep. 
*/ void blk_free_devt(dev_t devt) { @@ -525,9 +516,7 @@ void blk_free_devt(dev_t devt) return; if (MAJOR(devt) == BLOCK_EXT_MAJOR) { - spin_lock_bh(&ext_devt_lock); - idr_remove(&ext_devt_idr, blk_mangle_minor(MINOR(devt))); - spin_unlock_bh(&ext_devt_lock); + xa_erase_bh(&ext_devt, blk_mangle_minor(MINOR(devt))); } } @@ -852,13 +841,13 @@ struct gendisk *get_gendisk(dev_t devt, int *partno) } else { struct hd_struct *part; - spin_lock_bh(&ext_devt_lock); - part = idr_find(&ext_devt_idr, blk_mangle_minor(MINOR(devt))); + rcu_read_lock(); + part = xa_load(&ext_devt, blk_mangle_minor(MINOR(devt))); if (part && get_disk_and_module(part_to_disk(part))) { *partno = part->partno; disk = part_to_disk(part); } - spin_unlock_bh(&ext_devt_lock); + rcu_read_unlock(); } if (!disk) @@ -1303,8 +1292,9 @@ static void disk_release(struct device *dev) hd_free_part(&disk->part0); if (disk->queue) blk_put_queue(disk->queue); - kfree(disk); + kfree_rcu(disk, part0.rcu_work.rcu); } + struct class block_class = { .name = "block", };

From patchwork Mon Mar 18 19:48:14 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858477
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 07/14] bsg: Convert bsg_minor_idr to XArray
Date: Mon, 18 Mar 2019 12:48:14 -0700
Message-Id: <20190318194821.3470-8-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>

Signed-off-by: Matthew Wilcox --- block/bsg.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/block/bsg.c b/block/bsg.c index f306853c6b08..e24420a21383 100644 --- a/block/bsg.c +++ b/block/bsg.c @@ -16,9 +16,9 @@ #include #include #include -#include #include #include +#include #include #include @@ -46,7 +46,7 @@ struct bsg_device { #define BSG_MAX_DEVS 32768 static DEFINE_MUTEX(bsg_mutex); -static DEFINE_IDR(bsg_minor_idr); +static DEFINE_XARRAY_ALLOC(bsg_classes); #define BSG_LIST_ARRAY_SIZE 8 static struct hlist_head bsg_device_list[BSG_LIST_ARRAY_SIZE]; @@ -294,7 +294,7 @@ static struct bsg_device *bsg_get_device(struct inode *inode, struct file *file) * find the class device */ mutex_lock(&bsg_mutex); - bcd = idr_find(&bsg_minor_idr, iminor(inode)); + bcd = xa_load(&bsg_classes, iminor(inode)); if (!bcd) { bd = ERR_PTR(-ENODEV); @@ -401,7 +401,7 @@ void bsg_unregister_queue(struct request_queue *q) return; mutex_lock(&bsg_mutex); - idr_remove(&bsg_minor_idr, bcd->minor); + xa_erase(&bsg_classes, bcd->minor); if (q->kobj.sd) sysfs_remove_link(&q->kobj, "bsg"); device_unregister(bcd->class_dev); @@ -429,23 +429,23 @@ int bsg_register_queue(struct request_queue *q, struct device *parent, mutex_lock(&bsg_mutex); - ret = idr_alloc(&bsg_minor_idr, bcd, 0, BSG_MAX_DEVS, GFP_KERNEL); + ret = xa_alloc(&bsg_classes, &bcd->minor, bcd, + XA_LIMIT(0, BSG_MAX_DEVS - 1), GFP_KERNEL); if (ret < 0) { - if (ret == -ENOSPC) { + if (ret == -EBUSY) { printk(KERN_ERR "bsg: too many bsg devices\n"); ret = -EINVAL; } goto unlock; } - bcd->minor = ret; bcd->queue = q; bcd->ops = ops; dev = MKDEV(bsg_major, bcd->minor); class_dev = device_create(bsg_class, parent, dev, NULL, "%s", name); if (IS_ERR(class_dev)) { ret = PTR_ERR(class_dev); - goto idr_remove; + goto remove; } bcd->class_dev = class_dev; @@ -460,8 +460,8 @@ int bsg_register_queue(struct request_queue *q, struct device *parent, unregister_class_dev:
device_unregister(class_dev); -idr_remove: - idr_remove(&bsg_minor_idr, bcd->minor); +remove: + xa_erase(&bsg_classes, bcd->minor); unlock: mutex_unlock(&bsg_mutex); return ret;

From patchwork Mon Mar 18 19:48:15 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858497
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 08/14] brd: Convert to XArray
Date: Mon, 18 Mar 2019 12:48:15 -0700
Message-Id: <20190318194821.3470-9-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>

Convert brd_pages from a radix tree to an XArray. Simpler and smaller code; in particular another user of radix_tree_preload is eliminated. Signed-off-by: Matthew Wilcox --- drivers/block/brd.c | 93 ++++++++++++++------------------------- 1 file changed, 28 insertions(+), 65 deletions(-) diff --git a/drivers/block/brd.c b/drivers/block/brd.c index c18586fccb6f..44ce4891e0db 100644 --- a/drivers/block/brd.c +++ b/drivers/block/brd.c @@ -17,7 +17,7 @@ #include #include #include -#include +#include #include #include #include @@ -28,9 +28,9 @@ #define PAGE_SECTORS (1 << PAGE_SECTORS_SHIFT) /* - * Each block ramdisk device has a radix_tree brd_pages of pages that stores - * the pages containing the block device's contents. A brd page's ->index is - * its offset in PAGE_SIZE units.
This is similar to, but in no way connected + * Each block ramdisk device has an xarray brd_pages that stores the pages + * containing the block device's contents. A brd page's ->index is its + * offset in PAGE_SIZE units. This is similar to, but in no way connected * with, the kernel's pagecache or buffer cache (which sit above our block * device). */ @@ -40,13 +40,7 @@ struct brd_device { struct request_queue *brd_queue; struct gendisk *brd_disk; struct list_head brd_list; - - /* - * Backing store of pages and lock to protect it. This is the contents - * of the block device. - */ - spinlock_t brd_lock; - struct radix_tree_root brd_pages; + struct xarray brd_pages; }; /* @@ -61,17 +55,9 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector) * The page lifetime is protected by the fact that we have opened the * device node -- brd pages will never be deleted under us, so we * don't need any further locking or refcounting. - * - * This is strictly true for the radix-tree nodes as well (ie. we - * don't actually need the rcu_read_lock()), however that is not a - * documented feature of the radix-tree API so it is better to be - * safe here (we don't have total exclusion from radix tree updates - * here, only deletes). 
*/ - rcu_read_lock(); idx = sector >> PAGE_SECTORS_SHIFT; /* sector to page index */ - page = radix_tree_lookup(&brd->brd_pages, idx); - rcu_read_unlock(); + page = xa_load(&brd->brd_pages, idx); BUG_ON(page && page->index != idx); @@ -86,7 +72,7 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector) static struct page *brd_insert_page(struct brd_device *brd, sector_t sector) { pgoff_t idx; - struct page *page; + struct page *curr, *page; gfp_t gfp_flags; page = brd_lookup_page(brd, sector); @@ -107,62 +93,40 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector) if (!page) return NULL; - if (radix_tree_preload(GFP_NOIO)) { - __free_page(page); - return NULL; - } - - spin_lock(&brd->brd_lock); idx = sector >> PAGE_SECTORS_SHIFT; page->index = idx; - if (radix_tree_insert(&brd->brd_pages, idx, page)) { + curr = xa_cmpxchg(&brd->brd_pages, idx, NULL, page, GFP_NOIO); + if (curr) { __free_page(page); - page = radix_tree_lookup(&brd->brd_pages, idx); - BUG_ON(!page); - BUG_ON(page->index != idx); + if (xa_err(curr)) { + page = NULL; + } else { + page = curr; + BUG_ON(!page); + BUG_ON(page->index != idx); + } } - spin_unlock(&brd->brd_lock); - - radix_tree_preload_end(); return page; } /* - * Free all backing store pages and radix tree. This must only be called when + * Free all backing store pages and xarray. This must only be called when * there are no other users of the device. 
*/ -#define FREE_BATCH 16 static void brd_free_pages(struct brd_device *brd) { - unsigned long pos = 0; - struct page *pages[FREE_BATCH]; - int nr_pages; - - do { - int i; - - nr_pages = radix_tree_gang_lookup(&brd->brd_pages, - (void **)pages, pos, FREE_BATCH); - - for (i = 0; i < nr_pages; i++) { - void *ret; - - BUG_ON(pages[i]->index < pos); - pos = pages[i]->index; - ret = radix_tree_delete(&brd->brd_pages, pos); - BUG_ON(!ret || ret != pages[i]); - __free_page(pages[i]); - } - - pos++; + XA_STATE(xas, &brd->brd_pages, 0); + struct page *page; - /* - * This assumes radix_tree_gang_lookup always returns as - * many pages as possible. If the radix-tree code changes, - * so will this have to. - */ - } while (nr_pages == FREE_BATCH); + /* lockdep can't know there are no other users */ + xas_lock(&xas); + xas_for_each(&xas, page, ULONG_MAX) { + BUG_ON(page->index != xas.xa_index); + __free_page(page); + xas_store(&xas, NULL); + } + xas_unlock(&xas); } /* @@ -372,8 +336,7 @@ static struct brd_device *brd_alloc(int i) if (!brd) goto out; brd->brd_number = i; - spin_lock_init(&brd->brd_lock); - INIT_RADIX_TREE(&brd->brd_pages, GFP_ATOMIC); + xa_init(&brd->brd_pages); brd->brd_queue = blk_alloc_queue(GFP_KERNEL); if (!brd->brd_queue)

From patchwork Mon Mar 18 19:48:16 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858499
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 09/14] null_blk: Convert to XArray
Date: Mon, 18 Mar 2019 12:48:16 -0700
Message-Id: <20190318194821.3470-10-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>

By changing the locking we could remove the slightly awkward dance in null_insert_page(), but I'll leave that for someone who's more familiar with the driver. Signed-off-by: Matthew Wilcox --- drivers/block/null_blk.h | 4 +- drivers/block/null_blk_main.c | 97 ++++++++++++++--------------------------- 2 files changed, 40 insertions(+), 61 deletions(-) diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h index 34b22d6523ba..8460eb0b2fe2 100644 --- a/drivers/block/null_blk.h +++ b/drivers/block/null_blk.h @@ -35,8 +35,8 @@ struct nullb_queue { struct nullb_device { struct nullb *nullb; struct config_item item; - struct radix_tree_root data; /* data stored in the disk */ - struct radix_tree_root cache; /* disk cache data */ + struct xarray data; /* data stored in the disk */ + struct xarray cache; /* disk cache data */ unsigned long flags; /* device flags */ unsigned int curr_cache; struct badblocks badblocks; diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c index 417a9f15c116..2a9832eb9ad4 100644 --- a/drivers/block/null_blk_main.c +++ b/drivers/block/null_blk_main.c @@ -8,6 +8,7 @@ #include #include #include +#include #include "null_blk.h" #define PAGE_SECTORS_SHIFT (PAGE_SHIFT - SECTOR_SHIFT) @@ -507,8 +508,8 @@ static struct nullb_device *null_alloc_dev(void) dev = kzalloc(sizeof(*dev), GFP_KERNEL); if (!dev) return NULL; - INIT_RADIX_TREE(&dev->data, GFP_ATOMIC); - INIT_RADIX_TREE(&dev->cache, GFP_ATOMIC); + xa_init_flags(&dev->data, XA_FLAGS_LOCK_IRQ); + xa_init_flags(&dev->cache, XA_FLAGS_LOCK_IRQ); if (badblocks_init(&dev->badblocks, 0)) { kfree(dev); return NULL; @@ -689,18 +690,18 @@ static void null_free_sector(struct nullb *nullb, sector_t sector, unsigned int sector_bit; u64
idx; struct nullb_page *t_page, *ret; - struct radix_tree_root *root; + struct xarray *xa; - root = is_cache ? &nullb->dev->cache : &nullb->dev->data; + xa = is_cache ? &nullb->dev->cache : &nullb->dev->data; idx = sector >> PAGE_SECTORS_SHIFT; sector_bit = (sector & SECTOR_MASK); - t_page = radix_tree_lookup(root, idx); + t_page = xa_load(xa, idx); if (t_page) { __clear_bit(sector_bit, t_page->bitmap); if (null_page_empty(t_page)) { - ret = radix_tree_delete_item(root, idx, t_page); + ret = xa_cmpxchg(xa, idx, t_page, NULL, 0); WARN_ON(ret != t_page); null_free_page(ret); if (is_cache) @@ -709,47 +710,17 @@ static void null_free_sector(struct nullb *nullb, sector_t sector, } } -static struct nullb_page *null_radix_tree_insert(struct nullb *nullb, u64 idx, - struct nullb_page *t_page, bool is_cache) -{ - struct radix_tree_root *root; - - root = is_cache ? &nullb->dev->cache : &nullb->dev->data; - - if (radix_tree_insert(root, idx, t_page)) { - null_free_page(t_page); - t_page = radix_tree_lookup(root, idx); - WARN_ON(!t_page || t_page->page->index != idx); - } else if (is_cache) - nullb->dev->curr_cache += PAGE_SIZE; - - return t_page; -} - static void null_free_device_storage(struct nullb_device *dev, bool is_cache) { - unsigned long pos = 0; - int nr_pages; - struct nullb_page *ret, *t_pages[FREE_BATCH]; - struct radix_tree_root *root; - - root = is_cache ? &dev->cache : &dev->data; - - do { - int i; - - nr_pages = radix_tree_gang_lookup(root, - (void **)t_pages, pos, FREE_BATCH); - - for (i = 0; i < nr_pages; i++) { - pos = t_pages[i]->page->index; - ret = radix_tree_delete_item(root, pos, t_pages[i]); - WARN_ON(ret != t_pages[i]); - null_free_page(ret); - } + struct nullb_page *t_page; + XA_STATE(xas, is_cache ? 
&dev->cache : &dev->data, 0); - pos++; - } while (nr_pages == FREE_BATCH); + xas_lock(&xas); + xas_for_each(&xas, t_page, ULONG_MAX) { + xas_store(&xas, NULL); + null_free_page(t_page); + } + xas_unlock(&xas); if (is_cache) dev->curr_cache = 0; @@ -761,13 +732,13 @@ static struct nullb_page *__null_lookup_page(struct nullb *nullb, unsigned int sector_bit; u64 idx; struct nullb_page *t_page; - struct radix_tree_root *root; + struct xarray *xa; idx = sector >> PAGE_SECTORS_SHIFT; sector_bit = (sector & SECTOR_MASK); - root = is_cache ? &nullb->dev->cache : &nullb->dev->data; - t_page = radix_tree_lookup(root, idx); + xa = is_cache ? &nullb->dev->cache : &nullb->dev->data; + t_page = xa_load(xa, idx); WARN_ON(t_page && t_page->page->index != idx); if (t_page && (for_write || test_bit(sector_bit, t_page->bitmap))) @@ -793,8 +764,9 @@ static struct nullb_page *null_insert_page(struct nullb *nullb, __releases(&nullb->lock) __acquires(&nullb->lock) { - u64 idx; - struct nullb_page *t_page; + struct xarray *xa; + unsigned long idx; + struct nullb_page *exist, *t_page; t_page = null_lookup_page(nullb, sector, true, ignore_cache); if (t_page) @@ -806,14 +778,21 @@ static struct nullb_page *null_insert_page(struct nullb *nullb, if (!t_page) goto out_lock; - if (radix_tree_preload(GFP_NOIO)) + idx = sector >> PAGE_SECTORS_SHIFT; + xa = ignore_cache ? 
&nullb->dev->data : &nullb->dev->cache; + if (xa_insert_irq(xa, idx, NULL, GFP_NOIO) == -ENOMEM) goto out_freepage; spin_lock_irq(&nullb->lock); - idx = sector >> PAGE_SECTORS_SHIFT; t_page->page->index = idx; - t_page = null_radix_tree_insert(nullb, idx, t_page, !ignore_cache); - radix_tree_preload_end(); + exist = xa_cmpxchg(xa, idx, XA_ZERO_ENTRY, t_page, GFP_ATOMIC); + if (exist) { + null_free_page(t_page); + t_page = exist; + } else if (!ignore_cache) + nullb->dev->curr_cache += PAGE_SIZE; + + WARN_ON(t_page->page->index != idx); return t_page; out_freepage: @@ -839,8 +818,7 @@ static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page) if (test_bit(NULLB_PAGE_FREE, c_page->bitmap)) { null_free_page(c_page); if (t_page && null_page_empty(t_page)) { - ret = radix_tree_delete_item(&nullb->dev->data, - idx, t_page); + xa_cmpxchg(&nullb->dev->data, idx, t_page, NULL, 0); null_free_page(t_page); } return 0; @@ -865,7 +843,7 @@ static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page) kunmap_atomic(dst); kunmap_atomic(src); - ret = radix_tree_delete_item(&nullb->dev->cache, idx, c_page); + ret = xa_cmpxchg(&nullb->dev->cache, idx, c_page, NULL, 0); null_free_page(ret); nullb->dev->curr_cache -= PAGE_SIZE; @@ -883,8 +861,9 @@ static int null_make_cache_space(struct nullb *nullb, unsigned long n) nullb->dev->curr_cache + n || nullb->dev->curr_cache == 0) return 0; - nr_pages = radix_tree_gang_lookup(&nullb->dev->cache, - (void **)c_pages, nullb->cache_flush_pos, FREE_BATCH); + nr_pages = xa_extract(&nullb->dev->cache, (void **)c_pages, + nullb->cache_flush_pos, ULONG_MAX, + FREE_BATCH, XA_PRESENT); /* * nullb_flush_cache_page could unlock before using the c_pages. 
To * avoid race, we don't allow page free @@ -1025,7 +1004,7 @@ static int null_handle_flush(struct nullb *nullb) break; } - WARN_ON(!radix_tree_empty(&nullb->dev->cache)); + WARN_ON(!xa_empty(&nullb->dev->cache)); spin_unlock_irq(&nullb->lock); return err; }

From patchwork Mon Mar 18 19:48:17 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858481
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 10/14] loop: Convert loop_index_idr to XArray
Date: Mon, 18 Mar 2019 12:48:17 -0700
Message-Id: <20190318194821.3470-11-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

Signed-off-by: Matthew Wilcox --- drivers/block/loop.c | 88 ++++++++++++++++---------------- 1 file changed, 31 insertions(+), 57 deletions(-) diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 1e6edd568214..d1a0f689788d 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -83,7 +83,7 @@ #include -static DEFINE_IDR(loop_index_idr); +static DEFINE_XARRAY_ALLOC(loop_devs); static DEFINE_MUTEX(loop_ctl_mutex); static int max_part; @@ -1812,28 +1812,23 @@ int loop_register_transfer(struct loop_func_table *funcs) return 0; } -static int unregister_transfer_cb(int id, void *ptr, void *data) -{ - struct loop_device *lo = ptr; - struct loop_func_table *xfer = data; - -
mutex_lock(&loop_ctl_mutex); - if (lo->lo_encryption == xfer) - loop_release_xfer(lo); - mutex_unlock(&loop_ctl_mutex); - return 0; -} - int loop_unregister_transfer(int number) { unsigned int n = number; struct loop_func_table *xfer; + struct loop_device *lo; + unsigned long index; if (n == 0 || n >= MAX_LO_CRYPT || (xfer = xfer_funcs[n]) == NULL) return -EINVAL; xfer_funcs[n] = NULL; - idr_for_each(&loop_index_idr, &unregister_transfer_cb, xfer); + xa_for_each(&loop_devs, index, lo) { + mutex_lock(&loop_ctl_mutex); + if (lo->lo_encryption == xfer) + loop_release_xfer(lo); + mutex_unlock(&loop_ctl_mutex); + } return 0; } @@ -1935,15 +1930,14 @@ static int loop_add(struct loop_device **l, int i) /* allocate id, if @id >= 0, we're requesting that specific id */ if (i >= 0) { - err = idr_alloc(&loop_index_idr, lo, i, i + 1, GFP_KERNEL); - if (err == -ENOSPC) - err = -EEXIST; + err = xa_insert(&loop_devs, i, lo, GFP_KERNEL); } else { - err = idr_alloc(&loop_index_idr, lo, 0, 0, GFP_KERNEL); + err = xa_alloc(&loop_devs, &i, lo, xa_limit_32b, GFP_KERNEL); } + if (err == -EBUSY) + err = -EEXIST; if (err < 0) goto out_free_dev; - i = err; err = -ENOMEM; lo->tag_set.ops = &loop_mq_ops; @@ -1956,7 +1950,7 @@ static int loop_add(struct loop_device **l, int i) err = blk_mq_alloc_tag_set(&lo->tag_set); if (err) - goto out_free_idr; + goto out_free_xa; lo->lo_queue = blk_mq_init_queue(&lo->tag_set); if (IS_ERR(lo->lo_queue)) { @@ -2018,8 +2012,8 @@ static int loop_add(struct loop_device **l, int i) blk_cleanup_queue(lo->lo_queue); out_cleanup_tags: blk_mq_free_tag_set(&lo->tag_set); -out_free_idr: - idr_remove(&loop_index_idr, i); +out_free_xa: + xa_erase(&loop_devs, i); out_free_dev: kfree(lo); out: @@ -2035,41 +2029,28 @@ static void loop_remove(struct loop_device *lo) kfree(lo); } -static int find_free_cb(int id, void *ptr, void *data) -{ - struct loop_device *lo = ptr; - struct loop_device **l = data; - - if (lo->lo_state == Lo_unbound) { - *l = lo; - return 1; - } - return 
0; -} - static int loop_lookup(struct loop_device **l, int i) { struct loop_device *lo; int ret = -ENODEV; if (i < 0) { - int err; + unsigned long index; - err = idr_for_each(&loop_index_idr, &find_free_cb, &lo); - if (err == 1) { - *l = lo; - ret = lo->lo_number; + xa_for_each(&loop_devs, index, lo) { + if (lo->lo_state != Lo_unbound) + continue; + break; } - goto out; + } else { + /* lookup and return a specific i */ + lo = xa_load(&loop_devs, i); } - /* lookup and return a specific i */ - lo = idr_find(&loop_index_idr, i); if (lo) { *l = lo; ret = lo->lo_number; } -out: return ret; } @@ -2126,7 +2107,7 @@ static long loop_control_ioctl(struct file *file, unsigned int cmd, break; } lo->lo_disk->private_data = NULL; - idr_remove(&loop_index_idr, lo->lo_number); + xa_erase(&loop_devs, lo->lo_number); loop_remove(lo); break; case LOOP_CTL_GET_FREE: @@ -2233,23 +2214,16 @@ static int __init loop_init(void) return err; } -static int loop_exit_cb(int id, void *ptr, void *data) -{ - struct loop_device *lo = ptr; - - loop_remove(lo); - return 0; -} - static void __exit loop_exit(void) { - unsigned long range; - - range = max_loop ? max_loop << part_shift : 1UL << MINORBITS; + struct loop_device *lo; + unsigned long range, index; - idr_for_each(&loop_index_idr, &loop_exit_cb, NULL); - idr_destroy(&loop_index_idr); + xa_for_each(&loop_devs, index, lo) + loop_remove(lo); + xa_destroy(&loop_devs); + range = max_loop ? 
max_loop << part_shift : 1UL << MINORBITS; blk_unregister_region(MKDEV(LOOP_MAJOR, 0), range); unregister_blkdev(LOOP_MAJOR, "loop");

From patchwork Mon Mar 18 19:48:18 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858487
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 11/14] nbd: Convert nbd_index_idr to XArray
Date: Mon, 18 Mar 2019 12:48:18 -0700
Message-Id: <20190318194821.3470-12-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

Signed-off-by: Matthew Wilcox --- drivers/block/nbd.c | 145 ++++++++++++++++++-------------------- 1 file changed, 59 insertions(+), 86 deletions(-) diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 90ba9f4c03f3..6e64884973dd 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -44,8 +44,8 @@ #include #include -static DEFINE_IDR(nbd_index_idr); -static DEFINE_MUTEX(nbd_index_mutex); +static DEFINE_XARRAY_ALLOC(nbd_devs); +static DEFINE_MUTEX(nbd_global_mutex); static int nbd_total_devices = 0; struct nbd_sock { @@ -223,10 +223,9 @@ static void nbd_dev_remove(struct nbd_device *nbd) static void nbd_put(struct nbd_device *nbd) { - if (refcount_dec_and_mutex_lock(&nbd->refs, - &nbd_index_mutex)) { - idr_remove(&nbd_index_idr, nbd->index); - mutex_unlock(&nbd_index_mutex); + if (refcount_dec_and_mutex_lock(&nbd->refs,
&nbd_global_mutex)) { + xa_erase(&nbd_devs, nbd->index); + mutex_unlock(&nbd_global_mutex); nbd_dev_remove(nbd); } } @@ -1331,7 +1330,7 @@ static int nbd_open(struct block_device *bdev, fmode_t mode) struct nbd_device *nbd; int ret = 0; - mutex_lock(&nbd_index_mutex); + mutex_lock(&nbd_global_mutex); nbd = bdev->bd_disk->private_data; if (!nbd) { ret = -ENXIO; @@ -1363,7 +1362,7 @@ static int nbd_open(struct block_device *bdev, fmode_t mode) bdev->bd_invalidated = 1; } out: - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); return ret; } @@ -1551,15 +1550,13 @@ static int nbd_dev_add(int index) goto out_free_nbd; if (index >= 0) { - err = idr_alloc(&nbd_index_idr, nbd, index, index + 1, - GFP_KERNEL); - if (err == -ENOSPC) - err = -EEXIST; + err = xa_insert(&nbd_devs, index, nbd, GFP_KERNEL); } else { - err = idr_alloc(&nbd_index_idr, nbd, 0, 0, GFP_KERNEL); - if (err >= 0) - index = err; + err = xa_alloc(&nbd_devs, &index, nbd, xa_limit_32b, + GFP_KERNEL); } + if (err == -EBUSY) + err = -EEXIST; if (err < 0) goto out_free_disk; @@ -1576,7 +1573,7 @@ static int nbd_dev_add(int index) err = blk_mq_alloc_tag_set(&nbd->tag_set); if (err) - goto out_free_idr; + goto out_free_dev; q = blk_mq_init_queue(&nbd->tag_set); if (IS_ERR(q)) { @@ -1613,8 +1610,8 @@ static int nbd_dev_add(int index) out_free_tags: blk_mq_free_tag_set(&nbd->tag_set); -out_free_idr: - idr_remove(&nbd_index_idr, index); +out_free_dev: + xa_erase(&nbd_devs, index); out_free_disk: put_disk(disk); out_free_nbd: @@ -1623,18 +1620,6 @@ static int nbd_dev_add(int index) return err; } -static int find_free_cb(int id, void *ptr, void *data) -{ - struct nbd_device *nbd = ptr; - struct nbd_device **found = data; - - if (!refcount_read(&nbd->config_refs)) { - *found = nbd; - return 1; - } - return 0; -} - /* Netlink interface. 
*/ static const struct nla_policy nbd_attr_policy[NBD_ATTR_MAX + 1] = { [NBD_ATTR_INDEX] = { .type = NLA_U32 }, @@ -1683,46 +1668,51 @@ static int nbd_genl_connect(struct sk_buff *skb, struct genl_info *info) return -EINVAL; } again: - mutex_lock(&nbd_index_mutex); + mutex_lock(&nbd_global_mutex); if (index == -1) { - ret = idr_for_each(&nbd_index_idr, &find_free_cb, &nbd); - if (ret == 0) { + unsigned long i; + xa_for_each(&nbd_devs, i, nbd) { + if (!refcount_read(&nbd->config_refs)) + break; + } + + if (!nbd) { int new_index; new_index = nbd_dev_add(-1); if (new_index < 0) { - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); printk(KERN_ERR "nbd: failed to add new device\n"); return new_index; } - nbd = idr_find(&nbd_index_idr, new_index); + nbd = xa_load(&nbd_devs, new_index); } } else { - nbd = idr_find(&nbd_index_idr, index); + nbd = xa_load(&nbd_devs, index); if (!nbd) { ret = nbd_dev_add(index); if (ret < 0) { - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); printk(KERN_ERR "nbd: failed to add new device\n"); return ret; } - nbd = idr_find(&nbd_index_idr, index); + nbd = xa_load(&nbd_devs, index); } } if (!nbd) { + mutex_unlock(&nbd_global_mutex); printk(KERN_ERR "nbd: couldn't find device at index %d\n", index); - mutex_unlock(&nbd_index_mutex); return -EINVAL; } if (!refcount_inc_not_zero(&nbd->refs)) { - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); if (index == -1) goto again; printk(KERN_ERR "nbd: device at index %d is going down\n", index); return -EINVAL; } - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); mutex_lock(&nbd->config_lock); if (refcount_read(&nbd->config_refs)) { @@ -1850,21 +1840,21 @@ static int nbd_genl_disconnect(struct sk_buff *skb, struct genl_info *info) return -EINVAL; } index = nla_get_u32(info->attrs[NBD_ATTR_INDEX]); - mutex_lock(&nbd_index_mutex); - nbd = idr_find(&nbd_index_idr, index); + mutex_lock(&nbd_global_mutex); + nbd = xa_load(&nbd_devs, 
index); if (!nbd) { - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); printk(KERN_ERR "nbd: couldn't find device at index %d\n", index); return -EINVAL; } if (!refcount_inc_not_zero(&nbd->refs)) { - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); printk(KERN_ERR "nbd: device at index %d is going down\n", index); return -EINVAL; } - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); if (!refcount_inc_not_zero(&nbd->config_refs)) { nbd_put(nbd); return 0; @@ -1891,21 +1881,21 @@ static int nbd_genl_reconfigure(struct sk_buff *skb, struct genl_info *info) return -EINVAL; } index = nla_get_u32(info->attrs[NBD_ATTR_INDEX]); - mutex_lock(&nbd_index_mutex); - nbd = idr_find(&nbd_index_idr, index); + mutex_lock(&nbd_global_mutex); + nbd = xa_load(&nbd_devs, index); if (!nbd) { - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); printk(KERN_ERR "nbd: couldn't find a device at index %d\n", index); return -EINVAL; } if (!refcount_inc_not_zero(&nbd->refs)) { - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); printk(KERN_ERR "nbd: device at index %d is going down\n", index); return -EINVAL; } - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); if (!refcount_inc_not_zero(&nbd->config_refs)) { dev_err(nbd_to_dev(nbd), @@ -2044,7 +2034,7 @@ static int populate_nbd_status(struct nbd_device *nbd, struct sk_buff *reply) /* This is a little racey, but for status it's ok. The * reason we don't take a ref here is because we can't * take a ref in the index == -1 case as we would need - * to put under the nbd_index_mutex, which could + * to put under the nbd_global_mutex, which could * deadlock if we are configured to remove ourselves * once we're disconnected. 
*/ @@ -2064,16 +2054,11 @@ static int populate_nbd_status(struct nbd_device *nbd, struct sk_buff *reply) return 0; } -static int status_cb(int id, void *ptr, void *data) -{ - struct nbd_device *nbd = ptr; - return populate_nbd_status(nbd, (struct sk_buff *)data); -} - static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info) { struct nlattr *dev_list; struct sk_buff *reply; + struct nbd_device *nbd; void *reply_head; size_t msg_size; int index = -1; @@ -2082,7 +2067,7 @@ static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info) if (info->attrs[NBD_ATTR_INDEX]) index = nla_get_u32(info->attrs[NBD_ATTR_INDEX]); - mutex_lock(&nbd_index_mutex); + mutex_lock(&nbd_global_mutex); msg_size = nla_total_size(nla_attr_size(sizeof(u32)) + nla_attr_size(sizeof(u8))); @@ -2100,14 +2085,17 @@ static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info) dev_list = nla_nest_start(reply, NBD_ATTR_DEVICE_LIST); if (index == -1) { - ret = idr_for_each(&nbd_index_idr, &status_cb, reply); - if (ret) { - nlmsg_free(reply); - goto out; + unsigned long i; + + xa_for_each(&nbd_devs, i, nbd) { + ret = populate_nbd_status(nbd, reply); + if (ret) { + nlmsg_free(reply); + goto out; + } } } else { - struct nbd_device *nbd; - nbd = idr_find(&nbd_index_idr, index); + nbd = xa_load(&nbd_devs, index); if (nbd) { ret = populate_nbd_status(nbd, reply); if (ret) { @@ -2120,7 +2108,7 @@ static int nbd_genl_status(struct sk_buff *skb, struct genl_info *info) genlmsg_end(reply, reply_head); ret = genlmsg_reply(reply, info); out: - mutex_unlock(&nbd_index_mutex); + mutex_unlock(&nbd_global_mutex); return ret; } @@ -2229,42 +2217,27 @@ static int __init nbd_init(void) } nbd_dbg_init(); - mutex_lock(&nbd_index_mutex); + mutex_lock(&nbd_global_mutex); for (i = 0; i < nbds_max; i++) nbd_dev_add(i); - mutex_unlock(&nbd_index_mutex); - return 0; -} - -static int nbd_exit_cb(int id, void *ptr, void *data) -{ - struct list_head *list = (struct list_head *)data; - struct 
nbd_device *nbd = ptr; - - list_add_tail(&nbd->list, list); + mutex_unlock(&nbd_global_mutex); return 0; } static void __exit nbd_cleanup(void) { struct nbd_device *nbd; - LIST_HEAD(del_list); + unsigned long index; nbd_dbg_close(); - mutex_lock(&nbd_index_mutex); - idr_for_each(&nbd_index_idr, &nbd_exit_cb, &del_list); - mutex_unlock(&nbd_index_mutex); - - while (!list_empty(&del_list)) { - nbd = list_first_entry(&del_list, struct nbd_device, list); - list_del_init(&nbd->list); + xa_for_each(&nbd_devs, index, nbd) { if (refcount_read(&nbd->refs) != 1) printk(KERN_ERR "nbd: possibly leaking a device\n"); nbd_put(nbd); } - idr_destroy(&nbd_index_idr); + xa_destroy(&nbd_devs); genl_unregister_family(&nbd_genl_family); destroy_workqueue(recv_workqueue); unregister_blkdev(NBD_MAJOR, "nbd");

From patchwork Mon Mar 18 19:48:19 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858485
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 12/14] zram: Convert zram_index_idr to XArray
Date: Mon, 18 Mar 2019 12:48:19 -0700
Message-Id: <20190318194821.3470-13-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

Signed-off-by: Matthew Wilcox --- drivers/block/zram/zram_drv.c | 40 +++++++++++++---------------- 1 file changed, 15 insertions(+), 25 deletions(-) diff --git a/drivers/block/zram/zram_drv.c
b/drivers/block/zram/zram_drv.c index e7a5f1d1c314..f7e53a681637 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -36,9 +36,7 @@ #include "zram_drv.h" -static DEFINE_IDR(zram_index_idr); -/* idr index must be protected */ -static DEFINE_MUTEX(zram_index_mutex); +static DEFINE_XARRAY_ALLOC(zram_devs); static int zram_major; static const char *default_compressor = "lzo-rle"; @@ -1901,10 +1899,9 @@ static int zram_add(void) if (!zram) return -ENOMEM; - ret = idr_alloc(&zram_index_idr, zram, 0, 0, GFP_KERNEL); + ret = xa_alloc(&zram_devs, &device_id, zram, xa_limit_32b, GFP_KERNEL); if (ret < 0) goto out_free_dev; - device_id = ret; init_rwsem(&zram->init_lock); #ifdef CONFIG_ZRAM_WRITEBACK @@ -1915,7 +1912,7 @@ static int zram_add(void) pr_err("Error allocating disk queue for device %d\n", device_id); ret = -ENOMEM; - goto out_free_idr; + goto out_remove_dev; } blk_queue_make_request(queue, zram_make_request); @@ -1979,8 +1976,8 @@ static int zram_add(void) out_free_queue: blk_cleanup_queue(queue); -out_free_idr: - idr_remove(&zram_index_idr, device_id); +out_remove_dev: + xa_erase(&zram_devs, device_id); out_free_dev: kfree(zram); return ret; @@ -2034,9 +2031,7 @@ static ssize_t hot_add_show(struct class *class, { int ret; - mutex_lock(&zram_index_mutex); ret = zram_add(); - mutex_unlock(&zram_index_mutex); if (ret < 0) return ret; @@ -2059,18 +2054,15 @@ static ssize_t hot_remove_store(struct class *class, if (dev_id < 0) return -EINVAL; - mutex_lock(&zram_index_mutex); - - zram = idr_find(&zram_index_idr, dev_id); + zram = xa_load(&zram_devs, dev_id); if (zram) { ret = zram_remove(zram); if (!ret) - idr_remove(&zram_index_idr, dev_id); + xa_erase(&zram_devs, dev_id); } else { ret = -ENODEV; } - mutex_unlock(&zram_index_mutex); return ret ? 
ret : count; } static CLASS_ATTR_WO(hot_remove); @@ -2088,18 +2080,18 @@ static struct class zram_control_class = { .class_groups = zram_control_class_groups, }; -static int zram_remove_cb(int id, void *ptr, void *data) -{ - zram_remove(ptr); - return 0; -} - static void destroy_devices(void) { + struct zram *zram; + unsigned long index; + class_unregister(&zram_control_class); - idr_for_each(&zram_index_idr, &zram_remove_cb, NULL); + xa_for_each(&zram_devs, index, zram) { + zram_remove(zram); + } + xa_destroy(&zram_devs); + zram_debugfs_destroy(); - idr_destroy(&zram_index_idr); unregister_blkdev(zram_major, "zram"); cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE); } @@ -2130,9 +2122,7 @@ static int __init zram_init(void) } while (num_devices != 0) { - mutex_lock(&zram_index_mutex); ret = zram_add(); - mutex_unlock(&zram_index_mutex); if (ret < 0) goto out_error; num_devices--;

From patchwork Mon Mar 18 19:48:20 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858493
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 13/14] drbd: Convert drbd devices to XArray
Date: Mon, 18 Mar 2019 12:48:20 -0700
Message-Id: <20190318194821.3470-14-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>

Signed-off-by: Matthew Wilcox --- drivers/block/drbd/drbd_debugfs.c | 16 ++++----- drivers/block/drbd/drbd_int.h | 6 ++--
drivers/block/drbd/drbd_main.c | 56 ++++++++++++++---------------- drivers/block/drbd/drbd_nl.c | 42 +++++++++++----------- drivers/block/drbd/drbd_proc.c | 8 ++--- drivers/block/drbd/drbd_receiver.c | 4 +-- drivers/block/drbd/drbd_state.c | 8 ++--- drivers/block/drbd/drbd_worker.c | 8 ++--- 8 files changed, 72 insertions(+), 76 deletions(-) diff --git a/drivers/block/drbd/drbd_debugfs.c b/drivers/block/drbd/drbd_debugfs.c index f13b48ff5f43..a1336bbf5083 100644 --- a/drivers/block/drbd/drbd_debugfs.c +++ b/drivers/block/drbd/drbd_debugfs.c @@ -128,11 +128,11 @@ static void seq_print_minor_vnr_req(struct seq_file *m, struct drbd_request *req static void seq_print_resource_pending_meta_io(struct seq_file *m, struct drbd_resource *resource, unsigned long now) { struct drbd_device *device; - unsigned int i; + unsigned long i; seq_puts(m, "minor\tvnr\tstart\tsubmit\tintent\n"); rcu_read_lock(); - idr_for_each_entry(&resource->devices, device, i) { + xa_for_each(&resource->devices, i, device) { struct drbd_md_io tmp; /* In theory this is racy, * in the sense that there could have been a @@ -156,11 +156,11 @@ static void seq_print_resource_pending_meta_io(struct seq_file *m, struct drbd_r static void seq_print_waiting_for_AL(struct seq_file *m, struct drbd_resource *resource, unsigned long now) { struct drbd_device *device; - unsigned int i; + unsigned long i; seq_puts(m, "minor\tvnr\tage\t#waiting\n"); rcu_read_lock(); - idr_for_each_entry(&resource->devices, device, i) { + xa_for_each(&resource->devices, i, device) { unsigned long jif; struct drbd_request *req; int n = atomic_read(&device->ap_actlog_cnt); @@ -216,11 +216,11 @@ static void seq_print_device_bitmap_io(struct seq_file *m, struct drbd_device *d static void seq_print_resource_pending_bitmap_io(struct seq_file *m, struct drbd_resource *resource, unsigned long now) { struct drbd_device *device; - unsigned int i; + unsigned long i; seq_puts(m, "minor\tvnr\trw\tage\t#in-flight\n"); rcu_read_lock(); - 
idr_for_each_entry(&resource->devices, device, i) { + xa_for_each(&resource->devices, i, device) { seq_print_device_bitmap_io(m, device, now); } rcu_read_unlock(); @@ -288,10 +288,10 @@ static void seq_print_resource_pending_peer_requests(struct seq_file *m, struct drbd_resource *resource, unsigned long now) { struct drbd_device *device; - unsigned int i; + unsigned long i; rcu_read_lock(); - idr_for_each_entry(&resource->devices, device, i) { + xa_for_each(&resource->devices, i, device) { seq_print_device_peer_requests(m, device, now); } rcu_read_unlock(); diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h index 000a2f4c0e92..4e68b5f8265b 100644 --- a/drivers/block/drbd/drbd_int.h +++ b/drivers/block/drbd/drbd_int.h @@ -189,7 +189,7 @@ drbd_insert_fault(struct drbd_device *device, unsigned int type) { #define div_floor(A, B) ((A)/(B)) extern struct ratelimit_state drbd_ratelimit_state; -extern struct idr drbd_devices; /* RCU, updates: genl_lock() */ +extern struct xarray drbd_devices; /* RCU, updates: genl_lock() */ extern struct list_head drbd_resources; /* RCU, updates: genl_lock() */ extern const char *cmdname(enum drbd_packet cmd); @@ -670,7 +670,7 @@ struct drbd_resource { struct dentry *debugfs_res_in_flight_summary; #endif struct kref kref; - struct idr devices; /* volume number to device mapping */ + struct xarray devices; /* volume number to device mapping */ struct list_head connections; struct list_head resources; struct res_opts res_opts; @@ -1022,7 +1022,7 @@ struct drbd_config_context { static inline struct drbd_device *minor_to_device(unsigned int minor) { - return (struct drbd_device *)idr_find(&drbd_devices, minor); + return xa_load(&drbd_devices, minor); } static inline struct drbd_peer_device *first_peer_device(struct drbd_device *device) diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c index 714eb64fabfd..2f3b18e0223b 100644 --- a/drivers/block/drbd/drbd_main.c +++ 
b/drivers/block/drbd/drbd_main.c @@ -116,7 +116,7 @@ module_param_string(usermode_helper, drbd_usermode_helper, sizeof(drbd_usermode_ /* in 2.6.x, our device mapping and config info contains our virtual gendisks * as member "struct gendisk *vdisk;" */ -struct idr drbd_devices; +DEFINE_XARRAY(drbd_devices); struct list_head drbd_resources; struct mutex resources_mutex; @@ -2364,7 +2364,7 @@ void drbd_destroy_resource(struct kref *kref) struct drbd_resource *resource = container_of(kref, struct drbd_resource, kref); - idr_destroy(&resource->devices); + xa_destroy(&resource->devices); free_cpumask_var(resource->cpu_mask); kfree(resource->name); memset(resource, 0xf2, sizeof(*resource)); @@ -2386,7 +2386,7 @@ void drbd_free_resource(struct drbd_resource *resource) static void drbd_cleanup(void) { - unsigned int i; + unsigned long i; struct drbd_device *device; struct drbd_resource *resource, *tmp; @@ -2406,8 +2406,9 @@ static void drbd_cleanup(void) drbd_genl_unregister(); - idr_for_each_entry(&drbd_devices, device, i) + xa_for_each(&drbd_devices, i, device) drbd_delete_device(device); + xa_destroy(&drbd_devices); /* not _rcu since, no other updater anymore. 
Genl already unregistered */ for_each_resource_safe(resource, tmp, &drbd_resources) { @@ -2420,8 +2421,6 @@ static void drbd_cleanup(void) drbd_destroy_mempools(); unregister_blkdev(DRBD_MAJOR, "drbd"); - idr_destroy(&drbd_devices); - pr_info("module cleanup done.\n"); } @@ -2659,7 +2658,7 @@ struct drbd_resource *drbd_create_resource(const char *name) if (!zalloc_cpumask_var(&resource->cpu_mask, GFP_KERNEL)) goto fail_free_name; kref_init(&resource->kref); - idr_init(&resource->devices); + xa_init(&resource->devices); INIT_LIST_HEAD(&resource->connections); resource->write_ordering = WO_BDEV_FLUSH; list_add_tail_rcu(&resource->resources, &drbd_resources); @@ -2791,7 +2790,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig struct drbd_peer_device *peer_device, *tmp_peer_device; struct gendisk *disk; struct request_queue *q; - int id; + int ret; int vnr = adm_ctx->volume; enum drbd_ret_code err = ERR_NOMEM; @@ -2854,19 +2853,19 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig device->read_requests = RB_ROOT; device->write_requests = RB_ROOT; - id = idr_alloc(&drbd_devices, device, minor, minor + 1, GFP_KERNEL); - if (id < 0) { - if (id == -ENOSPC) + ret = xa_insert(&drbd_devices, minor, device, GFP_KERNEL); + if (ret < 0) { + if (ret == -EBUSY) err = ERR_MINOR_OR_VOLUME_EXISTS; - goto out_no_minor_idr; + goto out_no_minor; } kref_get(&device->kref); - id = idr_alloc(&resource->devices, device, vnr, vnr + 1, GFP_KERNEL); - if (id < 0) { - if (id == -ENOSPC) + ret = xa_insert(&resource->devices, vnr, device, GFP_KERNEL); + if (ret < 0) { + if (ret == -EBUSY) err = ERR_MINOR_OR_VOLUME_EXISTS; - goto out_idr_remove_minor; + goto out_remove_minor; } kref_get(&device->kref); @@ -2875,18 +2874,18 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig for_each_connection(connection, resource) { peer_device = kzalloc(sizeof(struct drbd_peer_device), GFP_KERNEL); if 
(!peer_device) - goto out_idr_remove_from_resource; + goto out_remove_from_resource; peer_device->connection = connection; peer_device->device = device; list_add(&peer_device->peer_devices, &device->peer_devices); kref_get(&device->kref); - id = idr_alloc(&connection->peer_devices, peer_device, vnr, vnr + 1, GFP_KERNEL); - if (id < 0) { - if (id == -ENOSPC) + ret = idr_alloc(&connection->peer_devices, peer_device, vnr, vnr + 1, GFP_KERNEL); + if (ret < 0) { + if (ret == -ENOSPC) err = ERR_INVALID_REQUEST; - goto out_idr_remove_from_resource; + goto out_remove_from_resource; } kref_get(&connection->kref); INIT_WORK(&peer_device->send_acks_work, drbd_send_acks_wf); @@ -2913,7 +2912,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig out_idr_remove_vol: idr_remove(&connection->peer_devices, vnr); -out_idr_remove_from_resource: +out_remove_from_resource: for_each_connection(connection, resource) { peer_device = idr_remove(&connection->peer_devices, vnr); if (peer_device) @@ -2923,11 +2922,11 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig list_del(&peer_device->peer_devices); kfree(peer_device); } - idr_remove(&resource->devices, vnr); -out_idr_remove_minor: - idr_remove(&drbd_devices, minor); + xa_erase(&resource->devices, vnr); +out_remove_minor: + xa_erase(&drbd_devices, minor); synchronize_rcu(); -out_no_minor_idr: +out_no_minor: drbd_bm_cleanup(device); out_no_bitmap: __free_page(device->md_io.page); @@ -2955,9 +2954,9 @@ void drbd_delete_device(struct drbd_device *device) idr_remove(&connection->peer_devices, device->vnr); kref_put(&device->kref, drbd_destroy_device); } - idr_remove(&resource->devices, device->vnr); + xa_erase(&resource->devices, device->vnr); kref_put(&device->kref, drbd_destroy_device); - idr_remove(&drbd_devices, device_to_minor(device)); + xa_erase(&drbd_devices, device_to_minor(device)); kref_put(&device->kref, drbd_destroy_device); del_gendisk(device->vdisk); 
synchronize_rcu(); @@ -2990,7 +2989,6 @@ static int __init drbd_init(void) init_waitqueue_head(&drbd_pp_wait); drbd_proc = NULL; /* play safe for drbd_cleanup */ - idr_init(&drbd_devices); mutex_init(&resources_mutex); INIT_LIST_HEAD(&drbd_resources); diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c index f2471172a961..67c1a343a595 100644 --- a/drivers/block/drbd/drbd_nl.c +++ b/drivers/block/drbd/drbd_nl.c @@ -3436,11 +3436,12 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb) struct nlattr *resource_filter; struct drbd_resource *resource; struct drbd_device *uninitialized_var(device); - int minor, err, retcode; + int err, retcode; + unsigned long minor; struct drbd_genlmsghdr *dh; struct device_info device_info; struct device_statistics device_statistics; - struct idr *idr_to_search; + struct xarray *devices; resource = (struct drbd_resource *)cb->args[0]; if (!cb->args[0] && !cb->args[1]) { @@ -3456,18 +3457,13 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb) rcu_read_lock(); minor = cb->args[1]; - idr_to_search = resource ? &resource->devices : &drbd_devices; - device = idr_get_next(idr_to_search, &minor); + devices = resource ? 
&resource->devices : &drbd_devices; + device = xa_find_after(devices, &minor, ULONG_MAX, XA_PRESENT); if (!device) { err = 0; goto out; } - idr_for_each_entry_continue(idr_to_search, device, minor) { - retcode = NO_ERROR; - goto put_result; /* only one iteration */ - } - err = 0; - goto out; /* no more devices */ + retcode = NO_ERROR; put_result: dh = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, @@ -3688,9 +3684,10 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb) struct drbd_resource *resource; struct drbd_device *uninitialized_var(device); struct drbd_peer_device *peer_device = NULL; - int minor, err, retcode; + int err, retcode; + unsigned long minor; struct drbd_genlmsghdr *dh; - struct idr *idr_to_search; + struct xarray *devices; resource = (struct drbd_resource *)cb->args[0]; if (!cb->args[0] && !cb->args[1]) { @@ -3706,13 +3703,12 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb) rcu_read_lock(); minor = cb->args[1]; - idr_to_search = resource ? &resource->devices : &drbd_devices; - device = idr_find(idr_to_search, minor); + devices = resource ? &resource->devices : &drbd_devices; + device = xa_load(devices, minor); if (!device) { next_device: - minor++; cb->args[2] = 0; - device = idr_get_next(idr_to_search, &minor); + device = xa_find_after(devices, &minor, ULONG_MAX, XA_PRESENT); if (!device) { err = 0; goto out; @@ -3941,12 +3937,12 @@ static int get_one_status(struct sk_buff *skb, struct netlink_callback *cb) struct drbd_resource *pos = (struct drbd_resource *)cb->args[0]; struct drbd_resource *resource = NULL; struct drbd_resource *tmp; - unsigned volume = cb->args[1]; + unsigned long volume = cb->args[1]; /* Open coded, deferred, iteration: * for_each_resource_safe(resource, tmp, &drbd_resources) { * connection = "first connection of resource or undefined"; - * idr_for_each_entry(&resource->devices, device, i) { + * xa_for_each(&resource->devices, i, device) { * ... 
* } * } @@ -3981,7 +3977,8 @@ static int get_one_status(struct sk_buff *skb, struct netlink_callback *cb) } if (resource) { next_resource: - device = idr_get_next(&resource->devices, &volume); + device = xa_find(&resource->devices, &volume, + ULONG_MAX, XA_PRESENT); if (!device) { /* No more volumes to dump on this resource. * Advance resource iterator. */ @@ -4489,7 +4486,7 @@ static int adm_del_resource(struct drbd_resource *resource) if (connection->cstate > C_STANDALONE) return ERR_NET_CONFIGURED; } - if (!idr_is_empty(&resource->devices)) + if (!xa_empty(&resource->devices)) return ERR_RES_IN_USE; /* The state engine has stopped the sender thread, so we don't @@ -4519,6 +4516,7 @@ int drbd_adm_down(struct sk_buff *skb, struct genl_info *info) struct drbd_device *device; int retcode; /* enum drbd_ret_code rsp. enum drbd_state_rv */ unsigned i; + unsigned long index; retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE); if (!adm_ctx.reply_skb) @@ -4548,7 +4546,7 @@ int drbd_adm_down(struct sk_buff *skb, struct genl_info *info) } /* detach */ - idr_for_each_entry(&resource->devices, device, i) { + xa_for_each(&resource->devices, index, device) { retcode = adm_detach(device, 0); if (retcode < SS_SUCCESS || retcode > NO_ERROR) { drbd_msg_put_info(adm_ctx.reply_skb, "failed to detach"); @@ -4557,7 +4555,7 @@ int drbd_adm_down(struct sk_buff *skb, struct genl_info *info) } /* delete volumes */ - idr_for_each_entry(&resource->devices, device, i) { + xa_for_each(&resource->devices, index, device) { retcode = adm_del_minor(device); if (retcode != NO_ERROR) { /* "can not happen" */ diff --git a/drivers/block/drbd/drbd_proc.c b/drivers/block/drbd/drbd_proc.c index 74ef29247bb5..77a0ad118e51 100644 --- a/drivers/block/drbd/drbd_proc.c +++ b/drivers/block/drbd/drbd_proc.c @@ -226,7 +226,7 @@ static void drbd_syncer_progress(struct drbd_device *device, struct seq_file *se int drbd_seq_show(struct seq_file *seq, void *v) { - int i, prev_i = -1; + unsigned 
long i, prev_i = -1; const char *sn; struct drbd_device *device; struct net_conf *nc; @@ -263,7 +263,7 @@ int drbd_seq_show(struct seq_file *seq, void *v) */ rcu_read_lock(); - idr_for_each_entry(&drbd_devices, device, i) { + xa_for_each(&drbd_devices, i, device) { if (prev_i != i - 1) seq_putc(seq, '\n'); prev_i = i; @@ -274,7 +274,7 @@ int drbd_seq_show(struct seq_file *seq, void *v) if (state.conn == C_STANDALONE && state.disk == D_DISKLESS && state.role == R_SECONDARY) { - seq_printf(seq, "%2d: cs:Unconfigured\n", i); + seq_printf(seq, "%2ld: cs:Unconfigured\n", i); } else { /* reset device->congestion_reason */ bdi_rw_congested(device->rq_queue->backing_dev_info); @@ -282,7 +282,7 @@ int drbd_seq_show(struct seq_file *seq, void *v) nc = rcu_dereference(first_peer_device(device)->connection->net_conf); wp = nc ? nc->wire_protocol - DRBD_PROT_A + 'A' : ' '; seq_printf(seq, - "%2d: cs:%s ro:%s/%s ds:%s/%s %c %c%c%c%c%c%c\n" + "%2ld: cs:%s ro:%s/%s ds:%s/%s %c %c%c%c%c%c%c\n" " ns:%u nr:%u dw:%u dr:%u al:%u bm:%u " "lo:%d pe:%d ua:%d ap:%d ep:%d wo:%c", i, sn, diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c index c7ad88d91a09..f87bd8a034b2 100644 --- a/drivers/block/drbd/drbd_receiver.c +++ b/drivers/block/drbd/drbd_receiver.c @@ -1460,7 +1460,7 @@ void drbd_bump_write_ordering(struct drbd_resource *resource, struct drbd_backin { struct drbd_device *device; enum write_ordering_e pwo; - int vnr; + unsigned long vnr; static char *write_ordering_str[] = { [WO_NONE] = "none", [WO_DRAIN_IO] = "drain", @@ -1471,7 +1471,7 @@ void drbd_bump_write_ordering(struct drbd_resource *resource, struct drbd_backin if (wo != WO_BDEV_FLUSH) wo = min(pwo, wo); rcu_read_lock(); - idr_for_each_entry(&resource->devices, device, vnr) { + xa_for_each(&resource->devices, vnr, device) { if (get_ldev(device)) { wo = max_allowed_wo(device->ldev, wo); if (device->ldev == bdev) diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c 
index 2b4c0db5d867..c3587054fa1e 100644 --- a/drivers/block/drbd/drbd_state.c +++ b/drivers/block/drbd/drbd_state.c @@ -56,12 +56,12 @@ static void count_objects(struct drbd_resource *resource, { struct drbd_device *device; struct drbd_connection *connection; - int vnr; + unsigned long vnr; *n_devices = 0; *n_connections = 0; - idr_for_each_entry(&resource->devices, device, vnr) + xa_for_each(&resource->devices, vnr, device) (*n_devices)++; for_each_connection(connection, resource) (*n_connections)++; @@ -99,7 +99,7 @@ struct drbd_state_change *remember_old_state(struct drbd_resource *resource, gfp unsigned int n_devices; struct drbd_connection *connection; unsigned int n_connections; - int vnr; + unsigned long vnr; struct drbd_device_state_change *device_state_change; struct drbd_peer_device_state_change *peer_device_state_change; @@ -133,7 +133,7 @@ struct drbd_state_change *remember_old_state(struct drbd_resource *resource, gfp device_state_change = state_change->devices; peer_device_state_change = state_change->peer_devices; - idr_for_each_entry(&resource->devices, device, vnr) { + xa_for_each(&resource->devices, vnr, device) { kref_get(&device->kref); device_state_change->device = device; device_state_change->disk_state[OLD] = device->state.disk; diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c index 268ef0c5d4ab..614aa963700c 100644 --- a/drivers/block/drbd/drbd_worker.c +++ b/drivers/block/drbd/drbd_worker.c @@ -1577,10 +1577,10 @@ static bool drbd_pause_after(struct drbd_device *device) { bool changed = false; struct drbd_device *odev; - int i; + unsigned long i; rcu_read_lock(); - idr_for_each_entry(&drbd_devices, odev, i) { + xa_for_each(&drbd_devices, i, odev) { if (odev->state.conn == C_STANDALONE && odev->state.disk == D_DISKLESS) continue; if (!_drbd_may_sync_now(odev) && @@ -1603,10 +1603,10 @@ static bool drbd_resume_next(struct drbd_device *device) { bool changed = false; struct drbd_device *odev; - int i; + unsigned 
long i; rcu_read_lock(); - idr_for_each_entry(&drbd_devices, odev, i) { + xa_for_each(&drbd_devices, i, odev) { if (odev->state.conn == C_STANDALONE && odev->state.disk == D_DISKLESS) continue; if (odev->state.aftr_isp) {
From patchwork Mon Mar 18 19:48:21 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10858489
From: Matthew Wilcox
To: linux-block@vger.kernel.org
Cc: Matthew Wilcox
Subject: [PATCH 14/14] drbd: Convert peer devices to XArray
Date: Mon, 18 Mar 2019 12:48:21 -0700
Message-Id: <20190318194821.3470-15-willy@infradead.org>
In-Reply-To: <20190318194821.3470-1-willy@infradead.org>
References: <20190318194821.3470-1-willy@infradead.org>
X-Mailing-List: linux-block@vger.kernel.org
Signed-off-by: Matthew Wilcox --- drivers/block/drbd/drbd_int.h | 4 +- drivers/block/drbd/drbd_main.c | 25 +++++++------ drivers/block/drbd/drbd_nl.c | 35 +++++++++--------- drivers/block/drbd/drbd_receiver.c | 29 ++++++++------- drivers/block/drbd/drbd_state.c | 59 +++++++++++++++--------------- drivers/block/drbd/drbd_worker.c | 12 +++--- 6 files changed, 84 insertions(+), 80 deletions(-) diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h index 4e68b5f8265b..375418164a2c 100644 --- a/drivers/block/drbd/drbd_int.h +++ b/drivers/block/drbd/drbd_int.h @@ -705,7 +705,7 @@ struct drbd_connection { struct dentry *debugfs_conn_oldest_requests; #endif struct kref kref; - struct idr peer_devices; /* volume number to peer device mapping */ + struct
xarray peer_devices; /* volume number to peer device mapping */ enum drbd_conns cstate; /* Only C_STANDALONE to C_WF_REPORT_PARAMS */ struct mutex cstate_mutex; /* Protects graceful disconnects */ unsigned int connect_cnt; /* Inc each time a connection is established */ @@ -1033,7 +1033,7 @@ static inline struct drbd_peer_device *first_peer_device(struct drbd_device *dev static inline struct drbd_peer_device * conn_peer_device(struct drbd_connection *connection, int volume_number) { - return idr_find(&connection->peer_devices, volume_number); + return xa_load(&connection->peer_devices, volume_number); } #define for_each_resource(resource, _resources) \ diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c index 2f3b18e0223b..c292fbdddfec 100644 --- a/drivers/block/drbd/drbd_main.c +++ b/drivers/block/drbd/drbd_main.c @@ -489,10 +489,12 @@ void _drbd_thread_stop(struct drbd_thread *thi, int restart, int wait) int conn_lowest_minor(struct drbd_connection *connection) { struct drbd_peer_device *peer_device; - int vnr = 0, minor = -1; + unsigned long vnr = 0; + int minor = -1; rcu_read_lock(); - peer_device = idr_get_next(&connection->peer_devices, &vnr); + peer_device = xa_find(&connection->peer_devices, &vnr, + ULONG_MAX, XA_PRESENT); if (peer_device) minor = device_to_minor(peer_device->device); rcu_read_unlock(); @@ -2712,7 +2714,7 @@ struct drbd_connection *conn_create(const char *name, struct res_opts *res_opts) connection->cstate = C_STANDALONE; mutex_init(&connection->cstate_mutex); init_waitqueue_head(&connection->ping_wait); - idr_init(&connection->peer_devices); + xa_init(&connection->peer_devices); drbd_init_workqueue(&connection->sender_work); mutex_init(&connection->data.mutex); @@ -2757,7 +2759,7 @@ void drbd_destroy_connection(struct kref *kref) drbd_err(connection, "epoch_size:%d\n", atomic_read(&connection->current_epoch->epoch_size)); kfree(connection->current_epoch); - idr_destroy(&connection->peer_devices); + 
xa_destroy(&connection->peer_devices); drbd_free_socket(&connection->meta); drbd_free_socket(&connection->data); @@ -2881,9 +2883,10 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig list_add(&peer_device->peer_devices, &device->peer_devices); kref_get(&device->kref); - ret = idr_alloc(&connection->peer_devices, peer_device, vnr, vnr + 1, GFP_KERNEL); + ret = xa_insert(&connection->peer_devices, vnr, peer_device, + GFP_KERNEL); if (ret < 0) { - if (ret == -ENOSPC) + if (ret == -EBUSY) err = ERR_INVALID_REQUEST; goto out_remove_from_resource; } @@ -2911,10 +2914,10 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig return NO_ERROR; out_idr_remove_vol: - idr_remove(&connection->peer_devices, vnr); + xa_erase(&connection->peer_devices, vnr); out_remove_from_resource: for_each_connection(connection, resource) { - peer_device = idr_remove(&connection->peer_devices, vnr); + peer_device = xa_erase(&connection->peer_devices, vnr); if (peer_device) kref_put(&connection->kref, drbd_destroy_connection); } @@ -2951,7 +2954,7 @@ void drbd_delete_device(struct drbd_device *device) drbd_debugfs_peer_device_cleanup(peer_device); drbd_debugfs_device_cleanup(device); for_each_connection(connection, resource) { - idr_remove(&connection->peer_devices, device->vnr); + xa_erase(&connection->peer_devices, device->vnr); kref_put(&device->kref, drbd_destroy_device); } xa_erase(&resource->devices, device->vnr); @@ -3066,10 +3069,10 @@ void drbd_free_sock(struct drbd_connection *connection) void conn_md_sync(struct drbd_connection *connection) { struct drbd_peer_device *peer_device; - int vnr; + unsigned long vnr; rcu_read_lock(); - idr_for_each_entry(&connection->peer_devices, peer_device, vnr) { + xa_for_each(&connection->peer_devices, vnr, peer_device) { struct drbd_device *device = peer_device->device; kref_get(&device->kref); diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c index 
67c1a343a595..9f599c85b3d2 100644 --- a/drivers/block/drbd/drbd_nl.c +++ b/drivers/block/drbd/drbd_nl.c @@ -457,10 +457,10 @@ static enum drbd_fencing_p highest_fencing_policy(struct drbd_connection *connec { enum drbd_fencing_p fp = FP_NOT_AVAIL; struct drbd_peer_device *peer_device; - int vnr; + unsigned long vnr; rcu_read_lock(); - idr_for_each_entry(&connection->peer_devices, peer_device, vnr) { + xa_for_each(&connection->peer_devices, vnr, peer_device) { struct drbd_device *device = peer_device->device; if (get_ldev_if_state(device, D_CONSISTENT)) { struct disk_conf *disk_conf = @@ -2271,10 +2271,10 @@ static bool conn_resync_running(struct drbd_connection *connection) { struct drbd_peer_device *peer_device; bool rv = false; - int vnr; + unsigned long vnr; rcu_read_lock(); - idr_for_each_entry(&connection->peer_devices, peer_device, vnr) { + xa_for_each(&connection->peer_devices, vnr, peer_device) { struct drbd_device *device = peer_device->device; if (device->state.conn == C_SYNC_SOURCE || device->state.conn == C_SYNC_TARGET || @@ -2293,10 +2293,10 @@ static bool conn_ov_running(struct drbd_connection *connection) { struct drbd_peer_device *peer_device; bool rv = false; - int vnr; + unsigned long vnr; rcu_read_lock(); - idr_for_each_entry(&connection->peer_devices, peer_device, vnr) { + xa_for_each(&connection->peer_devices, vnr, peer_device) { struct drbd_device *device = peer_device->device; if (device->state.conn == C_VERIFY_S || device->state.conn == C_VERIFY_T) { @@ -2313,7 +2313,7 @@ static enum drbd_ret_code _check_net_options(struct drbd_connection *connection, struct net_conf *old_net_conf, struct net_conf *new_net_conf) { struct drbd_peer_device *peer_device; - int i; + unsigned long i; if (old_net_conf && connection->cstate == C_WF_REPORT_PARAMS && connection->agreed_pro_version < 100) { if (new_net_conf->wire_protocol != old_net_conf->wire_protocol) @@ -2335,7 +2335,7 @@ _check_net_options(struct drbd_connection *connection, struct net_conf 
*old_net_ (new_net_conf->wire_protocol != DRBD_PROT_C)) return ERR_NOT_PROTO_C; - idr_for_each_entry(&connection->peer_devices, peer_device, i) { + xa_for_each(&connection->peer_devices, i, peer_device) { struct drbd_device *device = peer_device->device; if (get_ldev(device)) { enum drbd_fencing_p fp = rcu_dereference(device->ldev->disk_conf)->fencing; @@ -2358,14 +2358,14 @@ check_net_options(struct drbd_connection *connection, struct net_conf *new_net_c { enum drbd_ret_code rv; struct drbd_peer_device *peer_device; - int i; + unsigned long i; rcu_read_lock(); rv = _check_net_options(connection, rcu_dereference(connection->net_conf), new_net_conf); rcu_read_unlock(); /* connection->peer_devices protected by genl_lock() here */ - idr_for_each_entry(&connection->peer_devices, peer_device, i) { + xa_for_each(&connection->peer_devices, i, peer_device) { struct drbd_device *device = peer_device->device; if (!device->bitmap) { if (drbd_bm_init(device)) @@ -2535,9 +2535,9 @@ int drbd_adm_net_opts(struct sk_buff *skb, struct genl_info *info) if (connection->cstate >= C_WF_REPORT_PARAMS) { struct drbd_peer_device *peer_device; - int vnr; + unsigned long vnr; - idr_for_each_entry(&connection->peer_devices, peer_device, vnr) + xa_for_each(&connection->peer_devices, vnr, peer_device) drbd_send_sync_param(peer_device); } @@ -2589,7 +2589,7 @@ int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info) struct drbd_resource *resource; struct drbd_connection *connection; enum drbd_ret_code retcode; - int i; + unsigned long i; int err; retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE); @@ -2682,7 +2682,7 @@ int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info) connection->peer_addr_len = nla_len(adm_ctx.peer_addr); memcpy(&connection->peer_addr, nla_data(adm_ctx.peer_addr), connection->peer_addr_len); - idr_for_each_entry(&connection->peer_devices, peer_device, i) { + xa_for_each(&connection->peer_devices, i, peer_device) { peer_devices++; } 
@@ -2690,7 +2690,7 @@ int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info) flags = (peer_devices--) ? NOTIFY_CONTINUES : 0; mutex_lock(&notification_mutex); notify_connection_state(NULL, 0, connection, &connection_info, NOTIFY_CREATE | flags); - idr_for_each_entry(&connection->peer_devices, peer_device, i) { + xa_for_each(&connection->peer_devices, i, peer_device) { struct peer_device_info peer_device_info; peer_device_to_info(&peer_device_info, peer_device); @@ -2701,7 +2701,7 @@ int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info) mutex_unlock(&adm_ctx.resource->conf_update); rcu_read_lock(); - idr_for_each_entry(&connection->peer_devices, peer_device, i) { + xa_for_each(&connection->peer_devices, i, peer_device) { struct drbd_device *device = peer_device->device; device->send_cnt = 0; device->recv_cnt = 0; @@ -4515,7 +4515,6 @@ int drbd_adm_down(struct sk_buff *skb, struct genl_info *info) struct drbd_connection *connection; struct drbd_device *device; int retcode; /* enum drbd_ret_code rsp.
enum drbd_state_rv */ - unsigned i; unsigned long index; retcode = drbd_adm_prepare(&adm_ctx, skb, info, DRBD_ADM_NEED_RESOURCE); @@ -4530,7 +4529,7 @@ int drbd_adm_down(struct sk_buff *skb, struct genl_info *info) for_each_connection(connection, resource) { struct drbd_peer_device *peer_device; - idr_for_each_entry(&connection->peer_devices, peer_device, i) { + xa_for_each(&connection->peer_devices, index, peer_device) { retcode = drbd_set_role(peer_device->device, R_SECONDARY, 0); if (retcode < SS_SUCCESS) { drbd_msg_put_info(adm_ctx.reply_skb, "failed to demote"); diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c index f87bd8a034b2..88b9de71833e 100644 --- a/drivers/block/drbd/drbd_receiver.c +++ b/drivers/block/drbd/drbd_receiver.c @@ -232,10 +232,10 @@ static void drbd_reclaim_net_peer_reqs(struct drbd_device *device) static void conn_reclaim_net_peer_reqs(struct drbd_connection *connection) { struct drbd_peer_device *peer_device; - int vnr; + unsigned long vnr; rcu_read_lock(); - idr_for_each_entry(&connection->peer_devices, peer_device, vnr) { + xa_for_each(&connection->peer_devices, vnr, peer_device) { struct drbd_device *device = peer_device->device; if (!atomic_read(&device->pp_in_use_by_net)) continue; @@ -935,7 +935,8 @@ static int conn_connect(struct drbd_connection *connection) struct drbd_socket sock, msock; struct drbd_peer_device *peer_device; struct net_conf *nc; - int vnr, timeout, h; + int timeout, h; + unsigned long vnr; bool discard_my_data, ok; enum drbd_state_rv rv; struct accept_wait_data ad = { @@ -1098,7 +1099,7 @@ static int conn_connect(struct drbd_connection *connection) * drbd_set_role() is finished, and any incoming drbd_set_role * will see the STATE_SENT flag, and wait for it to be cleared. 
 	 */
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+	xa_for_each(&connection->peer_devices, vnr, peer_device)
 		mutex_lock(peer_device->device->state_mutex);
 
 	/* avoid a race with conn_request_state( C_DISCONNECTING ) */
@@ -1106,11 +1107,11 @@ static int conn_connect(struct drbd_connection *connection)
 	set_bit(STATE_SENT, &connection->flags);
 	spin_unlock_irq(&connection->resource->req_lock);
 
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+	xa_for_each(&connection->peer_devices, vnr, peer_device)
 		mutex_unlock(peer_device->device->state_mutex);
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		kref_get(&device->kref);
 		rcu_read_unlock();
@@ -1323,14 +1324,14 @@ static void drbd_flush(struct drbd_connection *connection)
 	if (connection->resource->write_ordering >= WO_BDEV_FLUSH) {
 		struct drbd_peer_device *peer_device;
 		struct issue_flush_context ctx;
-		int vnr;
+		unsigned long vnr;
 
 		atomic_set(&ctx.pending, 1);
 		ctx.error = 0;
 		init_completion(&ctx.done);
 
 		rcu_read_lock();
-		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		xa_for_each(&connection->peer_devices, vnr, peer_device) {
 			struct drbd_device *device = peer_device->device;
 
 			if (!get_ldev(device))
@@ -1763,10 +1764,10 @@ static void drbd_remove_epoch_entry_interval(struct drbd_device *device,
 static void conn_wait_active_ee_empty(struct drbd_connection *connection)
 {
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 
 		kref_get(&device->kref);
@@ -5166,7 +5167,7 @@ static void conn_disconnect(struct drbd_connection *connection)
 {
 	struct drbd_peer_device *peer_device;
 	enum drbd_conns oc;
-	int vnr;
+	unsigned long vnr;
 
 	if (connection->cstate == C_STANDALONE)
 		return;
@@ -5187,7 +5188,7 @@ static void conn_disconnect(struct drbd_connection *connection)
 	drbd_free_sock(connection);
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		kref_get(&device->kref);
 		rcu_read_unlock();
@@ -5881,12 +5882,12 @@ static int got_BarrierAck(struct drbd_connection *connection, struct packet_info
 {
 	struct p_barrier_ack *p = pi->data;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	tl_release(connection, p->barrier, be32_to_cpu(p->set_size));
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 
 		if (device->state.conn == C_AHEAD &&
diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
index c3587054fa1e..f6a93cda2028 100644
--- a/drivers/block/drbd/drbd_state.c
+++ b/drivers/block/drbd/drbd_state.c
@@ -307,10 +307,10 @@ bool conn_all_vols_unconf(struct drbd_connection *connection)
 {
 	struct drbd_peer_device *peer_device;
 	bool rv = true;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		if (device->state.disk != D_DISKLESS ||
 		    device->state.conn != C_STANDALONE ||
@@ -348,10 +348,10 @@ enum drbd_role conn_highest_role(struct drbd_connection *connection)
 {
 	enum drbd_role role = R_SECONDARY;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		role = max_role(role, device->state.role);
 	}
@@ -364,10 +364,10 @@ enum drbd_role conn_highest_peer(struct drbd_connection *connection)
 {
 	enum drbd_role peer = R_UNKNOWN;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		peer = max_role(peer, device->state.peer);
 	}
@@ -380,10 +380,10 @@ enum drbd_disk_state conn_highest_disk(struct drbd_connection *connection)
 {
 	enum drbd_disk_state disk_state = D_DISKLESS;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		disk_state = max_t(enum drbd_disk_state, disk_state, device->state.disk);
 	}
@@ -396,10 +396,10 @@ enum drbd_disk_state conn_lowest_disk(struct drbd_connection *connection)
 {
 	enum drbd_disk_state disk_state = D_MASK;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		disk_state = min_t(enum drbd_disk_state, disk_state, device->state.disk);
 	}
@@ -412,10 +412,10 @@ enum drbd_disk_state conn_highest_pdsk(struct drbd_connection *connection)
 {
 	enum drbd_disk_state disk_state = D_DISKLESS;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		disk_state = max_t(enum drbd_disk_state, disk_state, device->state.pdsk);
 	}
@@ -428,10 +428,10 @@ enum drbd_conns conn_lowest_conn(struct drbd_connection *connection)
 {
 	enum drbd_conns conn = C_MASK;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		conn = min_t(enum drbd_conns, conn, device->state.conn);
 	}
@@ -443,11 +443,11 @@ enum drbd_conns conn_lowest_conn(struct drbd_connection *connection)
 static bool no_peer_wf_report_params(struct drbd_connection *connection)
 {
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 	bool rv = true;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+	xa_for_each(&connection->peer_devices, vnr, peer_device)
 		if (peer_device->device->state.conn == C_WF_REPORT_PARAMS) {
 			rv = false;
 			break;
@@ -460,10 +460,10 @@ static bool no_peer_wf_report_params(struct drbd_connection *connection)
 static void wake_up_all_devices(struct drbd_connection *connection)
 {
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+	xa_for_each(&connection->peer_devices, vnr, peer_device)
 		wake_up(&peer_device->device->state_wait);
 
 	rcu_read_unlock();
@@ -1767,10 +1767,10 @@ static void after_state_ch(struct drbd_device *device, union drbd_state os,
 		if (resource->susp_fen && conn_lowest_conn(connection) >= C_CONNECTED) {
 			/* case2: The connection was established again: */
 			struct drbd_peer_device *peer_device;
-			int vnr;
+			unsigned long vnr;
 
 			rcu_read_lock();
-			idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+			xa_for_each(&connection->peer_devices, vnr, peer_device)
 				clear_bit(NEW_CUR_UUID, &peer_device->device->flags);
 			rcu_read_unlock();
 
@@ -2054,7 +2054,7 @@ static int w_after_conn_state_ch(struct drbd_work *w, int unused)
 	enum drbd_conns oc = acscw->oc;
 	union drbd_state ns_max = acscw->ns_max;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	broadcast_state_change(acscw->state_change);
 	forget_state_change(acscw->state_change);
@@ -2068,7 +2068,7 @@ static int w_after_conn_state_ch(struct drbd_work *w, int unused)
 		struct net_conf *old_conf;
 
 		mutex_lock(&notification_mutex);
-		idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
+		xa_for_each(&connection->peer_devices, vnr, peer_device)
 			notify_peer_device_state(NULL, 0, peer_device, NULL,
 						 NOTIFY_DESTROY | NOTIFY_CONTINUES);
 		notify_connection_state(NULL, 0, connection, NULL, NOTIFY_DESTROY);
@@ -2090,7 +2090,7 @@ static int w_after_conn_state_ch(struct drbd_work *w, int unused)
 	/* case1: The outdate peer handler is successful: */
 	if (ns_max.pdsk <= D_OUTDATED) {
 		rcu_read_lock();
-		idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+		xa_for_each(&connection->peer_devices, vnr, peer_device) {
 			struct drbd_device *device = peer_device->device;
 			if (test_bit(NEW_CUR_UUID, &device->flags)) {
 				drbd_uuid_new_current(device);
@@ -2117,7 +2117,8 @@ static void conn_old_common_state(struct drbd_connection *connection, union drbd
 {
 	enum chg_state_flags flags = ~0;
 	struct drbd_peer_device *peer_device;
-	int vnr, first_vol = 1;
+	unsigned long vnr;
+	bool first_vol = true;
 	union drbd_dev_state os, cs = {
 		{ .role = R_SECONDARY,
 		  .peer = R_UNKNOWN,
@@ -2127,7 +2128,7 @@ static void conn_old_common_state(struct drbd_connection *connection, union drbd
 		} };
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		os = device->state;
 
@@ -2166,10 +2167,10 @@ conn_is_valid_transition(struct drbd_connection *connection, union drbd_state ma
 	enum drbd_state_rv rv = SS_SUCCESS;
 	union drbd_state ns, os;
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		os = drbd_read_state(device);
 		ns = sanitize_state(device, os, apply_mask_val(os, mask, val), NULL);
@@ -2216,7 +2217,7 @@ conn_set_state(struct drbd_connection *connection, union drbd_state mask, union
 	} };
 	struct drbd_peer_device *peer_device;
 	enum drbd_state_rv rv;
-	int vnr, number_of_volumes = 0;
+	unsigned long vnr, number_of_volumes = 0;
 
 	if (mask.conn == C_MASK) {
 		/* remember last connect time so request_timer_fn() won't
@@ -2229,7 +2230,7 @@ conn_set_state(struct drbd_connection *connection, union drbd_state mask, union
 	}
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		number_of_volumes++;
 		os = drbd_read_state(device);
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index 614aa963700c..ece947ba341d 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -1013,8 +1013,8 @@ int drbd_resync_finished(struct drbd_device *device)
 		fp = rcu_dereference(device->ldev->disk_conf)->fencing;
 		if (fp != FP_DONT_CARE) {
 			struct drbd_peer_device *peer_device;
-			int vnr;
-			idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+			unsigned long vnr;
+			xa_for_each(&connection->peer_devices, vnr, peer_device) {
 				struct drbd_device *device = peer_device->device;
 				disk_state = min_t(enum drbd_disk_state, disk_state, device->state.disk);
 				pdsk_state = min_t(enum drbd_disk_state, pdsk_state, device->state.pdsk);
@@ -2062,10 +2062,10 @@ static unsigned long get_work_bits(unsigned long *flags)
 static void do_unqueued_work(struct drbd_connection *connection)
 {
 	struct drbd_peer_device *peer_device;
-	int vnr;
+	unsigned long vnr;
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		unsigned long todo = get_work_bits(&device->flags);
 		if (!todo)
@@ -2179,7 +2179,7 @@ int drbd_worker(struct drbd_thread *thi)
 	struct drbd_work *w = NULL;
 	struct drbd_peer_device *peer_device;
 	LIST_HEAD(work_list);
-	int vnr;
+	unsigned long vnr;
 
 	while (get_t_state(thi) == RUNNING) {
 		drbd_thread_current_set_cpu(thi);
@@ -2232,7 +2232,7 @@ int drbd_worker(struct drbd_thread *thi)
 	} while (!list_empty(&work_list) || test_bit(DEVICE_WORK_PENDING, &connection->flags));
 
 	rcu_read_lock();
-	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
+	xa_for_each(&connection->peer_devices, vnr, peer_device) {
 		struct drbd_device *device = peer_device->device;
 		D_ASSERT(device, device->state.disk == D_DISKLESS && device->state.conn == C_STANDALONE);
 		kref_get(&device->kref);