From patchwork Mon Feb 27 17:36:06 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13153970
Date: Mon, 27 Feb 2023 09:36:06 -0800
In-Reply-To: <20230227173632.3292573-1-surenb@google.com>
References: <20230227173632.3292573-1-surenb@google.com>
Message-ID: <20230227173632.3292573-8-surenb@google.com>
X-Mailer: git-send-email 2.39.2.722.g9855ee24e9-goog
Subject: [PATCH v4 07/33] maple_tree: Add RCU lock checking to rcu callback functions
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com,
 vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
 dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
 peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
 mingo@redhat.com, will@kernel.org, luto@kernel.org, songliubraving@fb.com,
 peterx@redhat.com, david@redhat.com, dhowells@redhat.com, hughd@google.com,
 bigeasy@linutronix.de, kent.overstreet@linux.dev,
 punit.agrawal@bytedance.com, lstoakes@gmail.com, peterjung1337@gmail.com,
 rientjes@google.com, chriscli@google.com, axelrasmussen@google.com,
 joelaf@google.com, minchan@google.com, rppt@kernel.org, jannh@google.com,
 shakeelb@google.com, tatashin@google.com, edumazet@google.com,
 gthelen@google.com, gurua@google.com, arjunroy@google.com,
 soheil@google.com, leewalsh@google.com, posk@google.com,
 michalechner92@googlemail.com, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com,
 "Liam R. Howlett", Suren Baghdasaryan

From: "Liam R. Howlett"

Dereferencing RCU objects within the RCU callback without the RCU check
causes lockdep to complain. Fix the RCU dereferencing by checking it
against the RCU callback lock (rcu_callback_map) so that lockdep can
verify the operation is safe.

Also stop creating a new lock to use for dereferencing during
destruction of the tree or subtree. Instead, pass through a pointer to
the tree whose lock is held, so the RCU dereference can be checked
against that lock. It also does not make sense to use the maple state
in the freeing scenario, as that tree walk is a special case in which
the tree no longer has the normal encodings and parent pointers.

Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Reported-by: Suren Baghdasaryan
Signed-off-by: Liam R. Howlett
---
 lib/maple_tree.c | 188 ++++++++++++++++++++++++-----------------
 1 file changed, 96 insertions(+), 92 deletions(-)

diff --git a/lib/maple_tree.c b/lib/maple_tree.c
index 8ad2d1669fad..2be86368237d 100644
--- a/lib/maple_tree.c
+++ b/lib/maple_tree.c
@@ -824,6 +824,11 @@ static inline void *mt_slot(const struct maple_tree *mt,
 	return rcu_dereference_check(slots[offset], mt_locked(mt));
 }
 
+static inline void *mt_slot_locked(struct maple_tree *mt, void __rcu **slots,
+				   unsigned char offset)
+{
+	return rcu_dereference_protected(slots[offset], mt_locked(mt));
+}
 /*
  * mas_slot_locked() - Get the slot value when holding the maple tree lock.
  * @mas: The maple state
@@ -835,7 +840,7 @@ static inline void *mt_slot(const struct maple_tree *mt,
 static inline void *mas_slot_locked(struct ma_state *mas, void __rcu **slots,
 				    unsigned char offset)
 {
-	return rcu_dereference_protected(slots[offset], mt_locked(mas->tree));
+	return mt_slot_locked(mas->tree, slots, offset);
 }
 
 /*
@@ -907,34 +912,35 @@ static inline void ma_set_meta(struct maple_node *mn, enum maple_type mt,
 }
 
 /*
- * mas_clear_meta() - clear the metadata information of a node, if it exists
- * @mas: The maple state
+ * mt_clear_meta() - clear the metadata information of a node, if it exists
+ * @mt: The maple tree
  * @mn: The maple node
- * @mt: The maple node type
+ * @type: The maple node type
  * @offset: The offset of the highest sub-gap in this node.
  * @end: The end of the data in this node.
  */
-static inline void mas_clear_meta(struct ma_state *mas, struct maple_node *mn,
-				  enum maple_type mt)
+static inline void mt_clear_meta(struct maple_tree *mt, struct maple_node *mn,
+				 enum maple_type type)
 {
 	struct maple_metadata *meta;
 	unsigned long *pivots;
 	void __rcu **slots;
 	void *next;
 
-	switch (mt) {
+	switch (type) {
 	case maple_range_64:
 		pivots = mn->mr64.pivot;
 		if (unlikely(pivots[MAPLE_RANGE64_SLOTS - 2])) {
 			slots = mn->mr64.slot;
-			next = mas_slot_locked(mas, slots,
-					       MAPLE_RANGE64_SLOTS - 1);
-			if (unlikely((mte_to_node(next) && mte_node_type(next))))
-				return; /* The last slot is a node, no metadata */
+			next = mt_slot_locked(mt, slots,
+					      MAPLE_RANGE64_SLOTS - 1);
+			if (unlikely((mte_to_node(next) &&
+				      mte_node_type(next))))
+				return; /* no metadata, could be node */
 		}
 		fallthrough;
 	case maple_arange_64:
-		meta = ma_meta(mn, mt);
+		meta = ma_meta(mn, type);
 		break;
 	default:
 		return;
@@ -5497,7 +5503,7 @@ static inline int mas_rev_alloc(struct ma_state *mas, unsigned long min,
 }
 
 /*
- * mas_dead_leaves() - Mark all leaves of a node as dead.
+ * mte_dead_leaves() - Mark all leaves of a node as dead.
  * @mas: The maple state
  * @slots: Pointer to the slot array
  * @type: The maple node type
@@ -5507,16 +5513,16 @@ static inline int mas_rev_alloc(struct ma_state *mas, unsigned long min,
  * Return: The number of leaves marked as dead.
  */
 static inline
-unsigned char mas_dead_leaves(struct ma_state *mas, void __rcu **slots,
-			      enum maple_type mt)
+unsigned char mte_dead_leaves(struct maple_enode *enode, struct maple_tree *mt,
+			      void __rcu **slots)
 {
 	struct maple_node *node;
 	enum maple_type type;
 	void *entry;
 	int offset;
 
-	for (offset = 0; offset < mt_slots[mt]; offset++) {
-		entry = mas_slot_locked(mas, slots, offset);
+	for (offset = 0; offset < mt_slot_count(enode); offset++) {
+		entry = mt_slot(mt, slots, offset);
 		type = mte_node_type(entry);
 		node = mte_to_node(entry);
 		/* Use both node and type to catch LE & BE metadata */
@@ -5531,162 +5537,160 @@ unsigned char mas_dead_leaves(struct ma_state *mas, void __rcu **slots,
 	return offset;
 }
 
-static void __rcu **mas_dead_walk(struct ma_state *mas, unsigned char offset)
+/**
+ * mte_dead_walk() - Walk down a dead tree to just before the leaves
+ * @enode: The maple encoded node
+ * @offset: The starting offset
+ *
+ * Note: This can only be used from the RCU callback context.
+ */
+static void __rcu **mte_dead_walk(struct maple_enode **enode, unsigned char offset)
 {
-	struct maple_node *next;
+	struct maple_node *node, *next;
 	void __rcu **slots = NULL;
 
-	next = mas_mn(mas);
+	next = mte_to_node(*enode);
 	do {
-		mas->node = mt_mk_node(next, next->type);
-		slots = ma_slots(next, next->type);
-		next = mas_slot_locked(mas, slots, offset);
+		*enode = ma_enode_ptr(next);
+		node = mte_to_node(*enode);
+		slots = ma_slots(node, node->type);
+		next = rcu_dereference_protected(slots[offset],
+					lock_is_held(&rcu_callback_map));
 		offset = 0;
 	} while (!ma_is_leaf(next->type));
 
 	return slots;
 }
 
+/**
+ * mt_free_walk() - Walk & free a tree in the RCU callback context
+ * @head: The RCU head that's within the node.
+ *
+ * Note: This can only be used from the RCU callback context.
+ */
 static void mt_free_walk(struct rcu_head *head)
 {
 	void __rcu **slots;
 	struct maple_node *node, *start;
-	struct maple_tree mt;
+	struct maple_enode *enode;
 	unsigned char offset;
 	enum maple_type type;
-	MA_STATE(mas, &mt, 0, 0);
 
 	node = container_of(head, struct maple_node, rcu);
 
 	if (ma_is_leaf(node->type))
 		goto free_leaf;
 
-	mt_init_flags(&mt, node->ma_flags);
-	mas_lock(&mas);
 	start = node;
-	mas.node = mt_mk_node(node, node->type);
-	slots = mas_dead_walk(&mas, 0);
-	node = mas_mn(&mas);
+	enode = mt_mk_node(node, node->type);
+	slots = mte_dead_walk(&enode, 0);
+	node = mte_to_node(enode);
 	do {
 		mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
-		mas.node = node->piv_parent;
-		if (mas_mn(&mas) == node)
-			goto start_slots_free;
-
-		type = mte_node_type(mas.node);
-		slots = ma_slots(mte_to_node(mas.node), type);
-		if ((offset < mt_slots[type]) && (slots[offset]))
-			slots = mas_dead_walk(&mas, offset);
-
-		node = mas_mn(&mas);
+		enode = node->piv_parent;
+		if (mte_to_node(enode) == node)
+			goto free_leaf;
+
+		type = mte_node_type(enode);
+		slots = ma_slots(mte_to_node(enode), type);
+		if ((offset < mt_slots[type]) &&
+		    rcu_dereference_protected(slots[offset],
+					      lock_is_held(&rcu_callback_map)))
+			slots = mte_dead_walk(&enode, offset);
+		node = mte_to_node(enode);
 	} while ((node != start) || (node->slot_len < offset));
 
 	slots = ma_slots(node, node->type);
 	mt_free_bulk(node->slot_len, slots);
 
-start_slots_free:
-	mas_unlock(&mas);
 free_leaf:
 	mt_free_rcu(&node->rcu);
 }
 
-static inline void __rcu **mas_destroy_descend(struct ma_state *mas,
-		struct maple_enode *prev, unsigned char offset)
+static inline void __rcu **mte_destroy_descend(struct maple_enode **enode,
+	struct maple_tree *mt, struct maple_enode *prev, unsigned char offset)
 {
 	struct maple_node *node;
-	struct maple_enode *next = mas->node;
+	struct maple_enode *next = *enode;
 	void __rcu **slots = NULL;
+	enum maple_type type;
+	unsigned char next_offset = 0;
 
 	do {
-		mas->node = next;
-		node = mas_mn(mas);
-		slots = ma_slots(node, mte_node_type(mas->node));
-		next = mas_slot_locked(mas, slots, 0);
-		if ((mte_dead_node(next))) {
-			mte_to_node(next)->type = mte_node_type(next);
-			next = mas_slot_locked(mas, slots, 1);
-		}
+		*enode = next;
+		node = mte_to_node(*enode);
+		type = mte_node_type(*enode);
+		slots = ma_slots(node, type);
+		next = mt_slot_locked(mt, slots, next_offset);
+		if ((mte_dead_node(next)))
+			next = mt_slot_locked(mt, slots, ++next_offset);
 
-		mte_set_node_dead(mas->node);
-		node->type = mte_node_type(mas->node);
-		mas_clear_meta(mas, node, node->type);
+		mte_set_node_dead(*enode);
+		node->type = type;
 		node->piv_parent = prev;
 		node->parent_slot = offset;
-		offset = 0;
-		prev = mas->node;
+		offset = next_offset;
+		next_offset = 0;
+		prev = *enode;
 	} while (!mte_is_leaf(next));
 
 	return slots;
 }
 
-static void mt_destroy_walk(struct maple_enode *enode, unsigned char ma_flags,
+static void mt_destroy_walk(struct maple_enode *enode, struct maple_tree *mt,
 			    bool free)
 {
 	void __rcu **slots;
 	struct maple_node *node = mte_to_node(enode);
 	struct maple_enode *start;
-	struct maple_tree mt;
-
-	MA_STATE(mas, &mt, 0, 0);
-	mas.node = enode;
 
 	if (mte_is_leaf(enode)) {
 		node->type = mte_node_type(enode);
 		goto free_leaf;
 	}
 
-	ma_flags &= ~MT_FLAGS_LOCK_MASK;
-	mt_init_flags(&mt, ma_flags);
-	mas_lock(&mas);
-
-	mte_to_node(enode)->ma_flags = ma_flags;
 	start = enode;
-	slots = mas_destroy_descend(&mas, start, 0);
-	node = mas_mn(&mas);
+	slots = mte_destroy_descend(&enode, mt, start, 0);
+	node = mte_to_node(enode); // Updated in the above call.
 	do {
 		enum maple_type type;
 		unsigned char offset;
 		struct maple_enode *parent, *tmp;
 
-		node->type = mte_node_type(mas.node);
-		node->slot_len = mas_dead_leaves(&mas, slots, node->type);
+		node->slot_len = mte_dead_leaves(enode, mt, slots);
 		if (free)
 			mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
-		mas.node = node->piv_parent;
-		if (mas_mn(&mas) == node)
-			goto start_slots_free;
+		enode = node->piv_parent;
+		if (mte_to_node(enode) == node)
+			goto free_leaf;
 
-		type = mte_node_type(mas.node);
-		slots = ma_slots(mte_to_node(mas.node), type);
+		type = mte_node_type(enode);
+		slots = ma_slots(mte_to_node(enode), type);
 		if (offset >= mt_slots[type])
 			goto next;
 
-		tmp = mas_slot_locked(&mas, slots, offset);
+		tmp = mt_slot_locked(mt, slots, offset);
 		if (mte_node_type(tmp) && mte_to_node(tmp)) {
-			parent = mas.node;
-			mas.node = tmp;
-			slots = mas_destroy_descend(&mas, parent, offset);
+			parent = enode;
+			enode = tmp;
+			slots = mte_destroy_descend(&enode, mt, parent, offset);
 		}
 next:
-		node = mas_mn(&mas);
-	} while (start != mas.node);
+		node = mte_to_node(enode);
+	} while (start != enode);
 
-	node = mas_mn(&mas);
-	node->type = mte_node_type(mas.node);
-	node->slot_len = mas_dead_leaves(&mas, slots, node->type);
+	node = mte_to_node(enode);
+	node->slot_len = mte_dead_leaves(enode, mt, slots);
 	if (free)
 		mt_free_bulk(node->slot_len, slots);
 
-start_slots_free:
-	mas_unlock(&mas);
-
 free_leaf:
 	if (free)
 		mt_free_rcu(&node->rcu);
 	else
-		mas_clear_meta(&mas, node, node->type);
+		mt_clear_meta(mt, node, node->type);
 }
 
 /*
@@ -5702,10 +5706,10 @@ static inline void mte_destroy_walk(struct maple_enode *enode,
 	struct maple_node *node = mte_to_node(enode);
 
 	if (mt_in_rcu(mt)) {
-		mt_destroy_walk(enode, mt->ma_flags, false);
+		mt_destroy_walk(enode, mt, false);
 		call_rcu(&node->rcu, mt_free_walk);
 	} else {
-		mt_destroy_walk(enode, mt->ma_flags, true);
+		mt_destroy_walk(enode, mt, true);
 	}
 }
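
For context, the lockdep pattern the patch relies on can be sketched in
isolation: RCU core acquires the lockdep map rcu_callback_map around the
invocation of call_rcu() callbacks, so an __rcu pointer may be dereferenced
inside such a callback with rcu_dereference_protected() checked against
lock_is_held(&rcu_callback_map), with no extra lock needed. A minimal,
hypothetical example follows; struct parent, struct child and
parent_free_cb are illustrative names, not part of the patch.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct child {
	int data;
};

struct parent {
	struct child __rcu *child;
	struct rcu_head rcu;
};

/* Runs in RCU callback context after a grace period has elapsed. */
static void parent_free_cb(struct rcu_head *head)
{
	struct parent *p = container_of(head, struct parent, rcu);
	/*
	 * rcu_callback_map is held here, so lockdep accepts this
	 * dereference; this is the same check that mte_dead_walk()
	 * and mt_free_walk() use above.
	 */
	struct child *c = rcu_dereference_protected(p->child,
				lock_is_held(&rcu_callback_map));

	kfree(c);
	kfree(p);
}

/* Freeing side: call_rcu(&p->rcu, parent_free_cb); */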