From patchwork Tue Aug 13 04:29:05 2024
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
 willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
 linux-mm@kvack.org, Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH v3 01/13] uprobes: revamp uprobe refcounting and lifetime management
Date: Mon, 12 Aug 2024 21:29:05 -0700
Message-ID: <20240813042917.506057-2-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

Revamp how struct uprobe is refcounted, and thus how its lifetime is
managed.

Right now, there are a few possible "owners" of the uprobe refcount:

- uprobes_tree RB tree assumes one refcount when uprobe is registered
  and added to the lookup tree;
- while uprobe is triggered and the kernel is handling it in the
  breakpoint handler code, a temporary refcount bump is done to keep
  uprobe from being freed;
- if we have a uretprobe requested on a given struct uprobe instance, we
  take another refcount to keep uprobe alive until user space code
  returns from the function and triggers the return handler.

The uprobes_tree's extra refcount of 1 is confusing and problematic. No
matter how many actual consumers are attached, they all share the same
refcount, and we have extra logic to drop the "last" (which might not
really be the last) refcount once the uprobe's consumer list becomes
empty. This is unconventional and has to be kept in mind as a special
case all the time.
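For reference, this is a minimal sketch (not the kernel code itself) of
the conventional try-get idiom this patch moves to, mirroring the
get_uprobe()/try_get_uprobe() pair added in the diff below; struct obj
stands in for struct uprobe:

    #include <linux/refcount.h>

    struct obj {
            refcount_t ref;
    };

    /* unconditional get: caller must already hold a reference */
    static struct obj *obj_get(struct obj *o)
    {
            refcount_inc(&o->ref);
            return o;
    }

    /*
     * conditional get: safe as long as the memory itself can't go away
     * (e.g., under a lock or RCU), even if the refcount may have already
     * dropped to zero; returns NULL if the object is being destroyed
     */
    static struct obj *obj_try_get(struct obj *o)
    {
            if (refcount_inc_not_zero(&o->ref))
                    return o;
            return NULL;
    }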
Further, because of this design we have situations where find_uprobe()
will find a uprobe, bump its refcount, and return it to the caller, but
that uprobe will still need an uprobe_is_active() check, after which the
caller is required to drop the refcount and try again. This is just too
many details leaking into the higher-level logic.

This patch changes the refcounting scheme in such a way as to not have
uprobes_tree keeping an extra refcount for struct uprobe. Instead, each
uprobe_consumer assumes its own refcount, which will be dropped when the
consumer is unregistered. Other than that, all the active users of
uprobe (entry and return uprobe handling code) keep exactly the same
refcounting approach.

With the above setup, once uprobe's refcount drops to zero, we need to
make sure that uprobe's "destructor" removes uprobe from uprobes_tree,
of course. This, though, races with uprobe entry handling code in
handle_swbp(), which, through find_active_uprobe()->find_uprobe()
lookup, can race with uprobe being destroyed after the refcount drops to
zero (e.g., due to uprobe_consumer unregistering). So we add
try_get_uprobe(), which will attempt to bump the refcount, unless it is
already zero. The caller needs to guarantee that the uprobe instance
won't be freed in parallel, which is the case while we keep
uprobes_treelock (for read or write, doesn't matter).

Note also, we no longer have the race between registration and
unregistration, so we remove the retry logic completely. If
find_uprobe() returns a valid uprobe, it's guaranteed to remain in
uprobes_tree with a properly incremented refcount. The race is handled
inside __insert_uprobe() and put_uprobe() working together:
__insert_uprobe() will remove the uprobe from the RB-tree if it can't
bump the refcount, and will retry inserting the new uprobe instance.
put_uprobe() won't attempt to remove the uprobe from the RB-tree if it's
already not there. All that is protected by uprobes_treelock, which
keeps things simple.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/events/uprobes.c | 179 +++++++++++++++++++++++-----------------
 1 file changed, 101 insertions(+), 78 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 33349cc8de0c..147561c19d57 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -109,6 +109,11 @@ struct xol_area {
 	unsigned long		vaddr;		/* Page(s) of instruction slots */
 };
 
+static void uprobe_warn(struct task_struct *t, const char *msg)
+{
+	pr_warn("uprobe: %s:%d failed to %s\n", current->comm, current->pid, msg);
+}
+
 /*
  * valid_vma: Verify if the specified vma is an executable vma
  * Relax restrictions while unregistering: vm_flags might have
@@ -587,25 +592,53 @@ set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long v
 			*(uprobe_opcode_t *)&auprobe->insn);
 }
 
+/* uprobe should have guaranteed positive refcount */
 static struct uprobe *get_uprobe(struct uprobe *uprobe)
 {
 	refcount_inc(&uprobe->ref);
 	return uprobe;
 }
 
+/*
+ * uprobe should have guaranteed lifetime, which can be either of:
+ * - caller already has refcount taken (and wants an extra one);
+ * - uprobe is RCU protected and won't be freed until after grace period;
+ * - we are holding uprobes_treelock (for read or write, doesn't matter).
+ */
+static struct uprobe *try_get_uprobe(struct uprobe *uprobe)
+{
+	if (refcount_inc_not_zero(&uprobe->ref))
+		return uprobe;
+	return NULL;
+}
+
+static inline bool uprobe_is_active(struct uprobe *uprobe)
+{
+	return !RB_EMPTY_NODE(&uprobe->rb_node);
+}
+
 static void put_uprobe(struct uprobe *uprobe)
 {
-	if (refcount_dec_and_test(&uprobe->ref)) {
-		/*
-		 * If application munmap(exec_vma) before uprobe_unregister()
-		 * gets called, we don't get a chance to remove uprobe from
-		 * delayed_uprobe_list from remove_breakpoint(). Do it here.
-		 */
-		mutex_lock(&delayed_uprobe_lock);
-		delayed_uprobe_remove(uprobe, NULL);
-		mutex_unlock(&delayed_uprobe_lock);
-		kfree(uprobe);
-	}
+	if (!refcount_dec_and_test(&uprobe->ref))
+		return;
+
+	write_lock(&uprobes_treelock);
+
+	if (uprobe_is_active(uprobe))
+		rb_erase(&uprobe->rb_node, &uprobes_tree);
+
+	write_unlock(&uprobes_treelock);
+
+	/*
+	 * If application munmap(exec_vma) before uprobe_unregister()
+	 * gets called, we don't get a chance to remove uprobe from
+	 * delayed_uprobe_list from remove_breakpoint(). Do it here.
+	 */
+	mutex_lock(&delayed_uprobe_lock);
+	delayed_uprobe_remove(uprobe, NULL);
+	mutex_unlock(&delayed_uprobe_lock);
+
+	kfree(uprobe);
 }
 
 static __always_inline
@@ -656,7 +689,7 @@ static struct uprobe *__find_uprobe(struct inode *inode, loff_t offset)
 	struct rb_node *node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
 
 	if (node)
-		return get_uprobe(__node_2_uprobe(node));
+		return try_get_uprobe(__node_2_uprobe(node));
 
 	return NULL;
 }
@@ -676,26 +709,44 @@ static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
 	return uprobe;
 }
 
+/*
+ * Attempt to insert a new uprobe into uprobes_tree.
+ *
+ * If uprobe already exists (for given inode+offset), we just increment
+ * refcount of previously existing uprobe.
+ *
+ * If not, a provided new instance of uprobe is inserted into the tree (with
+ * assumed initial refcount == 1).
+ *
+ * In any case, we return a uprobe instance that ends up being in uprobes_tree.
+ * Caller has to clean up new uprobe instance, if it ended up not being
+ * inserted into the tree.
+ *
+ * We assume that uprobes_treelock is held for writing.
+ */
 static struct uprobe *__insert_uprobe(struct uprobe *uprobe)
 {
 	struct rb_node *node;
-
+again:
 	node = rb_find_add(&uprobe->rb_node, &uprobes_tree, __uprobe_cmp);
-	if (node)
-		return get_uprobe(__node_2_uprobe(node));
+	if (node) {
+		struct uprobe *u = __node_2_uprobe(node);
 
-	/* get access + creation ref */
-	refcount_set(&uprobe->ref, 2);
-	return NULL;
+		if (!try_get_uprobe(u)) {
+			rb_erase(node, &uprobes_tree);
+			RB_CLEAR_NODE(&u->rb_node);
+			goto again;
+		}
+
+		return u;
+	}
+
+	return uprobe;
 }
 
 /*
- * Acquire uprobes_treelock.
- * Matching uprobe already exists in rbtree;
- *	increment (access refcount) and return the matching uprobe.
- *
- * No matching uprobe; insert the uprobe in rb_tree;
- *	get a double refcount (access + creation) and return NULL.
+ * Acquire uprobes_treelock and insert uprobe into uprobes_tree
+ * (or reuse existing one, see __insert_uprobe() comments above).
  */
 static struct uprobe *insert_uprobe(struct uprobe *uprobe)
 {
@@ -732,11 +783,13 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
 	uprobe->ref_ctr_offset = ref_ctr_offset;
 	init_rwsem(&uprobe->register_rwsem);
 	init_rwsem(&uprobe->consumer_rwsem);
+	RB_CLEAR_NODE(&uprobe->rb_node);
+	refcount_set(&uprobe->ref, 1);
 
 	/* add to uprobes_tree, sorted on inode:offset */
 	cur_uprobe = insert_uprobe(uprobe);
 	/* a uprobe exists for this inode:offset combination */
-	if (cur_uprobe) {
+	if (cur_uprobe != uprobe) {
 		if (cur_uprobe->ref_ctr_offset != uprobe->ref_ctr_offset) {
 			ref_ctr_mismatch_warn(cur_uprobe, uprobe);
 			put_uprobe(cur_uprobe);
@@ -921,26 +974,6 @@ remove_breakpoint(struct uprobe *uprobe, struct mm_struct *mm, unsigned long vad
 	return set_orig_insn(&uprobe->arch, mm, vaddr);
 }
 
-static inline bool uprobe_is_active(struct uprobe *uprobe)
-{
-	return !RB_EMPTY_NODE(&uprobe->rb_node);
-}
-/*
- * There could be threads that have already hit the breakpoint. They
- * will recheck the current insn and restart if find_uprobe() fails.
- * See find_active_uprobe().
- */
-static void delete_uprobe(struct uprobe *uprobe)
-{
-	if (WARN_ON(!uprobe_is_active(uprobe)))
-		return;
-
-	write_lock(&uprobes_treelock);
-	rb_erase(&uprobe->rb_node, &uprobes_tree);
-	write_unlock(&uprobes_treelock);
-	RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */
-}
-
 struct map_info {
 	struct map_info *next;
 	struct mm_struct *mm;
@@ -1094,17 +1127,13 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
 	int err;
 
 	down_write(&uprobe->register_rwsem);
-	if (WARN_ON(!consumer_del(uprobe, uc)))
+	if (WARN_ON(!consumer_del(uprobe, uc))) {
 		err = -ENOENT;
-	else
+	} else {
 		err = register_for_each_vma(uprobe, NULL);
-
-	/* TODO : cant unregister? schedule a worker thread */
-	if (!err) {
-		if (!uprobe->consumers)
-			delete_uprobe(uprobe);
-		else
-			err = -EBUSY;
+		/* TODO : cant unregister? schedule a worker thread */
+		if (unlikely(err))
+			uprobe_warn(current, "unregister, leaking uprobe");
 	}
 	up_write(&uprobe->register_rwsem);
 
@@ -1159,27 +1188,16 @@ struct uprobe *uprobe_register(struct inode *inode,
 	if (!IS_ALIGNED(ref_ctr_offset, sizeof(short)))
 		return ERR_PTR(-EINVAL);
 
- retry:
 	uprobe = alloc_uprobe(inode, offset, ref_ctr_offset);
 	if (IS_ERR(uprobe))
 		return uprobe;
 
-	/*
-	 * We can race with uprobe_unregister()->delete_uprobe().
-	 * Check uprobe_is_active() and retry if it is false.
-	 */
 	down_write(&uprobe->register_rwsem);
-	ret = -EAGAIN;
-	if (likely(uprobe_is_active(uprobe))) {
-		consumer_add(uprobe, uc);
-		ret = register_for_each_vma(uprobe, uc);
-	}
+	consumer_add(uprobe, uc);
+	ret = register_for_each_vma(uprobe, uc);
 	up_write(&uprobe->register_rwsem);
-	put_uprobe(uprobe);
 
 	if (ret) {
-		if (unlikely(ret == -EAGAIN))
-			goto retry;
 		uprobe_unregister(uprobe, uc);
 		return ERR_PTR(ret);
 	}
@@ -1286,15 +1304,17 @@ static void build_probe_list(struct inode *inode,
 			u = rb_entry(t, struct uprobe, rb_node);
 			if (u->inode != inode || u->offset < min)
 				break;
-			list_add(&u->pending_list, head);
-			get_uprobe(u);
+			/* if uprobe went away, it's safe to ignore it */
+			if (try_get_uprobe(u))
+				list_add(&u->pending_list, head);
 		}
 		for (t = n; (t = rb_next(t)); ) {
 			u = rb_entry(t, struct uprobe, rb_node);
 			if (u->inode != inode || u->offset > max)
 				break;
-			list_add(&u->pending_list, head);
-			get_uprobe(u);
+			/* if uprobe went away, it's safe to ignore it */
+			if (try_get_uprobe(u))
+				list_add(&u->pending_list, head);
 		}
 	}
 	read_unlock(&uprobes_treelock);
@@ -1752,6 +1772,12 @@ static int dup_utask(struct task_struct *t, struct uprobe_task *o_utask)
 			return -ENOMEM;
 
 		*n = *o;
+		/*
+		 * uprobe's refcnt has to be positive at this point, kept by
+		 * utask->return_instances items; return_instances can't be
+		 * removed right now, as task is blocked due to duping; so
+		 * get_uprobe() is safe to use here.
+		 */
 		get_uprobe(n->uprobe);
 		n->next = NULL;
 
@@ -1763,12 +1789,6 @@ static int dup_utask(struct task_struct *t, struct uprobe_task *o_utask)
 	return 0;
 }
 
-static void uprobe_warn(struct task_struct *t, const char *msg)
-{
-	pr_warn("uprobe: %s:%d failed to %s\n",
-			current->comm, current->pid, msg);
-}
-
 static void dup_xol_work(struct callback_head *work)
 {
 	if (current->flags & PF_EXITING)
@@ -1894,7 +1914,10 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 		}
 		orig_ret_vaddr = utask->return_instances->orig_ret_vaddr;
 	}
-
+	/*
+	 * uprobe's refcnt is positive, held by caller, so it's safe to
+	 * unconditionally bump it one more time here
+	 */
 	ri->uprobe = get_uprobe(uprobe);
 	ri->func = instruction_pointer(regs);
 	ri->stack = user_stack_pointer(regs);

From patchwork Tue Aug 13 04:29:06 2024
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
 willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
 linux-mm@kvack.org, Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH v3 02/13] uprobes: protect uprobe lifetime with SRCU
Date: Mon, 12 Aug 2024 21:29:06 -0700
Message-ID: <20240813042917.506057-3-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>
To avoid unnecessarily taking a (brief) refcount on uprobe during
breakpoint handling in handle_swbp for entry uprobes, make find_uprobe()
not take a refcount, and instead protect the lifetime of a uprobe
instance with RCU. This improves scalability, as the refcount gets quite
expensive due to cache line bouncing between multiple CPUs.

Specifically, we utilize our own uprobe-specific SRCU instance for this
RCU protection. put_uprobe() will delay the actual kfree() using
call_srcu().

For now, uretprobe and single-stepping handling will still acquire a
refcount as necessary. We'll address these issues in follow-up patches
by making them use SRCU with timeout.
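The read-side/destructor-side shape this enables is roughly the
following (a minimal sketch under assumed names, not the kernel code
itself; lookup_obj() and use_obj() are hypothetical placeholders):

    #include <linux/srcu.h>
    #include <linux/slab.h>

    DEFINE_STATIC_SRCU(my_srcu);

    struct obj {
            struct rcu_head rcu;
            /* ... payload ... */
    };

    /* reader: no refcount taken; the object can't be kfree()'d until
     * all current SRCU readers are done */
    void reader(void)
    {
            int idx = srcu_read_lock(&my_srcu);
            struct obj *o = lookup_obj();   /* hypothetical lookup */

            if (o)
                    use_obj(o);             /* hypothetical use */
            srcu_read_unlock(&my_srcu, idx);
    }

    /* destructor side: defer the actual kfree() past the grace period */
    static void obj_free_rcu(struct rcu_head *rcu)
    {
            kfree(container_of(rcu, struct obj, rcu));
    }

    void obj_release(struct obj *o)
    {
            call_srcu(&my_srcu, &o->rcu, obj_free_rcu);
    }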
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/events/uprobes.c | 94 +++++++++++++++++++++++------------------
 1 file changed, 54 insertions(+), 40 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 147561c19d57..3e3595753e2c 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -41,6 +41,8 @@ static struct rb_root uprobes_tree = RB_ROOT;
 
 static DEFINE_RWLOCK(uprobes_treelock);	/* serialize rbtree access */
 
+DEFINE_STATIC_SRCU(uprobes_srcu);
+
 #define UPROBES_HASH_SZ	13
 /* serialize uprobe->pending_list */
 static struct mutex uprobes_mmap_mutex[UPROBES_HASH_SZ];
@@ -59,6 +61,7 @@ struct uprobe {
 	struct list_head	pending_list;
 	struct uprobe_consumer	*consumers;
 	struct inode		*inode;		/* Also hold a ref to inode */
+	struct rcu_head		rcu;
 	loff_t			offset;
 	loff_t			ref_ctr_offset;
 	unsigned long		flags;
@@ -617,6 +620,13 @@ static inline bool uprobe_is_active(struct uprobe *uprobe)
 	return !RB_EMPTY_NODE(&uprobe->rb_node);
 }
 
+static void uprobe_free_rcu(struct rcu_head *rcu)
+{
+	struct uprobe *uprobe = container_of(rcu, struct uprobe, rcu);
+
+	kfree(uprobe);
+}
+
 static void put_uprobe(struct uprobe *uprobe)
 {
 	if (!refcount_dec_and_test(&uprobe->ref))
@@ -638,7 +648,7 @@ static void put_uprobe(struct uprobe *uprobe)
 	delayed_uprobe_remove(uprobe, NULL);
 	mutex_unlock(&delayed_uprobe_lock);
 
-	kfree(uprobe);
+	call_srcu(&uprobes_srcu, &uprobe->rcu, uprobe_free_rcu);
 }
 
 static __always_inline
@@ -680,33 +690,25 @@ static inline int __uprobe_cmp(struct rb_node *a, const struct rb_node *b)
 	return uprobe_cmp(u->inode, u->offset, __node_2_uprobe(b));
 }
 
-static struct uprobe *__find_uprobe(struct inode *inode, loff_t offset)
+/*
+ * Assumes being inside RCU protected region.
+ * No refcount is taken on returned uprobe.
+ */
+static struct uprobe *find_uprobe_rcu(struct inode *inode, loff_t offset)
 {
 	struct __uprobe_key key = {
 		.inode = inode,
 		.offset = offset,
 	};
-	struct rb_node *node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
-
-	if (node)
-		return try_get_uprobe(__node_2_uprobe(node));
-
-	return NULL;
-}
+	struct rb_node *node;
 
-/*
- * Find a uprobe corresponding to a given inode:offset
- * Acquires uprobes_treelock
- */
-static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
-{
-	struct uprobe *uprobe;
+	lockdep_assert(srcu_read_lock_held(&uprobes_srcu));
 
 	read_lock(&uprobes_treelock);
-	uprobe = __find_uprobe(inode, offset);
+	node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
 	read_unlock(&uprobes_treelock);
 
-	return uprobe;
+	return node ? __node_2_uprobe(node) : NULL;
 }
 
 /*
@@ -1080,10 +1082,10 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 			goto free;
 		/*
 		 * We take mmap_lock for writing to avoid the race with
-		 * find_active_uprobe() which takes mmap_lock for reading.
+		 * find_active_uprobe_rcu() which takes mmap_lock for reading.
 		 * Thus this install_breakpoint() can not make
-		 * is_trap_at_addr() true right after find_uprobe()
-		 * returns NULL in find_active_uprobe().
+		 * is_trap_at_addr() true right after find_uprobe_rcu()
+		 * returns NULL in find_active_uprobe_rcu().
		 */
 		mmap_write_lock(mm);
 		vma = find_vma(mm, info->vaddr);
@@ -1885,9 +1887,13 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 		return;
 	}
 
+	/* we need to bump refcount to store uprobe in utask */
+	if (!try_get_uprobe(uprobe))
+		return;
+
 	ri = kmalloc(sizeof(struct return_instance), GFP_KERNEL);
 	if (!ri)
-		return;
+		goto fail;
 
 	trampoline_vaddr = uprobe_get_trampoline_vaddr();
 	orig_ret_vaddr = arch_uretprobe_hijack_return_addr(trampoline_vaddr, regs);
@@ -1914,11 +1920,7 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 		}
 		orig_ret_vaddr = utask->return_instances->orig_ret_vaddr;
 	}
-	/*
-	 * uprobe's refcnt is positive, held by caller, so it's safe to
-	 * unconditionally bump it one more time here
-	 */
-	ri->uprobe = get_uprobe(uprobe);
+	ri->uprobe = uprobe;
 	ri->func = instruction_pointer(regs);
 	ri->stack = user_stack_pointer(regs);
 	ri->orig_ret_vaddr = orig_ret_vaddr;
@@ -1929,8 +1931,9 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 	utask->return_instances = ri;
 
 	return;
- fail:
+fail:
 	kfree(ri);
+	put_uprobe(uprobe);
 }
 
 /* Prepare to single-step probed instruction out of line. */
@@ -1945,9 +1948,14 @@ pre_ssout(struct uprobe *uprobe, struct pt_regs *regs, unsigned long bp_vaddr)
 	if (!utask)
 		return -ENOMEM;
 
+	if (!try_get_uprobe(uprobe))
+		return -EINVAL;
+
 	xol_vaddr = xol_get_insn_slot(uprobe);
-	if (!xol_vaddr)
-		return -ENOMEM;
+	if (!xol_vaddr) {
+		err = -ENOMEM;
+		goto err_out;
+	}
 
 	utask->xol_vaddr = xol_vaddr;
 	utask->vaddr = bp_vaddr;
@@ -1955,12 +1963,15 @@
 	err = arch_uprobe_pre_xol(&uprobe->arch, regs);
 	if (unlikely(err)) {
 		xol_free_insn_slot(current);
-		return err;
+		goto err_out;
 	}
 
 	utask->active_uprobe = uprobe;
 	utask->state = UTASK_SSTEP;
 	return 0;
+err_out:
+	put_uprobe(uprobe);
+	return err;
 }
 
 /*
@@ -2043,7 +2054,8 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
 	return is_trap_insn(&opcode);
 }
 
-static struct uprobe *find_active_uprobe(unsigned long bp_vaddr, int *is_swbp)
+/* assumes being inside RCU protected region */
+static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swbp)
 {
 	struct mm_struct *mm = current->mm;
 	struct uprobe *uprobe = NULL;
@@ -2056,7 +2068,7 @@ static struct uprobe *find_active_uprobe(unsigned long bp_vaddr, int *is_swbp)
 		struct inode *inode = file_inode(vma->vm_file);
 		loff_t offset = vaddr_to_offset(vma, bp_vaddr);
 
-		uprobe = find_uprobe(inode, offset);
+		uprobe = find_uprobe_rcu(inode, offset);
 	}
 
 	if (!uprobe)
@@ -2202,13 +2214,15 @@ static void handle_swbp(struct pt_regs *regs)
 {
 	struct uprobe *uprobe;
 	unsigned long bp_vaddr;
-	int is_swbp;
+	int is_swbp, srcu_idx;
 
 	bp_vaddr = uprobe_get_swbp_addr(regs);
 	if (bp_vaddr == uprobe_get_trampoline_vaddr())
 		return uprobe_handle_trampoline(regs);
 
-	uprobe = find_active_uprobe(bp_vaddr, &is_swbp);
+	srcu_idx = srcu_read_lock(&uprobes_srcu);
+
+	uprobe = find_active_uprobe_rcu(bp_vaddr, &is_swbp);
 	if (!uprobe) {
 		if (is_swbp > 0) {
 			/* No matching uprobe; signal SIGTRAP. */
@@ -2224,7 +2238,7 @@ static void handle_swbp(struct pt_regs *regs)
 			 */
 			instruction_pointer_set(regs, bp_vaddr);
 		}
-		return;
+		goto out;
 	}
 
 	/* change it in advance for ->handler() and restart */
@@ -2259,12 +2273,12 @@ static void handle_swbp(struct pt_regs *regs)
 	if (arch_uprobe_skip_sstep(&uprobe->arch, regs))
 		goto out;
 
-	if (!pre_ssout(uprobe, regs, bp_vaddr))
-		return;
+	if (pre_ssout(uprobe, regs, bp_vaddr))
+		goto out;
 
-	/* arch_uprobe_skip_sstep() succeeded, or restart if can't singlestep */
 out:
-	put_uprobe(uprobe);
+	/* arch_uprobe_skip_sstep() succeeded, or restart if can't singlestep */
+	srcu_read_unlock(&uprobes_srcu, srcu_idx);
 }
 
 /*

From patchwork Tue Aug 13 04:29:07 2024
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
 willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
 linux-mm@kvack.org, Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH v3 03/13] uprobes: get rid of enum uprobe_filter_ctx in uprobe filter callbacks
Date: Mon, 12 Aug 2024 21:29:07 -0700
Message-ID: <20240813042917.506057-4-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

It serves no purpose beyond adding an unnecessary argument to the
filter callback. Just get rid of it, no one is actually using it.
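After this change a consumer's filter has the simpler two-argument
shape. A hypothetical consumer would look like the following sketch
(my_handler/my_filter are illustrative names, not from the patch):

    #include <linux/uprobes.h>
    #include <linux/sched.h>

    /* example policy: only instrument the current process */
    static bool my_filter(struct uprobe_consumer *self, struct mm_struct *mm)
    {
            return mm == current->mm;
    }

    static int my_handler(struct uprobe_consumer *self, struct pt_regs *regs)
    {
            return 0;       /* keep the probe installed */
    }

    static struct uprobe_consumer my_consumer = {
            .handler = my_handler,
            .filter  = my_filter,
    };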
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/uprobes.h     | 10 +---------
 kernel/events/uprobes.c     | 18 +++++++-----------
 kernel/trace/bpf_trace.c    |  3 +--
 kernel/trace/trace_uprobe.c |  9 +++------
 4 files changed, 12 insertions(+), 28 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 6332c111036e..9cf0dce62e4c 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -28,20 +28,12 @@ struct page;
 
 #define MAX_URETPROBE_DEPTH		64
 
-enum uprobe_filter_ctx {
-	UPROBE_FILTER_REGISTER,
-	UPROBE_FILTER_UNREGISTER,
-	UPROBE_FILTER_MMAP,
-};
-
 struct uprobe_consumer {
 	int (*handler)(struct uprobe_consumer *self, struct pt_regs *regs);
 	int (*ret_handler)(struct uprobe_consumer *self,
 				unsigned long func,
 				struct pt_regs *regs);
-	bool (*filter)(struct uprobe_consumer *self,
-				enum uprobe_filter_ctx ctx,
-				struct mm_struct *mm);
+	bool (*filter)(struct uprobe_consumer *self, struct mm_struct *mm);
 
 	struct uprobe_consumer *next;
 };
 
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 3e3595753e2c..8bdcdc6901b2 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -918,21 +918,19 @@ static int prepare_uprobe(struct uprobe *uprobe, struct file *file,
 	return ret;
 }
 
-static inline bool consumer_filter(struct uprobe_consumer *uc,
-				   enum uprobe_filter_ctx ctx, struct mm_struct *mm)
+static inline bool consumer_filter(struct uprobe_consumer *uc, struct mm_struct *mm)
 {
-	return !uc->filter || uc->filter(uc, ctx, mm);
+	return !uc->filter || uc->filter(uc, mm);
 }
 
-static bool filter_chain(struct uprobe *uprobe,
-			 enum uprobe_filter_ctx ctx, struct mm_struct *mm)
+static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
 {
 	struct uprobe_consumer *uc;
 	bool ret = false;
 
 	down_read(&uprobe->consumer_rwsem);
 	for (uc = uprobe->consumers; uc; uc = uc->next) {
-		ret = consumer_filter(uc, ctx, mm);
+		ret = consumer_filter(uc, mm);
 		if (ret)
 			break;
 	}
@@ -1099,12 +1097,10 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 		if (is_register) {
 			/* consult only the "caller", new consumer. */
-			if (consumer_filter(new,
-					UPROBE_FILTER_REGISTER, mm))
+			if (consumer_filter(new, mm))
 				err = install_breakpoint(uprobe, mm, vma, info->vaddr);
 		} else if (test_bit(MMF_HAS_UPROBES, &mm->flags)) {
-			if (!filter_chain(uprobe,
-					UPROBE_FILTER_UNREGISTER, mm))
+			if (!filter_chain(uprobe, mm))
 				err |= remove_breakpoint(uprobe, mm, info->vaddr);
 		}
 
@@ -1387,7 +1383,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
 	 */
 	list_for_each_entry_safe(uprobe, u, &tmp_list, pending_list) {
 		if (!fatal_signal_pending(current) &&
-		    filter_chain(uprobe, UPROBE_FILTER_MMAP, vma->vm_mm)) {
+		    filter_chain(uprobe, vma->vm_mm)) {
 			unsigned long vaddr = offset_to_vaddr(vma, uprobe->offset);
 			install_breakpoint(uprobe, vma->vm_mm, vma, vaddr);
 		}
 
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 4e391daafa64..73c570b5988b 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3320,8 +3320,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
 }
 
 static bool
-uprobe_multi_link_filter(struct uprobe_consumer *con, enum uprobe_filter_ctx ctx,
-			 struct mm_struct *mm)
+uprobe_multi_link_filter(struct uprobe_consumer *con, struct mm_struct *mm)
 {
 	struct bpf_uprobe *uprobe;
 
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 52e76a73fa7c..7eb79e0a5352 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1078,9 +1078,7 @@ print_uprobe_event(struct trace_iterator *iter, int flags, struct trace_event *e
 	return trace_handle_return(s);
 }
 
-typedef bool (*filter_func_t)(struct uprobe_consumer *self,
-			      enum uprobe_filter_ctx ctx,
-			      struct mm_struct *mm);
+typedef bool (*filter_func_t)(struct uprobe_consumer *self, struct mm_struct *mm);
 
 static int trace_uprobe_enable(struct trace_uprobe *tu, filter_func_t filter)
 {
@@ -1339,8 +1337,7 @@ static int uprobe_perf_open(struct trace_event_call *call,
 	return err;
 }
 
-static bool uprobe_perf_filter(struct uprobe_consumer *uc,
-			       enum uprobe_filter_ctx ctx, struct mm_struct *mm)
+static bool uprobe_perf_filter(struct uprobe_consumer *uc, struct mm_struct *mm)
 {
 	struct trace_uprobe_filter *filter;
 	struct trace_uprobe *tu;
@@ -1426,7 +1423,7 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
 static int uprobe_perf_func(struct trace_uprobe *tu, struct pt_regs *regs,
 			    struct uprobe_cpu_buffer **ucbp)
 {
-	if (!uprobe_perf_filter(&tu->consumer, 0, current->mm))
+	if (!uprobe_perf_filter(&tu->consumer, current->mm))
 		return UPROBE_HANDLER_REMOVE;
 
 	if (!is_ret_probe(tu))

From patchwork Tue Aug 13 04:29:08 2024
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
 willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
 linux-mm@kvack.org, Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH v3 04/13] uprobes: traverse uprobe's consumer list locklessly under SRCU protection
Date: Mon, 12 Aug 2024 21:29:08 -0700
Message-ID: <20240813042917.506057-5-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>
uprobe->register_rwsem is one of a few big bottlenecks to scalability of
uprobes, so we need to get rid of it to improve uprobe performance and
multi-CPU scalability.

First, we turn uprobe's consumer list into a typical doubly-linked list
and utilize existing RCU-aware helpers for traversing such lists, as
well as adding and removing elements from it.

For entry uprobes we already have SRCU protection active since before
uprobe lookup. For uretprobes we keep the refcount, guaranteeing that
the uprobe won't go away from under us, but we add SRCU protection
around consumer list traversal.

Lastly, to keep handler_chain()'s UPROBE_HANDLER_REMOVE handling simple,
we remember whether any removal was requested during handler calls, but
then we double-check the decision under a proper register_rwsem using
consumers' filter callbacks. Handler removal is very rare, so this extra
lock won't hurt performance overall, and we also avoid the need for any
extra protection (e.g., seqcount locks).
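The resulting lockless reader side, condensed from the handler_chain()
changes in the diff below into a standalone sketch (names follow the
patch; writers pair this with list_add_rcu()/list_del_rcu() under
uprobe->register_rwsem):

    /* sketch: walk the consumer list under SRCU, no rwsem taken */
    static void run_handlers(struct uprobe *uprobe, struct pt_regs *regs)
    {
            struct uprobe_consumer *uc;
            int srcu_idx = srcu_read_lock(&uprobes_srcu);

            list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
                                     srcu_read_lock_held(&uprobes_srcu)) {
                    if (uc->handler)
                            uc->handler(uc, regs);
            }

            srcu_read_unlock(&uprobes_srcu, srcu_idx);
    }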
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/uprobes.h |   2 +-
 kernel/events/uprobes.c | 111 ++++++++++++++++++++++------------------
 2 files changed, 61 insertions(+), 52 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 9cf0dce62e4c..29c935b0d504 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -35,7 +35,7 @@ struct uprobe_consumer {
 				struct pt_regs *regs);
 	bool (*filter)(struct uprobe_consumer *self, struct mm_struct *mm);
 
-	struct uprobe_consumer *next;
+	struct list_head cons_node;
 };
 
 #ifdef CONFIG_UPROBES
 
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 8bdcdc6901b2..7de1aaf50394 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -59,7 +59,7 @@ struct uprobe {
 	struct rw_semaphore	register_rwsem;
 	struct rw_semaphore	consumer_rwsem;
 	struct list_head	pending_list;
-	struct uprobe_consumer	*consumers;
+	struct list_head	consumers;
 	struct inode		*inode;		/* Also hold a ref to inode */
 	struct rcu_head		rcu;
 	loff_t			offset;
@@ -783,6 +783,7 @@
 	uprobe->inode = inode;
 	uprobe->offset = offset;
 	uprobe->ref_ctr_offset = ref_ctr_offset;
+	INIT_LIST_HEAD(&uprobe->consumers);
 	init_rwsem(&uprobe->register_rwsem);
 	init_rwsem(&uprobe->consumer_rwsem);
 	RB_CLEAR_NODE(&uprobe->rb_node);
@@ -808,34 +809,10 @@
 static void consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
 {
 	down_write(&uprobe->consumer_rwsem);
-	uc->next = uprobe->consumers;
-	uprobe->consumers = uc;
+	list_add_rcu(&uc->cons_node, &uprobe->consumers);
 	up_write(&uprobe->consumer_rwsem);
 }
 
-/*
- * For uprobe @uprobe, delete the consumer @uc.
- * Return true if the @uc is deleted successfully
- * or return false.
- */
-static bool consumer_del(struct uprobe *uprobe, struct uprobe_consumer *uc)
-{
-	struct uprobe_consumer **con;
-	bool ret = false;
-
-	down_write(&uprobe->consumer_rwsem);
-	for (con = &uprobe->consumers; *con; con = &(*con)->next) {
-		if (*con == uc) {
-			*con = uc->next;
-			ret = true;
-			break;
-		}
-	}
-	up_write(&uprobe->consumer_rwsem);
-
-	return ret;
-}
-
 static int __copy_insn(struct address_space *mapping, struct file *filp,
 			void *insn, int nbytes, loff_t offset)
 {
@@ -929,7 +906,8 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
 	bool ret = false;
 
 	down_read(&uprobe->consumer_rwsem);
-	for (uc = uprobe->consumers; uc; uc = uc->next) {
+	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+				 srcu_read_lock_held(&uprobes_srcu)) {
 		ret = consumer_filter(uc, mm);
 		if (ret)
 			break;
 	}
@@ -1125,18 +1103,31 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
 	int err;
 
 	down_write(&uprobe->register_rwsem);
-	if (WARN_ON(!consumer_del(uprobe, uc))) {
-		err = -ENOENT;
-	} else {
-		err = register_for_each_vma(uprobe, NULL);
-		/* TODO : cant unregister? schedule a worker thread */
-		if (unlikely(err))
-			uprobe_warn(current, "unregister, leaking uprobe");
-	}
+
+	list_del_rcu(&uc->cons_node);
+	err = register_for_each_vma(uprobe, NULL);
+
 	up_write(&uprobe->register_rwsem);
 
-	if (!err)
-		put_uprobe(uprobe);
+	/* TODO : cant unregister? schedule a worker thread */
+	if (unlikely(err)) {
+		uprobe_warn(current, "unregister, leaking uprobe");
+		goto out_sync;
+	}
+
+	put_uprobe(uprobe);
+
+out_sync:
+	/*
+	 * Now that handler_chain() and handle_uretprobe_chain() iterate over
+	 * uprobe->consumers list under RCU protection without holding
+	 * uprobe->register_rwsem, we need to wait for RCU grace period to
+	 * make sure that we can't call into just unregistered
+	 * uprobe_consumer's callbacks anymore. If we don't do that, fast and
+	 * unlucky enough caller can free consumer's memory and cause
+	 * handler_chain() or handle_uretprobe_chain() to do an use-after-free.
+	 */
+	synchronize_srcu(&uprobes_srcu);
 }
 EXPORT_SYMBOL_GPL(uprobe_unregister);
 
@@ -1214,13 +1205,20 @@
 int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool add)
 {
 	struct uprobe_consumer *con;
-	int ret = -ENOENT;
+	int ret = -ENOENT, srcu_idx;
 
 	down_write(&uprobe->register_rwsem);
-	for (con = uprobe->consumers; con && con != uc ; con = con->next)
-		;
-	if (con)
-		ret = register_for_each_vma(uprobe, add ? uc : NULL);
+
+	srcu_idx = srcu_read_lock(&uprobes_srcu);
+	list_for_each_entry_srcu(con, &uprobe->consumers, cons_node,
+				 srcu_read_lock_held(&uprobes_srcu)) {
+		if (con == uc) {
+			ret = register_for_each_vma(uprobe, add ? uc : NULL);
+			break;
+		}
+	}
+	srcu_read_unlock(&uprobes_srcu, srcu_idx);
+
 	up_write(&uprobe->register_rwsem);
 
 	return ret;
@@ -2085,10 +2083,12 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 	struct uprobe_consumer *uc;
 	int remove = UPROBE_HANDLER_REMOVE;
 	bool need_prep = false; /* prepare return uprobe, when needed */
+	bool has_consumers = false;
 
-	down_read(&uprobe->register_rwsem);
 	current->utask->auprobe = &uprobe->arch;
-	for (uc = uprobe->consumers; uc; uc = uc->next) {
+
+	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+				 srcu_read_lock_held(&uprobes_srcu)) {
 		int rc = 0;
 
 		if (uc->handler) {
@@ -2101,17 +2101,24 @@
 			need_prep = true;
 
 		remove &= rc;
+		has_consumers = true;
 	}
 	current->utask->auprobe = NULL;
 
 	if (need_prep && !remove)
 		prepare_uretprobe(uprobe, regs); /* put bp at return */
 
-	if (remove && uprobe->consumers) {
-		WARN_ON(!uprobe_is_active(uprobe));
-		unapply_uprobe(uprobe, current->mm);
+	if (remove && has_consumers) {
+		down_read(&uprobe->register_rwsem);
+
+		/* re-check that removal is still required, this time under lock */
+		if (!filter_chain(uprobe, current->mm)) {
+			WARN_ON(!uprobe_is_active(uprobe));
+			unapply_uprobe(uprobe, current->mm);
+		}
+
+		up_read(&uprobe->register_rwsem);
 	}
-	up_read(&uprobe->register_rwsem);
 }
 
 static void
@@ -2119,13 +2126,15 @@ handle_uretprobe_chain(struct return_instance *ri, struct pt_regs *regs)
 {
 	struct uprobe *uprobe = ri->uprobe;
 	struct uprobe_consumer *uc;
+	int srcu_idx;
 
-	down_read(&uprobe->register_rwsem);
-	for (uc = uprobe->consumers; uc; uc = uc->next) {
+	srcu_idx = srcu_read_lock(&uprobes_srcu);
+	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+				 srcu_read_lock_held(&uprobes_srcu)) {
 		if (uc->ret_handler)
 			uc->ret_handler(uc, ri->func, regs);
 	}
-	up_read(&uprobe->register_rwsem);
+	srcu_read_unlock(&uprobes_srcu, srcu_idx);
 }
 
 static struct return_instance *find_next_ret_chain(struct return_instance *ri)

From patchwork Tue Aug 13 04:29:09 2024
Nakryiko To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org, willy@infradead.org, surenb@google.com, akpm@linux-foundation.org, linux-mm@kvack.org, Andrii Nakryiko Subject: [PATCH v3 05/13] perf/uprobe: split uprobe_unregister() Date: Mon, 12 Aug 2024 21:29:09 -0700 Message-ID: <20240813042917.506057-6-andrii@kernel.org> X-Mailer: git-send-email 2.43.5 In-Reply-To: <20240813042917.506057-1-andrii@kernel.org> References: <20240813042917.506057-1-andrii@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: DCA23A0014 X-Stat-Signature: gkgbjc43iimcoimo7a4yihg44qssfzp1 X-Rspam-User: X-HE-Tag: 1723523397-589974 X-HE-Meta: U2FsdGVkX19Qd9W4idb8pr0vVaGFDEHUJUZN0qBM5oqKHQ/qx7LtEV0hTQl9CfcDhJzYpN63NTXi1xX82gHeaqj/YQyHnREXPm+TaBhNI22AJ36GC5VZrz1UbqjZdcCj2+rFZdY2Ewu+1/pkLjs5tMC95bZ5pn9+5ToxJq8LpgTcnKJ/AYJpzB5ErLDaaQf0FDh7gVfxoZNdPNuxVn5WcI8lIzEojssgZ5DntbV3NzNV19CCQ/gR8D675ndt0j8QBe4aDJbdozCKM26c7W/vhDC36nIl3V/+gqQ4YKSuRSKYGBKl2Nah/TSjf5kWeA7CDp1SGaR1F7gmIxQB+QlLITQ4chzyCWEZcMojGmlIHud8v1UlFv+mXekIOBzZ1JRDwSsVqdjHFhIqj43ahbvABQe0ME6Bh3sQJR41f+qvoqkz/tmDtC8mx+rmEY/lYz09i3prHl+ZR5UJ4CuY6uCt4XzX6+yx9INP+oE0KHk0PCUSbSEjvyMDt/pCNgrK/1ZbH3URhQ08P4bexCmcpipFtgNIMSJHc3hcraHiNHKmsOcnl61pNUZFw2aTTNtFH4P0dlusij8vAXt+sXEHfiNw2v7NB3vEHHlQfxXxsjFgtPIoNlNH9GRkH5gk0LAXwKbj1VOCClvAHHyFB7AZ168668Z6MdvnzvVGBRLEocvya/6Cv3GeRPlCxG8zR31WAiZz6/h+zRxKqe7CNmFKLRs2qHJmuX6oUyxCy3GX1470WJ/8tURqhZYbiBH9yL5kfVH6QYNZ7izdwuXNKWtLHMnnlFNJNn+Ed1ppb5d+DXmmy6EcUEiMcufcbzRz4dxYi4hhyRS3Qquhq3+R44zJaltnU0tqxzGz7V6xVcWi4U+pYmn0yAb3sMI/qiP1bVyOQrqx9Yo+xnXAIEAyHCLaxzIqUBRQLaDg/pVqrOls3Q8ZL+ioTWjukh8EXVUTNZw0wuDWriGFtn/YE2WPeWGgYWy uoerRJ6O 483u4nZaZrulsP7Coey2DEk7DW0T1t0DLBqW8E852A0vONKYtG+mYteZwpaITHCwEl52+Zeb1M2rwbUufnkNNPdM2p1LpCzK0DK8Q4fKqKdF6Fkbe+dCsQv9J1aFhuw1ZRPdHD0u3qQatbV2LzIg2/5tZxGDGnWPC3s50qjFLK2MmKBE4rivMJB3zVtg95pKy7Ol8MlptQTPkiDxonPSW+fa2SSXJ+YwXTSaXiK3bS9goLW7fWmL4R+gy4L+HQ/U6mR7BZQYoE6MEmv9eq4Ef8R06po1sdMe+H+qBDlKqs+Jg+PGidXbrwlPrUTFgxRMArRw/dNdQ7bY2xCbYW4rFu70L9+ZgcbOFYlzC X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Peter Zijlstra With uprobe_unregister() having grown a synchronize_srcu(), it becomes fairly slow to call. Esp. since both users of this API call it in a loop. Peel off the sync_srcu() and do it once, after the loop. We also need to add uprobe_unregister_sync() into uprobe_register()'s error handling path, as we need to be careful about returning to the caller before we have a guarantee that partially attached consumer won't be called anymore. This is an unlikely slow path and this should be totally fine to be slow in the case of a failed attach. 
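Distilled, the intended calling convention after the split is the pattern
used by the converted call sites below (probes/cnt are illustrative names):

	/* Unregister N probes with a single grace-period wait. */
	for (i = 0; i < cnt; i++)
		uprobe_unregister_nosync(probes[i].uprobe, &probes[i].consumer);
	if (cnt)
		uprobe_unregister_sync();	/* one synchronize_srcu() for all */
	/* only now is it safe to free the consumers' memory */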
Signed-off-by: Peter Zijlstra (Intel)
Co-developed-by: Andrii Nakryiko
Signed-off-by: Andrii Nakryiko
---
 include/linux/uprobes.h                       |  8 +++++--
 kernel/events/uprobes.c                       | 21 +++++++++++++------
 kernel/trace/bpf_trace.c                      |  5 ++++-
 kernel/trace/trace_uprobe.c                   |  6 +++++-
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  3 ++-
 5 files changed, 32 insertions(+), 11 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 29c935b0d504..e41cdae5597b 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -108,7 +108,8 @@ extern unsigned long uprobe_get_trap_addr(struct pt_regs *regs);
 extern int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long vaddr, uprobe_opcode_t);
 extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t ref_ctr_offset, struct uprobe_consumer *uc);
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
-extern void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc);
+extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
+extern void uprobe_unregister_sync(void);
 extern int uprobe_mmap(struct vm_area_struct *vma);
 extern void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end);
 extern void uprobe_start_dup_mmap(void);
@@ -157,7 +158,10 @@ uprobe_apply(struct uprobe* uprobe, struct uprobe_consumer *uc, bool add)
 	return -ENOSYS;
 }
 static inline void
-uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
+uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc)
+{
+}
+static inline void uprobe_unregister_sync(void)
 {
 }
 static inline int uprobe_mmap(struct vm_area_struct *vma)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 7de1aaf50394..0b6d4c0a0088 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1094,11 +1094,11 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 }
 
 /**
- * uprobe_unregister - unregister an already registered probe.
+ * uprobe_unregister_nosync - unregister an already registered probe.
  * @uprobe: uprobe to remove
  * @uc: identify which probe if multiple probes are colocated.
  */
-void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
+void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc)
 {
 	int err;
 
@@ -1112,12 +1112,15 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
 	/* TODO : cant unregister? schedule a worker thread */
 	if (unlikely(err)) {
 		uprobe_warn(current, "unregister, leaking uprobe");
-		goto out_sync;
+		return;
 	}
 
 	put_uprobe(uprobe);
+}
+EXPORT_SYMBOL_GPL(uprobe_unregister_nosync);
 
-out_sync:
+void uprobe_unregister_sync(void)
+{
 	/*
 	 * Now that handler_chain() and handle_uretprobe_chain() iterate over
 	 * uprobe->consumers list under RCU protection without holding
@@ -1129,7 +1132,7 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
 	 */
 	synchronize_srcu(&uprobes_srcu);
 }
-EXPORT_SYMBOL_GPL(uprobe_unregister);
+EXPORT_SYMBOL_GPL(uprobe_unregister_sync);
 
 /**
  * uprobe_register - register a probe
@@ -1187,7 +1190,13 @@ struct uprobe *uprobe_register(struct inode *inode,
 	up_write(&uprobe->register_rwsem);
 
 	if (ret) {
-		uprobe_unregister(uprobe, uc);
+		uprobe_unregister_nosync(uprobe, uc);
+		/*
+		 * Registration might have partially succeeded, so we can have
+		 * this consumer being called right at this time. We need to
+		 * sync here. It's OK, it's an unlikely slow path.
+		 */
+		uprobe_unregister_sync();
 		return ERR_PTR(ret);
 	}
 
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 73c570b5988b..6b632710c98e 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3184,7 +3184,10 @@ static void bpf_uprobe_unregister(struct bpf_uprobe *uprobes, u32 cnt)
 	u32 i;
 
 	for (i = 0; i < cnt; i++)
-		uprobe_unregister(uprobes[i].uprobe, &uprobes[i].consumer);
+		uprobe_unregister_nosync(uprobes[i].uprobe, &uprobes[i].consumer);
+
+	if (cnt)
+		uprobe_unregister_sync();
 }
 
 static void bpf_uprobe_multi_link_release(struct bpf_link *link)
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 7eb79e0a5352..f7443e996b1b 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1097,6 +1097,7 @@ static int trace_uprobe_enable(struct trace_uprobe *tu, filter_func_t filter)
 static void __probe_event_disable(struct trace_probe *tp)
 {
 	struct trace_uprobe *tu;
+	bool sync = false;
 
 	tu = container_of(tp, struct trace_uprobe, tp);
 	WARN_ON(!uprobe_filter_is_empty(tu->tp.event->filter));
@@ -1105,9 +1106,12 @@ static void __probe_event_disable(struct trace_probe *tp)
 		if (!tu->uprobe)
 			continue;
 
-		uprobe_unregister(tu->uprobe, &tu->consumer);
+		uprobe_unregister_nosync(tu->uprobe, &tu->consumer);
+		sync = true;
 		tu->uprobe = NULL;
 	}
+	if (sync)
+		uprobe_unregister_sync();
 }
 
 static int probe_event_enable(struct trace_event_call *call,
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 3c0515a27842..1fc16657cf42 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -475,7 +475,8 @@ static void testmod_unregister_uprobe(void)
 	mutex_lock(&testmod_uprobe_mutex);
 
 	if (uprobe.uprobe) {
-		uprobe_unregister(uprobe.uprobe, &uprobe.consumer);
+		uprobe_unregister_nosync(uprobe.uprobe, &uprobe.consumer);
+		uprobe_unregister_sync();
 		path_put(&uprobe.path);
 		uprobe.uprobe = NULL;
 	}
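Note that callers of uprobe_register() need none of this on the error path:
as the uprobe_register() hunk above shows, the sync is done internally before
ERR_PTR() is returned. A caller-side sketch (hypothetical names):

	u = uprobe_register(inode, offset, 0, uc);
	if (IS_ERR(u)) {
		kfree(uc);	/* safe immediately: register already synced */
		return PTR_ERR(u);
	}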
From patchwork Tue Aug 13 04:29:10 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13761249

From: Andrii Nakryiko
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org, willy@infradead.org, surenb@google.com, akpm@linux-foundation.org, linux-mm@kvack.org, Andrii Nakryiko
Subject: [PATCH v3 06/13] rbtree: provide rb_find_rcu() / rb_find_add_rcu()
Date: Mon, 12 Aug 2024 21:29:10 -0700
Message-ID: <20240813042917.506057-7-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

From: Peter Zijlstra

Much like latch_tree, add two RCU methods for the regular RB-tree,
which can be used in conjunction with a seqcount to provide lockless
lookups.

Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Masami Hiramatsu (Google)
---
 include/linux/rbtree.h | 67 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
index f7edca369eda..7c173aa64e1e 100644
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -244,6 +244,42 @@ rb_find_add(struct rb_node *node, struct rb_root *tree,
 	return NULL;
 }
 
+/**
+ * rb_find_add_rcu() - find equivalent @node in @tree, or add @node
+ * @node: node to look-for / insert
+ * @tree: tree to search / modify
+ * @cmp: operator defining the node order
+ *
+ * Adds a Store-Release for link_node.
+ *
+ * Returns the rb_node matching @node, or NULL when no match is found and @node
+ * is inserted.
+ */
+static __always_inline struct rb_node *
+rb_find_add_rcu(struct rb_node *node, struct rb_root *tree,
+		int (*cmp)(struct rb_node *, const struct rb_node *))
+{
+	struct rb_node **link = &tree->rb_node;
+	struct rb_node *parent = NULL;
+	int c;
+
+	while (*link) {
+		parent = *link;
+		c = cmp(node, parent);
+
+		if (c < 0)
+			link = &parent->rb_left;
+		else if (c > 0)
+			link = &parent->rb_right;
+		else
+			return parent;
+	}
+
+	rb_link_node_rcu(node, parent, link);
+	rb_insert_color(node, tree);
+	return NULL;
+}
+
 /**
  * rb_find() - find @key in tree @tree
  * @key: key to match
@@ -272,6 +308,37 @@ rb_find(const void *key, const struct rb_root *tree,
 	return NULL;
 }
 
+/**
+ * rb_find_rcu() - find @key in tree @tree
+ * @key: key to match
+ * @tree: tree to search
+ * @cmp: operator defining the node order
+ *
+ * Notably, tree descent vs concurrent tree rotations is unsound and can result
+ * in false-negatives.
+ *
+ * Returns the rb_node matching @key or NULL.
+ */
+static __always_inline struct rb_node *
+rb_find_rcu(const void *key, const struct rb_root *tree,
+	    int (*cmp)(const void *key, const struct rb_node *))
+{
+	struct rb_node *node = tree->rb_node;
+
+	while (node) {
+		int c = cmp(key, node);
+
+		if (c < 0)
+			node = rcu_dereference_raw(node->rb_left);
+		else if (c > 0)
+			node = rcu_dereference_raw(node->rb_right);
+		else
+			return node;
+	}
+
+	return NULL;
+}
+
 /**
  * rb_find_first() - find the first @key in @tree
  * @key: key to match
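As a usage sketch (mynode/mytree are hypothetical names; the real user is the
uprobes code in the next patch, which additionally pairs the lockless lookup
with a seqcount to catch false negatives):

	struct mynode {
		struct rb_node rb;
		u64 key;
	};

	static struct rb_root mytree = RB_ROOT;
	static DEFINE_SPINLOCK(mytree_lock);	/* serializes all modifications */

	static int mynode_cmp_key(const void *key, const struct rb_node *node)
	{
		u64 k = *(const u64 *)key;
		const struct mynode *n = rb_entry(node, struct mynode, rb);

		return (k < n->key) ? -1 : (k > n->key);
	}

	static int mynode_cmp(struct rb_node *a, const struct rb_node *b)
	{
		return mynode_cmp_key(&rb_entry(a, struct mynode, rb)->key, b);
	}

	/* Writer: modifications still need the lock. */
	static bool mynode_insert(struct mynode *n)
	{
		struct rb_node *exist;

		spin_lock(&mytree_lock);
		exist = rb_find_add_rcu(&n->rb, &mytree, mynode_cmp);
		spin_unlock(&mytree_lock);

		return !exist;	/* false if an equal key was already present */
	}

	/*
	 * Reader: lockless. Node lifetime must be covered by an (S)RCU read
	 * section, and a miss may be a false negative if a rotation raced
	 * with the descent.
	 */
	static struct mynode *mynode_find(u64 key)
	{
		struct rb_node *node = rb_find_rcu(&key, &mytree, mynode_cmp_key);

		return node ? rb_entry(node, struct mynode, rb) : NULL;
	}

Freeing a removed node must of course be deferred (e.g. with kfree_rcu()) so
that a concurrent rb_find_rcu() descent never steps on freed memory.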
From patchwork Tue Aug 13 04:29:11 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13761248

From: Andrii Nakryiko
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org, willy@infradead.org, surenb@google.com, akpm@linux-foundation.org, linux-mm@kvack.org, Andrii Nakryiko
Subject: [PATCH v3 07/13] uprobes: perform lockless SRCU-protected uprobes_tree lookup
Date: Mon, 12 Aug 2024 21:29:11 -0700
Message-ID: <20240813042917.506057-8-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

Another big bottleneck to scalability is uprobe_treelock, which is taken in
a very hot path in handle_swbp(). Now that uprobes are SRCU-protected, take
advantage of that and make the uprobes_tree RB-tree lookup lockless.

To make RCU-protected lockless RB-tree lookup correct, we need to take into
account that such a lookup can return false negatives if there are parallel
RB-tree modifications (rotations) going on.
We use a seqcount to detect whether the RB-tree changed; if we find nothing
while the RB-tree was modified in between, we just retry. If the uprobe was
found, the lookup is guaranteed to be correct.

With all the lock-avoiding changes done, we get a pretty decent improvement
in performance and scalability of uprobes with the number of CPUs, even
though we are still nowhere near linear scalability. This is due to SRCU not
scaling very well with the number of CPUs on the particular hardware used
for testing (80-core Intel Xeon Gold 6138 CPU @ 2.00GHz), but also due to
the remaining mmap_lock, which is currently taken to resolve the interrupt
address to inode+offset and then to the uprobe instance. And, of course,
uretprobes still need similar RCU treatment to avoid refcounting in the hot
path, which will be addressed in follow-up patches.

Nevertheless, the improvement is good. We used BPF selftest-based uprobe-nop
and uretprobe-nop benchmarks to get the numbers below, varying the number of
CPUs on which uprobes and uretprobes are triggered.

BASELINE
========
uprobe-nop      ( 1 cpus): 3.032 ± 0.023M/s ( 3.032M/s/cpu)
uprobe-nop      ( 2 cpus): 3.452 ± 0.005M/s ( 1.726M/s/cpu)
uprobe-nop      ( 4 cpus): 3.663 ± 0.005M/s ( 0.916M/s/cpu)
uprobe-nop      ( 8 cpus): 3.718 ± 0.038M/s ( 0.465M/s/cpu)
uprobe-nop      (16 cpus): 3.344 ± 0.008M/s ( 0.209M/s/cpu)
uprobe-nop      (32 cpus): 2.288 ± 0.021M/s ( 0.071M/s/cpu)
uprobe-nop      (64 cpus): 3.205 ± 0.004M/s ( 0.050M/s/cpu)
uretprobe-nop   ( 1 cpus): 1.979 ± 0.005M/s ( 1.979M/s/cpu)
uretprobe-nop   ( 2 cpus): 2.361 ± 0.005M/s ( 1.180M/s/cpu)
uretprobe-nop   ( 4 cpus): 2.309 ± 0.002M/s ( 0.577M/s/cpu)
uretprobe-nop   ( 8 cpus): 2.253 ± 0.001M/s ( 0.282M/s/cpu)
uretprobe-nop   (16 cpus): 2.007 ± 0.000M/s ( 0.125M/s/cpu)
uretprobe-nop   (32 cpus): 1.624 ± 0.003M/s ( 0.051M/s/cpu)
uretprobe-nop   (64 cpus): 2.149 ± 0.001M/s ( 0.034M/s/cpu)

SRCU CHANGES
============
uprobe-nop      ( 1 cpus): 3.276 ± 0.005M/s ( 3.276M/s/cpu)
uprobe-nop      ( 2 cpus): 4.125 ± 0.002M/s ( 2.063M/s/cpu)
uprobe-nop      ( 4 cpus): 7.713 ± 0.002M/s ( 1.928M/s/cpu)
uprobe-nop      ( 8 cpus): 8.097 ± 0.006M/s ( 1.012M/s/cpu)
uprobe-nop      (16 cpus): 6.501 ± 0.056M/s ( 0.406M/s/cpu)
uprobe-nop      (32 cpus): 4.398 ± 0.084M/s ( 0.137M/s/cpu)
uprobe-nop      (64 cpus): 6.452 ± 0.000M/s ( 0.101M/s/cpu)
uretprobe-nop   ( 1 cpus): 2.055 ± 0.001M/s ( 2.055M/s/cpu)
uretprobe-nop   ( 2 cpus): 2.677 ± 0.000M/s ( 1.339M/s/cpu)
uretprobe-nop   ( 4 cpus): 4.561 ± 0.003M/s ( 1.140M/s/cpu)
uretprobe-nop   ( 8 cpus): 5.291 ± 0.002M/s ( 0.661M/s/cpu)
uretprobe-nop   (16 cpus): 5.065 ± 0.019M/s ( 0.317M/s/cpu)
uretprobe-nop   (32 cpus): 3.622 ± 0.003M/s ( 0.113M/s/cpu)
uretprobe-nop   (64 cpus): 3.723 ± 0.002M/s ( 0.058M/s/cpu)

Peak throughput increased from 3.7 mln/s (uprobe triggerings) up to about
8 mln/s. For uretprobes the bump is more modest, from 2.4 mln/s to 5 mln/s.
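The retry logic at the heart of this change, distilled from the
find_uprobe_rcu() implementation in the diff below:

	do {
		seq = read_seqcount_begin(&uprobes_seqcount);
		node = rb_find_rcu(&key, &uprobes_tree, __uprobe_cmp_key);
		if (node)
			return __node_2_uprobe(node);	/* hit: always trustworthy */
	} while (read_seqcount_retry(&uprobes_seqcount, seq));

	return NULL;	/* miss confirmed: no writer raced with the descent */

Only a miss has to be re-validated against the seqcount, because concurrent
rotations can cause false negatives but never false positives.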
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Andrii Nakryiko
---
 kernel/events/uprobes.c | 30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 0b6d4c0a0088..8559ca365679 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -40,6 +40,7 @@ static struct rb_root uprobes_tree = RB_ROOT;
 #define no_uprobe_events()	RB_EMPTY_ROOT(&uprobes_tree)
 
 static DEFINE_RWLOCK(uprobes_treelock);	/* serialize rbtree access */
+static seqcount_rwlock_t uprobes_seqcount = SEQCNT_RWLOCK_ZERO(uprobes_seqcount, &uprobes_treelock);
 
 DEFINE_STATIC_SRCU(uprobes_srcu);
 
@@ -634,8 +635,11 @@ static void put_uprobe(struct uprobe *uprobe)
 
 	write_lock(&uprobes_treelock);
 
-	if (uprobe_is_active(uprobe))
+	if (uprobe_is_active(uprobe)) {
+		write_seqcount_begin(&uprobes_seqcount);
 		rb_erase(&uprobe->rb_node, &uprobes_tree);
+		write_seqcount_end(&uprobes_seqcount);
+	}
 
 	write_unlock(&uprobes_treelock);
 
@@ -701,14 +705,26 @@ static struct uprobe *find_uprobe_rcu(struct inode *inode, loff_t offset)
 		.offset = offset,
 	};
 	struct rb_node *node;
+	unsigned int seq;
 
 	lockdep_assert(srcu_read_lock_held(&uprobes_srcu));
 
-	read_lock(&uprobes_treelock);
-	node = rb_find(&key, &uprobes_tree, __uprobe_cmp_key);
-	read_unlock(&uprobes_treelock);
+	do {
+		seq = read_seqcount_begin(&uprobes_seqcount);
+		node = rb_find_rcu(&key, &uprobes_tree, __uprobe_cmp_key);
+		/*
+		 * Lockless RB-tree lookups can result only in false negatives.
+		 * If the element is found, it is correct and can be returned
+		 * under RCU protection. If we find nothing, we need to
+		 * validate that seqcount didn't change. If it did, we have to
+		 * try again as we might have missed the element (false
+		 * negative). If seqcount is unchanged, search truly failed.
+		 */
+		if (node)
+			return __node_2_uprobe(node);
+	} while (read_seqcount_retry(&uprobes_seqcount, seq));
-	return node ? __node_2_uprobe(node) : NULL;
+	return NULL;
 }
 
 /*
@@ -730,7 +746,7 @@ static struct uprobe *__insert_uprobe(struct uprobe *uprobe)
 {
 	struct rb_node *node;
 again:
-	node = rb_find_add(&uprobe->rb_node, &uprobes_tree, __uprobe_cmp);
+	node = rb_find_add_rcu(&uprobe->rb_node, &uprobes_tree, __uprobe_cmp);
 	if (node) {
 		struct uprobe *u = __node_2_uprobe(node);
 
@@ -755,7 +771,9 @@ static struct uprobe *insert_uprobe(struct uprobe *uprobe)
 	struct uprobe *u;
 
 	write_lock(&uprobes_treelock);
+	write_seqcount_begin(&uprobes_seqcount);
 	u = __insert_uprobe(uprobe);
+	write_seqcount_end(&uprobes_seqcount);
 	write_unlock(&uprobes_treelock);
 
 	return u;
From patchwork Tue Aug 13 04:29:12 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13761250

From: Andrii Nakryiko
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org, willy@infradead.org, surenb@google.com, akpm@linux-foundation.org, linux-mm@kvack.org, Andrii Nakryiko
Subject: [PATCH v3 08/13] uprobes: switch to RCU Tasks Trace flavor for better performance
Date: Mon, 12 Aug 2024 21:29:12 -0700
Message-ID: <20240813042917.506057-9-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

This patch switches uprobes' SRCU usage to the RCU Tasks Trace flavor, which
is optimized for more lightweight and quick readers (at the expense of slower
writers, which for uprobes is a fine tradeoff) and has better performance and
scalability with the number of CPUs.

As with baseline vs SRCU, we've benchmarked the SRCU-based implementation
against the RCU Tasks Trace implementation.
SRCU
====
uprobe-nop      ( 1 cpus): 3.276 ± 0.005M/s ( 3.276M/s/cpu)
uprobe-nop      ( 2 cpus): 4.125 ± 0.002M/s ( 2.063M/s/cpu)
uprobe-nop      ( 4 cpus): 7.713 ± 0.002M/s ( 1.928M/s/cpu)
uprobe-nop      ( 8 cpus): 8.097 ± 0.006M/s ( 1.012M/s/cpu)
uprobe-nop      (16 cpus): 6.501 ± 0.056M/s ( 0.406M/s/cpu)
uprobe-nop      (32 cpus): 4.398 ± 0.084M/s ( 0.137M/s/cpu)
uprobe-nop      (64 cpus): 6.452 ± 0.000M/s ( 0.101M/s/cpu)
uretprobe-nop   ( 1 cpus): 2.055 ± 0.001M/s ( 2.055M/s/cpu)
uretprobe-nop   ( 2 cpus): 2.677 ± 0.000M/s ( 1.339M/s/cpu)
uretprobe-nop   ( 4 cpus): 4.561 ± 0.003M/s ( 1.140M/s/cpu)
uretprobe-nop   ( 8 cpus): 5.291 ± 0.002M/s ( 0.661M/s/cpu)
uretprobe-nop   (16 cpus): 5.065 ± 0.019M/s ( 0.317M/s/cpu)
uretprobe-nop   (32 cpus): 3.622 ± 0.003M/s ( 0.113M/s/cpu)
uretprobe-nop   (64 cpus): 3.723 ± 0.002M/s ( 0.058M/s/cpu)

RCU Tasks Trace
===============
uprobe-nop      ( 1 cpus): 3.396 ± 0.002M/s ( 3.396M/s/cpu)
uprobe-nop      ( 2 cpus): 4.271 ± 0.006M/s ( 2.135M/s/cpu)
uprobe-nop      ( 4 cpus): 8.499 ± 0.015M/s ( 2.125M/s/cpu)
uprobe-nop      ( 8 cpus): 10.355 ± 0.028M/s ( 1.294M/s/cpu)
uprobe-nop      (16 cpus): 7.615 ± 0.099M/s ( 0.476M/s/cpu)
uprobe-nop      (32 cpus): 4.430 ± 0.007M/s ( 0.138M/s/cpu)
uprobe-nop      (64 cpus): 6.887 ± 0.020M/s ( 0.108M/s/cpu)
uretprobe-nop   ( 1 cpus): 2.174 ± 0.001M/s ( 2.174M/s/cpu)
uretprobe-nop   ( 2 cpus): 2.853 ± 0.001M/s ( 1.426M/s/cpu)
uretprobe-nop   ( 4 cpus): 4.913 ± 0.002M/s ( 1.228M/s/cpu)
uretprobe-nop   ( 8 cpus): 5.883 ± 0.002M/s ( 0.735M/s/cpu)
uretprobe-nop   (16 cpus): 5.147 ± 0.001M/s ( 0.322M/s/cpu)
uretprobe-nop   (32 cpus): 3.738 ± 0.008M/s ( 0.117M/s/cpu)
uretprobe-nop   (64 cpus): 4.397 ± 0.002M/s ( 0.069M/s/cpu)

Peak throughput for uprobes increases from 8 mln/s to 10.3 mln/s (+28%!),
and for uretprobes from 5.3 mln/s to 5.8 mln/s (+11%), as we have more work
to do on the uretprobe side.

Even single-thread (no contention) performance is slightly better: 3.276
mln/s to 3.396 mln/s (+3.5%) for uprobes, and 2.055 mln/s to 2.174 mln/s
(+5.8%) for uretprobes.
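The conversion itself is mechanical; the reader/updater pairing after this
patch boils down to the following (elided from the diff below):

	/* reader, e.g. handler_chain() or handle_uretprobe_chain() */
	rcu_read_lock_trace();
	list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node,
				rcu_read_lock_trace_held()) {
		/* invoke uc->handler / uc->ret_handler */
	}
	rcu_read_unlock_trace();

	/* updater, i.e. uprobe_unregister_sync() */
	synchronize_rcu_tasks_trace();	/* waits out all trace-RCU readers */

Unlike srcu_read_lock(), rcu_read_lock_trace() returns no index that must be
threaded through to the matching unlock, which is also why handle_swbp() can
drop its srcu_idx local in this patch.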
Signed-off-by: Andrii Nakryiko
---
 kernel/events/uprobes.c | 37 +++++++++++++++----------------------
 1 file changed, 15 insertions(+), 22 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 8559ca365679..0480ad841942 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -42,8 +42,6 @@ static struct rb_root uprobes_tree = RB_ROOT;
 static DEFINE_RWLOCK(uprobes_treelock);	/* serialize rbtree access */
 static seqcount_rwlock_t uprobes_seqcount = SEQCNT_RWLOCK_ZERO(uprobes_seqcount, &uprobes_treelock);
 
-DEFINE_STATIC_SRCU(uprobes_srcu);
-
 #define UPROBES_HASH_SZ	13
 /* serialize uprobe->pending_list */
 static struct mutex uprobes_mmap_mutex[UPROBES_HASH_SZ];
@@ -652,7 +650,7 @@ static void put_uprobe(struct uprobe *uprobe)
 	delayed_uprobe_remove(uprobe, NULL);
 	mutex_unlock(&delayed_uprobe_lock);
 
-	call_srcu(&uprobes_srcu, &uprobe->rcu, uprobe_free_rcu);
+	call_rcu_tasks_trace(&uprobe->rcu, uprobe_free_rcu);
 }
 
 static __always_inline
@@ -707,7 +705,7 @@ static struct uprobe *find_uprobe_rcu(struct inode *inode, loff_t offset)
 	struct rb_node *node;
 	unsigned int seq;
 
-	lockdep_assert(srcu_read_lock_held(&uprobes_srcu));
+	lockdep_assert(rcu_read_lock_trace_held());
 
 	do {
 		seq = read_seqcount_begin(&uprobes_seqcount);
@@ -924,8 +922,7 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
 	bool ret = false;
 
 	down_read(&uprobe->consumer_rwsem);
-	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
-				 srcu_read_lock_held(&uprobes_srcu)) {
+	list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
 		ret = consumer_filter(uc, mm);
 		if (ret)
 			break;
@@ -1148,7 +1145,7 @@ void uprobe_unregister_sync(void)
 	 * unlucky enough caller can free consumer's memory and cause
	 * handler_chain() or handle_uretprobe_chain() to do a use-after-free.
 	 */
-	synchronize_srcu(&uprobes_srcu);
+	synchronize_rcu_tasks_trace();
 }
 EXPORT_SYMBOL_GPL(uprobe_unregister_sync);
 
@@ -1232,19 +1229,18 @@ EXPORT_SYMBOL_GPL(uprobe_register);
 int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool add)
 {
 	struct uprobe_consumer *con;
-	int ret = -ENOENT, srcu_idx;
+	int ret = -ENOENT;
 
 	down_write(&uprobe->register_rwsem);
 
-	srcu_idx = srcu_read_lock(&uprobes_srcu);
-	list_for_each_entry_srcu(con, &uprobe->consumers, cons_node,
-				 srcu_read_lock_held(&uprobes_srcu)) {
+	rcu_read_lock_trace();
+	list_for_each_entry_rcu(con, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
 		if (con == uc) {
			ret = register_for_each_vma(uprobe, add ? uc : NULL);
 			break;
 		}
 	}
-	srcu_read_unlock(&uprobes_srcu, srcu_idx);
+	rcu_read_unlock_trace();
 
 	up_write(&uprobe->register_rwsem);
 
@@ -2114,8 +2110,7 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 
 	current->utask->auprobe = &uprobe->arch;
 
-	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
-				 srcu_read_lock_held(&uprobes_srcu)) {
+	list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
 		int rc = 0;
 
 		if (uc->handler) {
@@ -2153,15 +2148,13 @@ handle_uretprobe_chain(struct return_instance *ri, struct pt_regs *regs)
 {
 	struct uprobe *uprobe = ri->uprobe;
 	struct uprobe_consumer *uc;
-	int srcu_idx;
 
-	srcu_idx = srcu_read_lock(&uprobes_srcu);
-	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
-				 srcu_read_lock_held(&uprobes_srcu)) {
+	rcu_read_lock_trace();
+	list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
 		if (uc->ret_handler)
 			uc->ret_handler(uc, ri->func, regs);
 	}
-	srcu_read_unlock(&uprobes_srcu, srcu_idx);
+	rcu_read_unlock_trace();
 }
 
 static struct return_instance *find_next_ret_chain(struct return_instance *ri)
@@ -2246,13 +2239,13 @@ static void handle_swbp(struct pt_regs *regs)
 {
 	struct uprobe *uprobe;
 	unsigned long bp_vaddr;
-	int is_swbp, srcu_idx;
+	int is_swbp;
 
 	bp_vaddr = uprobe_get_swbp_addr(regs);
 	if (bp_vaddr == uprobe_get_trampoline_vaddr())
 		return uprobe_handle_trampoline(regs);
 
-	srcu_idx = srcu_read_lock(&uprobes_srcu);
+	rcu_read_lock_trace();
 
 	uprobe = find_active_uprobe_rcu(bp_vaddr, &is_swbp);
 	if (!uprobe) {
@@ -2310,7 +2303,7 @@ static void handle_swbp(struct pt_regs *regs)
 
 out:
 	/* arch_uprobe_skip_sstep() succeeded, or restart if can't singlestep */
-	srcu_read_unlock(&uprobes_srcu, srcu_idx);
+	rcu_read_unlock_trace();
 }
 
 /*
From patchwork Tue Aug 13 04:29:13 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13761251

From: Andrii Nakryiko
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org, willy@infradead.org, surenb@google.com, akpm@linux-foundation.org, linux-mm@kvack.org, Andrii Nakryiko
Subject: [PATCH RFC v3 09/13] uprobes: SRCU-protect uretprobe lifetime (with timeout)
Date: Mon, 12 Aug 2024 21:29:13 -0700
Message-ID: <20240813042917.506057-10-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

Avoid taking a refcount on the uprobe in prepare_uretprobe(); instead, take
a uretprobe-specific SRCU lock and keep it active as the kernel transfers
control back to user space.

Given we can't rely on user space returning from the traced function within
a reasonable time period, we need to make sure not to keep the SRCU lock
active for too long. To that effect, we employ a timer callback which is
meant to terminate the SRCU lock region after a predefined timeout
(currently set to 100ms) and instead transfer the underlying struct uprobe's
lifetime protection to refcounting. This fallback to less scalable
refcounting after 100ms is a fine tradeoff from uretprobe's scalability and
performance perspective, because uretprobing long-running user functions
inherently doesn't run into scalability issues (there is just not enough
frequency to cause noticeable issues with either performance or
scalability).

The overall trick is in ensuring synchronization between the current thread
and the timer callback, which can fire on some other thread. To cope with
that with minimal logic complications, we add an hprobe wrapper which
contains all the racy and synchronization-related issues behind a small
number of basic helpers: hprobe_expire() and a matching pair of
hprobe_consume() and hprobe_finalize(). Other than that, the current
thread's logic stays the same, as the timer thread cannot modify
return_instance state (or add new/remove old return_instances). It only
takes care of SRCU unlocking and uprobe refcounting, which is hidden from
the higher-level uretprobe handling logic.

We use an atomic xchg() in hprobe_consume(), which is called from the
performance-critical handle_uretprobe_chain() function run in the current
context. When uncontended, this xchg() doesn't seem to hurt performance, as
there are no other competing CPUs fighting for the same cache line. We also
mark struct return_instance as ____cacheline_aligned to ensure no false
sharing can happen.
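A simplified model of the race resolution, stripped of kernel details (the
try_refcount/put_refcount/srcu_unlock helpers below are hypothetical
stand-ins for refcount_inc_not_zero(), put_uprobe(), and
__srcu_read_unlock(); only the atomic ownership transfer over the leased
pointer is modeled):

	#include <stdatomic.h>
	#include <stdbool.h>

	#define DEAD ((void *)-1)

	struct hprobe_model {
		void *stable;		/* written once at init, then read-only */
		_Atomic(void *) leased;	/* == stable, NULL, or DEAD */
	};

	bool try_refcount(void *p);
	void put_refcount(void *p);
	void srcu_unlock(void);

	/* Current task, at function return; runs at most once per instance. */
	void *consume(struct hprobe_model *h, bool *under_rcu)
	{
		void *p = atomic_exchange(&h->leased, NULL);

		if (p == DEAD) {	/* timer fired and lost the refcount race */
			*under_rcu = false;
			return NULL;
		}
		if (p) {		/* we won: we now own the SRCU unlock */
			*under_rcu = true;
			return p;
		}
		*under_rcu = false;	/* timer already converted lease to refcount */
		return h->stable;
	}

	/* Timer callback: convert the SRCU lease into a refcount. */
	void expire(struct hprobe_model *h)
	{
		void *p = atomic_load(&h->leased);
		void *expected = p;

		if (!p || p == DEAD)
			return;
		if (!try_refcount(p)) {
			/* object is going away; poison unless consume() raced us */
			atomic_compare_exchange_strong(&h->leased, &expected, DEAD);
			return;
		}
		if (atomic_compare_exchange_strong(&h->leased, &expected, NULL))
			srcu_unlock();		/* we won: end the SRCU lease */
		else
			put_refcount(p);	/* consume() won: undo our bump */
	}

Exactly one of consume() and expire() observes the non-NULL lease, so the
SRCU unlock happens exactly once; that is the invariant the real
hprobe_consume()/hprobe_expire() pair below maintains.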
One more technical note: we need to make sure that the list of return
instances can be safely traversed under RCU from the timer callback, so we
delay return_instance freeing with kfree_rcu() and make sure that list
modifications use RCU-aware operations.

Also, given the SRCU lock survives the transition from kernel to user space
and back, we need to use the lower-level __srcu_read_lock() and
__srcu_read_unlock() to avoid lockdep complaining.

Signed-off-by: Andrii Nakryiko
---
 include/linux/uprobes.h |  49 ++++++-
 kernel/events/uprobes.c | 294 ++++++++++++++++++++++++++++++++++------
 2 files changed, 301 insertions(+), 42 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index e41cdae5597b..9a0aa0b2a5fe 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 
 struct uprobe;
 struct vm_area_struct;
@@ -48,6 +49,45 @@ enum uprobe_task_state {
 	UTASK_SSTEP_TRAPPED,
 };
 
+/*
+ * Hybrid lifetime uprobe. Represents a uprobe instance that could be either
+ * SRCU protected (with SRCU protection eventually potentially timing out),
+ * refcounted using uprobe->ref, or there could be no valid uprobe (NULL).
+ *
+ * hprobe's internal state is set up such that background timer thread can
+ * atomically "downgrade" temporarily RCU-protected uprobe into refcounted one
+ * (or no uprobe, if refcounting failed).
+ *
+ * *stable* pointer always points to the uprobe (or could be NULL if there
+ * was no valid underlying uprobe to begin with).
+ *
+ * *leased* pointer is the key to achieving race-free atomic lifetime state
+ * transition and can have three possible states:
+ *   - either the same non-NULL value as *stable*, in which case uprobe is
+ *     SRCU-protected;
+ *   - NULL, in which case uprobe (if there is any) is refcounted;
+ *   - special __UPROBE_DEAD value, which represents an uprobe that was SRCU
+ *     protected initially, but SRCU period timed out and we attempted to
+ *     convert it to refcounted, but refcount_inc_not_zero() failed, because
+ *     uprobe effectively went away (the last consumer unsubscribed). In this
+ *     case it's important to know that *stable* pointer (which still has
+ *     non-NULL uprobe pointer) shouldn't be used, because lifetime of
+ *     underlying uprobe is not guaranteed anymore. __UPROBE_DEAD is just an
+ *     internal marker and is handled transparently by hprobe_consume() helper.
+ *
+ * When uprobe is SRCU-protected, we also record srcu_idx value, necessary for
+ * SRCU unlocking.
+ *
+ * See hprobe_expire() and hprobe_consume() for the details of race-free
+ * uprobe state transitions. It all hinges on atomic xchg() over the *leased*
+ * pointer. *stable* pointer, once initially set, is not modified concurrently.
+ */
+struct hprobe {
+	struct uprobe *stable;
+	struct uprobe *leased;
+	int srcu_idx;
+};
+
 /*
  * uprobe_task: Metadata of a task while it singlesteps.
 */
@@ -68,6 +108,7 @@ struct uprobe_task {
 	};
 
 	struct uprobe			*active_uprobe;
+	struct timer_list		ri_timer;
 	unsigned long			xol_vaddr;
 
 	struct arch_uprobe		*auprobe;
@@ -77,14 +118,18 @@ struct uprobe_task {
 };
 
 struct return_instance {
-	struct uprobe		*uprobe;
 	unsigned long		func;
 	unsigned long		stack;		/* stack pointer */
 	unsigned long		orig_ret_vaddr; /* original return address */
 	bool			chained;	/* true, if instance is nested */
 
 	struct return_instance	*next;		/* keep as stack */
-};
+
+	union {
+		struct hprobe		hprobe;
+		struct rcu_head		rcu;
+	};
+} ____cacheline_aligned;
 
 enum rp_check {
 	RP_CHECK_CALL,
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 0480ad841942..26acd06871e6 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -49,6 +50,9 @@ static struct mutex uprobes_mmap_mutex[UPROBES_HASH_SZ];
 
 DEFINE_STATIC_PERCPU_RWSEM(dup_mmap_sem);
 
+/* Covers return_instance's uprobe lifetime. */
+DEFINE_STATIC_SRCU(uretprobes_srcu);
+
 /* Have a copy of original instruction */
 #define UPROBE_COPY_INSN	0
 
@@ -594,13 +598,6 @@ set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long v
 			*(uprobe_opcode_t *)&auprobe->insn);
 }
 
-/* uprobe should have guaranteed positive refcount */
-static struct uprobe *get_uprobe(struct uprobe *uprobe)
-{
-	refcount_inc(&uprobe->ref);
-	return uprobe;
-}
-
 /*
  * uprobe should have guaranteed lifetime, which can be either of:
  * - caller already has refcount taken (and wants an extra one);
@@ -619,13 +616,20 @@ static inline bool uprobe_is_active(struct uprobe *uprobe)
 	return !RB_EMPTY_NODE(&uprobe->rb_node);
 }
 
-static void uprobe_free_rcu(struct rcu_head *rcu)
+static void uprobe_free_rcu_tasks_trace(struct rcu_head *rcu)
 {
 	struct uprobe *uprobe = container_of(rcu, struct uprobe, rcu);
 
 	kfree(uprobe);
 }
 
+static void uprobe_free_srcu(struct rcu_head *rcu)
+{
+	struct uprobe *uprobe = container_of(rcu, struct uprobe, rcu);
+
+	call_rcu_tasks_trace(&uprobe->rcu, uprobe_free_rcu_tasks_trace);
+}
+
 static void put_uprobe(struct uprobe *uprobe)
 {
 	if (!refcount_dec_and_test(&uprobe->ref))
@@ -650,7 +654,146 @@ static void put_uprobe(struct uprobe *uprobe)
 	delayed_uprobe_remove(uprobe, NULL);
 	mutex_unlock(&delayed_uprobe_lock);
 
-	call_rcu_tasks_trace(&uprobe->rcu, uprobe_free_rcu);
+	/* start srcu -> rcu_tasks_trace -> kfree chain */
+	call_srcu(&uretprobes_srcu, &uprobe->rcu, uprobe_free_srcu);
+}
+
+/*
+ * Special marker pointer for when ri_timer() expired, unlocking RCU, but
+ * failed to acquire refcount on uprobe (because it doesn't have any
+ * associated consumer anymore, for example). In such case it's important for
+ * hprobe_consume() to return NULL uprobe, instead of "stable" uprobe pointer,
+ * as that one isn't protected by either refcount nor RCU region now.
+ */
+#define __UPROBE_DEAD ((struct uprobe *)(-0xdead))
+
+#define RI_TIMER_PERIOD (HZ/10) /* 100 ms */
+
+/* Initialize hprobe as SRCU-protected "leased" uprobe */
+static void hprobe_init_leased(struct hprobe *hprobe, struct uprobe *uprobe, int srcu_idx)
+{
+	hprobe->srcu_idx = srcu_idx;
+	hprobe->stable = uprobe;
+	hprobe->leased = uprobe;
+}
+
+/* Initialize hprobe as refcounted ("stable") uprobe (uprobe can be NULL).
 */
+static void hprobe_init_stable(struct hprobe *hprobe, struct uprobe *uprobe)
+{
+	hprobe->srcu_idx = -1;
+	hprobe->stable = uprobe;
+	hprobe->leased = NULL;
+}
+
+/*
+ * hprobe_consume() fetches hprobe's underlying uprobe and detects whether
+ * uprobe is still SRCU protected, or is refcounted. hprobe_consume() can be
+ * used only once for a given hprobe.
+ *
+ * Caller has to perform SRCU unlock if under_rcu is set to true;
+ * otherwise, either properly refcounted uprobe is returned or NULL.
+ */
+static inline struct uprobe *hprobe_consume(struct hprobe *hprobe, bool *under_rcu)
+{
+	struct uprobe *uprobe;
+
+	uprobe = xchg(&hprobe->leased, NULL);
+	if (uprobe) {
+		if (unlikely(uprobe == __UPROBE_DEAD)) {
+			*under_rcu = false;
+			return NULL;
+		}
+
+		*under_rcu = true;
+		return uprobe;
+	}
+
+	*under_rcu = false;
+	return hprobe->stable;
+}
+
+/*
+ * Reset hprobe state and, if under_rcu is true, release SRCU lock.
+ * hprobe_finalize() can only be used from current context after
+ * hprobe_consume() call (which determines uprobe and under_rcu value).
+ */
+static void hprobe_finalize(struct hprobe *hprobe, struct uprobe *uprobe, bool under_rcu)
+{
+	if (under_rcu)
+		__srcu_read_unlock(&uretprobes_srcu, hprobe->srcu_idx);
+	else if (uprobe)
+		put_uprobe(uprobe);
+	/* prevent free_ret_instance() from double-putting uprobe */
+	hprobe->stable = NULL;
+}
+
+/*
+ * Attempt to switch (atomically) uprobe from being RCU protected ("leased")
+ * to refcounted ("stable") state. Competes with hprobe_consume(); only one of
+ * them can win the race to perform SRCU unlocking. Whoever wins must perform
+ * SRCU unlock.
+ *
+ * Returns underlying valid uprobe or NULL, if there was no underlying uprobe
+ * to begin with or we failed to bump its refcount and it's going away.
+ *
+ * Returned non-NULL uprobe can be still safely used within an ongoing SRCU
+ * locked region. It's not guaranteed that returned uprobe has a positive
+ * refcount, so caller has to attempt try_get_uprobe(), if it needs to use
+ * returned uprobe instance beyond ongoing SRCU lock region. See dup_utask().
+ */
+static struct uprobe *hprobe_expire(struct hprobe *hprobe)
+{
+	struct uprobe *uprobe;
+
+	/*
+	 * return_instance's hprobe is protected by RCU.
+	 * Underlying uprobe is itself protected from reuse by SRCU.
+	 */
+	lockdep_assert(rcu_read_lock_held() && srcu_read_lock_held(&uretprobes_srcu));
+
+	/*
+	 * Leased pointer can only be NULL, __UPROBE_DEAD, or some valid uprobe
+	 * pointer. This pointer can only be updated to NULL or __UPROBE_DEAD,
+	 * not any other valid uprobe pointer. So it's safe to fetch it with
+	 * READ_ONCE() and try to refcount it, if it's not NULL or __UPROBE_DEAD.
+	 */
+	uprobe = data_race(READ_ONCE(hprobe->leased));
+	if (!uprobe || uprobe == __UPROBE_DEAD)
+		return NULL;
+
+	if (!try_get_uprobe(uprobe)) {
+		/*
+		 * hprobe_consume() might have xchg()'ed to NULL already,
+		 * in which case we shouldn't set __UPROBE_DEAD.
+		 */
+		cmpxchg(&hprobe->leased, uprobe, __UPROBE_DEAD);
+		return NULL;
+	}
+
+	/*
+	 * Even if hprobe_consume() won and unlocked SRCU, we still have
+	 * a guarantee that uprobe won't be freed (and thus won't be reused)
+	 * because our caller maintains its own SRCU locked region.
+	 * So cmpxchg() below is well-formed.
+	 */
+	if (cmpxchg(&hprobe->leased, uprobe, NULL)) {
+		/*
+		 * At this point uprobe is properly refcounted, so it's safe
+		 * to end its original SRCU locked region.
+		 */
+		__srcu_read_unlock(&uretprobes_srcu, hprobe->srcu_idx);
+		return uprobe;
+	}
+
+	/* We lost the race, undo our refcount bump. It can drop to zero. */
+	put_uprobe(uprobe);
+
+	/*
+	 * We return underlying uprobe nevertheless because it's still valid
+	 * until the end of current SRCU locked region, and can be used to
+	 * try_get_uprobe(). This is used in dup_utask().
+		 */
+		__srcu_read_unlock(&uretprobes_srcu, hprobe->srcu_idx);
+		return uprobe;
+	}
+
+	/* We lost the race, undo our refcount bump. It can drop to zero. */
+	put_uprobe(uprobe);
+
+	/*
+	 * We return the underlying uprobe nevertheless, because it's still
+	 * valid until the end of the current SRCU locked region, and can be
+	 * passed to try_get_uprobe(). This is used in dup_utask().
+	 */
+	return uprobe;
+}
 
 static __always_inline
@@ -1727,11 +1870,18 @@ unsigned long uprobe_get_trap_addr(struct pt_regs *regs)
 	return instruction_pointer(regs);
 }
 
-static struct return_instance *free_ret_instance(struct return_instance *ri)
+static struct return_instance *free_ret_instance(struct return_instance *ri, bool cleanup_hprobe)
 {
 	struct return_instance *next = ri->next;
 
-	put_uprobe(ri->uprobe);
-	kfree(ri);
+	struct uprobe *uprobe;
+	bool under_rcu;
+
+	if (cleanup_hprobe) {
+		uprobe = hprobe_consume(&ri->hprobe, &under_rcu);
+		hprobe_finalize(&ri->hprobe, uprobe, under_rcu);
+	}
+
+	kfree_rcu(ri, rcu);
 	return next;
 }
 
@@ -1747,18 +1897,51 @@ void uprobe_free_utask(struct task_struct *t)
 	if (!utask)
 		return;
 
+	timer_delete_sync(&utask->ri_timer);
+
 	if (utask->active_uprobe)
 		put_uprobe(utask->active_uprobe);
 
 	ri = utask->return_instances;
 	while (ri)
-		ri = free_ret_instance(ri);
+		ri = free_ret_instance(ri, true /* cleanup_hprobe */);
 
 	xol_free_insn_slot(t);
 	kfree(utask);
 	t->utask = NULL;
 }
 
+#define for_each_ret_instance_rcu(pos, head) \
+	for (pos = rcu_dereference_raw(head); pos; pos = rcu_dereference_raw(pos->next))
+
+static void ri_timer(struct timer_list *timer)
+{
+	struct uprobe_task *utask = container_of(timer, struct uprobe_task, ri_timer);
+	struct return_instance *ri;
+
+	/* SRCU protects uprobe from reuse for the cmpxchg() inside hprobe_expire(). */
+	guard(srcu)(&uretprobes_srcu);
+	/* RCU protects return_instance from freeing. */
+	guard(rcu)();
+
+	for_each_ret_instance_rcu(ri, utask->return_instances) {
+		hprobe_expire(&ri->hprobe);
+	}
+}
+
+static struct uprobe_task *alloc_utask(void)
+{
+	struct uprobe_task *utask;
+
+	utask = kzalloc(sizeof(*utask), GFP_KERNEL);
+	if (!utask)
+		return NULL;
+
+	timer_setup(&utask->ri_timer, ri_timer, 0);
+
+	return utask;
+}
+
 /*
  * Allocate a uprobe_task object for the task if necessary.
  * Called when the thread hits a breakpoint.
@@ -1770,7 +1953,7 @@ void uprobe_free_utask(struct task_struct *t)
 static struct uprobe_task *get_utask(void)
 {
 	if (!current->utask)
-		current->utask = kzalloc(sizeof(struct uprobe_task), GFP_KERNEL);
+		current->utask = alloc_utask();
 	return current->utask;
 }
 
@@ -1778,12 +1961,16 @@ static int dup_utask(struct task_struct *t, struct uprobe_task *o_utask)
 {
 	struct uprobe_task *n_utask;
 	struct return_instance **p, *o, *n;
+	struct uprobe *uprobe;
 
-	n_utask = kzalloc(sizeof(struct uprobe_task), GFP_KERNEL);
+	n_utask = alloc_utask();
 	if (!n_utask)
 		return -ENOMEM;
 	t->utask = n_utask;
 
+	/* protect uprobes from freeing, we'll need to try_get_uprobe() them */
+	guard(srcu)(&uretprobes_srcu);
+
 	p = &n_utask->return_instances;
 	for (o = o_utask->return_instances; o; o = o->next) {
 		n = kmalloc(sizeof(struct return_instance), GFP_KERNEL);
@@ -1791,17 +1978,24 @@ static int dup_utask(struct task_struct *t, struct uprobe_task *o_utask)
 			return -ENOMEM;
 
 		*n = *o;
+
+		/* see hprobe_expire() comments */
+		uprobe = hprobe_expire(&o->hprobe);
+		if (uprobe) /* refcount bump for new utask */
+			uprobe = try_get_uprobe(uprobe);
+
 		/*
-		 * uprobe's refcnt has to be positive at this point, kept by
-		 * utask->return_instances items; return_instances can't be
-		 * removed right now, as task is blocked due to duping; so
-		 * get_uprobe() is safe to use here.
+		 * New utask will have a stable, properly refcounted uprobe,
+		 * or NULL. Even if we failed to get a refcounted uprobe, we
+		 * still need to preserve the full set of return_instances for
+		 * proper uretprobe handling and nesting in the forked task.
 		 */
-		get_uprobe(n->uprobe);
-		n->next = NULL;
+		hprobe_init_stable(&n->hprobe, uprobe);
 
-		*p = n;
+		n->next = NULL;
+		rcu_assign_pointer(*p, n);
 		p = &n->next;
+		n_utask->depth++;
 	}
 
@@ -1877,10 +2071,10 @@ static void cleanup_return_instances(struct uprobe_task *utask, bool chained,
 	enum rp_check ctx = chained ? RP_CHECK_CHAIN_CALL : RP_CHECK_CALL;
 
 	while (ri && !arch_uretprobe_is_alive(ri, ctx, regs)) {
-		ri = free_ret_instance(ri);
+		ri = free_ret_instance(ri, true /* cleanup_hprobe */);
 		utask->depth--;
 	}
-	utask->return_instances = ri;
+	rcu_assign_pointer(utask->return_instances, ri);
 }
 
 static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
@@ -1889,6 +2083,7 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 	struct uprobe_task *utask;
 	unsigned long orig_ret_vaddr, trampoline_vaddr;
 	bool chained;
+	int srcu_idx;
 
 	if (!get_xol_area())
 		return;
@@ -1904,10 +2099,6 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 		return;
 	}
 
-	/* we need to bump refcount to store uprobe in utask */
-	if (!try_get_uprobe(uprobe))
-		return;
-
 	ri = kmalloc(sizeof(struct return_instance), GFP_KERNEL);
 	if (!ri)
 		goto fail;
@@ -1937,20 +2128,36 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs)
 		}
 		orig_ret_vaddr = utask->return_instances->orig_ret_vaddr;
 	}
-	ri->uprobe = uprobe;
+
+	/* __srcu_read_lock() because SRCU lock survives switch to user space */
+	srcu_idx = __srcu_read_lock(&uretprobes_srcu);
+
 	ri->func = instruction_pointer(regs);
 	ri->stack = user_stack_pointer(regs);
 	ri->orig_ret_vaddr = orig_ret_vaddr;
 	ri->chained = chained;
 
	utask->depth++;
+
+	hprobe_init_leased(&ri->hprobe, uprobe, srcu_idx);
 	ri->next = utask->return_instances;
-	utask->return_instances = ri;
+	rcu_assign_pointer(utask->return_instances, ri);
+
+	/*
+	 * Don't reschedule if the timer is already active.
+	 * This way we have a guaranteed cap on the maximum timer period
+	 * (SRCU expiration duration), regardless of how long-lasting and
+	 * well-timed a chain of uretprobes user space might cause. At worst
+	 * we'll just have a few extra inconsequential refcount bumps, even
+	 * if we could, technically, get away with just an SRCU lock. On the
+	 * other hand, we get the timer expiration logic triggered and tested
+	 * regularly, even for very short-running uretprobes.
+	 */
+	if (!timer_pending(&utask->ri_timer))
+		mod_timer(&utask->ri_timer, jiffies + RI_TIMER_PERIOD);
 
 	return;
 fail:
 	kfree(ri);
-	put_uprobe(uprobe);
 }
 
 /* Prepare to single-step probed instruction out of line. */
@@ -2144,11 +2351,14 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 }
 
 static void
-handle_uretprobe_chain(struct return_instance *ri, struct pt_regs *regs)
+handle_uretprobe_chain(struct return_instance *ri, struct uprobe *uprobe, struct pt_regs *regs)
 {
-	struct uprobe *uprobe = ri->uprobe;
 	struct uprobe_consumer *uc;
 
+	/* all consumers unsubscribed meanwhile */
+	if (unlikely(!uprobe))
+		return;
+
 	rcu_read_lock_trace();
 	list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node, rcu_read_lock_trace_held()) {
 		if (uc->ret_handler)
@@ -2173,7 +2383,8 @@ void uprobe_handle_trampoline(struct pt_regs *regs)
 {
 	struct uprobe_task *utask;
 	struct return_instance *ri, *next;
-	bool valid;
+	struct uprobe *uprobe;
+	bool valid, under_rcu;
 
 	utask = current->utask;
 	if (!utask)
@@ -2203,21 +2414,24 @@ void uprobe_handle_trampoline(struct pt_regs *regs)
 			 * trampoline addresses on the stack are replaced with correct
 			 * original return addresses
 			 */
-			utask->return_instances = ri->next;
+			rcu_assign_pointer(utask->return_instances, ri->next);
+
+			uprobe = hprobe_consume(&ri->hprobe, &under_rcu);
 			if (valid)
-				handle_uretprobe_chain(ri, regs);
-			ri = free_ret_instance(ri);
+				handle_uretprobe_chain(ri, uprobe, regs);
+			hprobe_finalize(&ri->hprobe, uprobe, under_rcu);
+
+			/* We already took care of the hprobe, no need to waste more time on that. */
+			ri = free_ret_instance(ri, false /* !cleanup_hprobe */);
 			utask->depth--;
 		} while (ri != next);
 	} while (!valid);
 
-	utask->return_instances = ri;
 	return;
 
- sigill:
+sigill:
 	uprobe_warn(current, "handle uretprobe, sending SIGILL.");
 	force_sig(SIGILL);
-
 }
 
 bool __weak arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs)

From patchwork Tue Aug 13 04:29:14 2024
From: Andrii Nakryiko
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
    linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
    willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
    linux-mm@kvack.org, Andrii Nakryiko
Subject: [PATCH RFC v3 10/13] uprobes: implement SRCU-protected lifetime for
 single-stepped uprobe
Date: Mon, 12 Aug 2024 21:29:14 -0700
Message-ID: <20240813042917.506057-11-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

Similarly to how we SRCU-protect the uprobe instance (and avoid
refcounting it unnecessarily) while waiting for a return probe hit, use
the hprobe approach to do the same for a single-stepped uprobe. The same
hprobe_* primitives are used. We also reuse the ri_timer() callback to
expire both the pending single-stepped uprobe and the return instances.
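To make the hprobe_* calling convention concrete, here is a minimal
sketch; it is not part of the patch. release_hprobe_example() is a
hypothetical holder, and the struct hprobe layout is merely
reconstructed from the accessors in the previous patch (the real
definition lives earlier in the series and may differ):

/* Layout reconstructed from hprobe_init_leased()/hprobe_init_stable(). */
struct hprobe {
	int srcu_idx;		/* SRCU index while "leased", -1 once "stable" */
	struct uprobe *stable;	/* refcounted uprobe, or NULL */
	struct uprobe *leased;	/* SRCU-protected uprobe, or NULL */
};

/* Hypothetical holder: every consumer pairs consume with finalize. */
static void release_hprobe_example(struct hprobe *hprobe)
{
	struct uprobe *uprobe;
	bool under_rcu;

	/*
	 * Claim the uprobe exactly once; this races with ri_timer()'s
	 * hprobe_expire() over who gets to perform the SRCU unlock.
	 */
	uprobe = hprobe_consume(hprobe, &under_rcu);
	if (uprobe) {
		/* ... run ret_handlers / post-xol fixups here ... */
	}
	/* Drop SRCU if still leased, or the refcount if already stable. */
	hprobe_finalize(hprobe, uprobe, under_rcu);
}

Both uprobe_handle_trampoline() in the previous patch and
handle_singlestep() in the diff below follow exactly this
consume-then-finalize pairing.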
Signed-off-by: Andrii Nakryiko
---
 include/linux/uprobes.h |  4 +--
 kernel/events/uprobes.c | 56 ++++++++++++++++++++++++-----------------
 2 files changed, 35 insertions(+), 25 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 9a0aa0b2a5fe..cc9d3cb055b5 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -107,7 +107,7 @@ struct uprobe_task {
 		};
 	};
 
-	struct uprobe			*active_uprobe;
+	struct hprobe			active_hprobe;
 	struct timer_list		ri_timer;
 	unsigned long			xol_vaddr;
@@ -115,7 +115,7 @@ struct uprobe_task {
 	struct return_instance		*return_instances;
 	unsigned int			depth;
-};
+} ____cacheline_aligned;
 
 struct return_instance {
 	unsigned long		func;

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 26acd06871e6..713824c8ca77 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1860,11 +1860,16 @@ unsigned long __weak uprobe_get_swbp_addr(struct pt_regs *regs)
 	return instruction_pointer(regs) - UPROBE_SWBP_INSN_SIZE;
 }
 
+static bool utask_has_pending_sstep_uprobe(struct uprobe_task *utask)
+{
+	return utask->active_hprobe.stable != NULL;
+}
+
 unsigned long uprobe_get_trap_addr(struct pt_regs *regs)
 {
 	struct uprobe_task *utask = current->utask;
 
-	if (unlikely(utask && utask->active_uprobe))
+	if (unlikely(utask && utask_has_pending_sstep_uprobe(utask)))
 		return utask->vaddr;
 
 	return instruction_pointer(regs);
@@ -1893,14 +1898,17 @@ void uprobe_free_utask(struct task_struct *t)
 {
 	struct uprobe_task *utask = t->utask;
 	struct return_instance *ri;
+	struct uprobe *uprobe;
+	bool under_rcu;
 
 	if (!utask)
 		return;
 
 	timer_delete_sync(&utask->ri_timer);
 
-	if (utask->active_uprobe)
-		put_uprobe(utask->active_uprobe);
+	/* clean up pending single-stepped uprobe */
+	uprobe = hprobe_consume(&utask->active_hprobe, &under_rcu);
+	hprobe_finalize(&utask->active_hprobe, uprobe, under_rcu);
 
 	ri = utask->return_instances;
 	while (ri)
@@ -1924,6 +1932,8 @@ static void ri_timer(struct timer_list *timer)
 	/* RCU protects return_instance from freeing. */
 	guard(rcu)();
 
+	hprobe_expire(&utask->active_hprobe);
+
 	for_each_ret_instance_rcu(ri, utask->return_instances) {
 		hprobe_expire(&ri->hprobe);
 	}
@@ -2166,20 +2176,15 @@ pre_ssout(struct uprobe *uprobe, struct pt_regs *regs, unsigned long bp_vaddr)
 {
 	struct uprobe_task *utask;
 	unsigned long xol_vaddr;
-	int err;
+	int err, srcu_idx;
 
 	utask = get_utask();
 	if (!utask)
 		return -ENOMEM;
 
-	if (!try_get_uprobe(uprobe))
-		return -EINVAL;
-
 	xol_vaddr = xol_get_insn_slot(uprobe);
-	if (!xol_vaddr) {
-		err = -ENOMEM;
-		goto err_out;
-	}
+	if (!xol_vaddr)
+		return -ENOMEM;
 
 	utask->xol_vaddr = xol_vaddr;
 	utask->vaddr = bp_vaddr;
@@ -2187,15 +2192,18 @@ pre_ssout(struct uprobe *uprobe, struct pt_regs *regs, unsigned long bp_vaddr)
 	err = arch_uprobe_pre_xol(&uprobe->arch, regs);
 	if (unlikely(err)) {
 		xol_free_insn_slot(current);
-		goto err_out;
+		return err;
 	}
 
-	utask->active_uprobe = uprobe;
+	srcu_idx = __srcu_read_lock(&uretprobes_srcu);
+
+	hprobe_init_leased(&utask->active_hprobe, uprobe, srcu_idx);
 	utask->state = UTASK_SSTEP;
+
+	if (!timer_pending(&utask->ri_timer))
+		mod_timer(&utask->ri_timer, jiffies + RI_TIMER_PERIOD);
+
 	return 0;
-err_out:
-	put_uprobe(uprobe);
-	return err;
 }
 
 /*
@@ -2212,7 +2220,7 @@ bool uprobe_deny_signal(void)
 	struct task_struct *t = current;
 	struct uprobe_task *utask = t->utask;
 
-	if (likely(!utask || !utask->active_uprobe))
+	if (likely(!utask || !utask_has_pending_sstep_uprobe(utask)))
 		return false;
 
 	WARN_ON_ONCE(utask->state != UTASK_SSTEP);
@@ -2528,8 +2536,10 @@ static void handle_singlestep(struct uprobe_task *utask, struct pt_regs *regs)
 {
 	struct uprobe *uprobe;
 	int err = 0;
+	bool under_rcu;
+
+	uprobe = hprobe_consume(&utask->active_hprobe, &under_rcu);
 
-	uprobe = utask->active_uprobe;
 	if (utask->state == UTASK_SSTEP_ACK)
 		err = arch_uprobe_post_xol(&uprobe->arch, regs);
 	else if (utask->state == UTASK_SSTEP_TRAPPED)
@@ -2537,8 +2547,8 @@ static void handle_singlestep(struct uprobe_task *utask, struct pt_regs *regs)
 	else
 		WARN_ON_ONCE(1);
 
-	put_uprobe(uprobe);
-	utask->active_uprobe = NULL;
+	hprobe_finalize(&utask->active_hprobe, uprobe, under_rcu);
+
 	utask->state = UTASK_RUNNING;
 	xol_free_insn_slot(current);
 
@@ -2556,7 +2566,7 @@ static void handle_singlestep(struct uprobe_task *utask, struct pt_regs *regs)
 /*
  * On breakpoint hit, breakpoint notifier sets the TIF_UPROBE flag and
  * allows the thread to return from interrupt. After that handle_swbp()
- * sets utask->active_uprobe.
+ * sets utask->active_hprobe.
  *
  * On singlestep exception, singlestep notifier sets the TIF_UPROBE flag
  * and allows the thread to return from interrupt.
@@ -2571,7 +2581,7 @@ void uprobe_notify_resume(struct pt_regs *regs)
 	clear_thread_flag(TIF_UPROBE);
 
 	utask = current->utask;
-	if (utask && utask->active_uprobe)
+	if (utask && utask_has_pending_sstep_uprobe(utask))
 		handle_singlestep(utask, regs);
 	else
 		handle_swbp(regs);
@@ -2602,7 +2612,7 @@ int uprobe_post_sstep_notifier(struct pt_regs *regs)
 {
 	struct uprobe_task *utask = current->utask;
 
-	if (!current->mm || !utask || !utask->active_uprobe)
+	if (!current->mm || !utask || !utask_has_pending_sstep_uprobe(utask))
 		/* task is currently not uprobed */
 		return 0;

From patchwork Tue Aug 13 04:29:15 2024
From: Andrii Nakryiko
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
    linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
    willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
    linux-mm@kvack.org, Andrii Nakryiko
Subject: [PATCH RFC v3 11/13] mm: introduce mmap_lock_speculation_{start|end}
Date: Mon, 12 Aug 2024 21:29:15 -0700
Message-ID: <20240813042917.506057-12-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

From: Suren Baghdasaryan

Add helper functions to speculatively perform operations without
read-locking mmap_lock, expecting that mmap_lock will not be
write-locked and mm is not modified from under us.
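To make the intended pairing concrete before the diff, here is a
minimal usage sketch; it is not part of the patch, and
read_mm_state_locklessly() is a hypothetical stand-in for whatever
lockless reads a caller performs (the real consumer is the speculative
uprobe lookup in the last patch of this series):

static bool do_speculative_read(struct mm_struct *mm)
{
	int seq;

	/* Refuse to speculate if mmap_lock is write-locked (odd seq). */
	if (!mmap_lock_speculation_start(mm, &seq))
		return false;

	read_mm_state_locklessly(mm);	/* hypothetical, typically under RCU */

	/*
	 * If the sequence number changed, a writer grabbed mmap_lock in
	 * the meantime and our reads may be torn: the caller must throw
	 * them away and retry under mmap_read_lock().
	 */
	return mmap_lock_speculation_end(mm, seq);
}

The sequence number is bumped on both write-lock and write-unlock, so
an odd value means "writer active", and any change between start and
end means "a writer intervened".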
Signed-off-by: Suren Baghdasaryan
Suggested-by: Peter Zijlstra
Cc: Andrii Nakryiko
---
 include/linux/mm_types.h  |  3 +++
 include/linux/mmap_lock.h | 53 +++++++++++++++++++++++++++++++--------
 kernel/fork.c             |  3 ---
 3 files changed, 46 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 485424979254..d5e3f907eea4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -876,6 +876,9 @@ struct mm_struct {
 		 * Roughly speaking, incrementing the sequence number is
 		 * equivalent to releasing locks on VMAs; reading the sequence
 		 * number can be part of taking a read lock on a VMA.
+		 * Incremented every time mmap_lock is write-locked/unlocked.
+		 * Initialized to 0, therefore odd values indicate that mmap_lock
+		 * is write-locked and even values that it has been released.
 		 *
 		 * Can be modified under write mmap_lock using RELEASE
 		 * semantics.

diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index de9dc20b01ba..5410ce741d75 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -71,15 +71,12 @@ static inline void mmap_assert_write_locked(const struct mm_struct *mm)
 }
 
 #ifdef CONFIG_PER_VMA_LOCK
-/*
- * Drop all currently-held per-VMA locks.
- * This is called from the mmap_lock implementation directly before releasing
- * a write-locked mmap_lock (or downgrading it to read-locked).
- * This should normally NOT be called manually from other places.
- * If you want to call this manually anyway, keep in mind that this will release
- * *all* VMA write locks, including ones from further up the stack.
- */
-static inline void vma_end_write_all(struct mm_struct *mm)
+static inline void init_mm_lock_seq(struct mm_struct *mm)
+{
+	mm->mm_lock_seq = 0;
+}
+
+static inline void inc_mm_lock_seq(struct mm_struct *mm)
 {
 	mmap_assert_write_locked(mm);
 	/*
@@ -91,19 +88,52 @@ static inline void vma_end_write_all(struct mm_struct *mm)
 	 */
 	smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);
 }
+
+static inline bool mmap_lock_speculation_start(struct mm_struct *mm, int *seq)
+{
+	/* Pairs with RELEASE semantics in inc_mm_lock_seq(). */
+	*seq = smp_load_acquire(&mm->mm_lock_seq);
+	/* Allow speculation if mmap_lock is not write-locked */
+	return (*seq & 1) == 0;
+}
+
+static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq)
+{
+	/* Pairs with RELEASE semantics in inc_mm_lock_seq(). */
+	return seq == smp_load_acquire(&mm->mm_lock_seq);
+}
+
 #else
-static inline void vma_end_write_all(struct mm_struct *mm) {}
+static inline void init_mm_lock_seq(struct mm_struct *mm) {}
+static inline void inc_mm_lock_seq(struct mm_struct *mm) {}
+static inline bool mmap_lock_speculation_start(struct mm_struct *mm, int *seq) { return false; }
+static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq) { return false; }
 #endif
 
+/*
+ * Drop all currently-held per-VMA locks.
+ * This is called from the mmap_lock implementation directly before releasing
+ * a write-locked mmap_lock (or downgrading it to read-locked).
+ * This should normally NOT be called manually from other places.
+ * If you want to call this manually anyway, keep in mind that this will release
+ * *all* VMA write locks, including ones from further up the stack.
+ */
+static inline void vma_end_write_all(struct mm_struct *mm)
+{
+	inc_mm_lock_seq(mm);
+}
+
 static inline void mmap_init_lock(struct mm_struct *mm)
 {
 	init_rwsem(&mm->mmap_lock);
+	init_mm_lock_seq(mm);
 }
 
 static inline void mmap_write_lock(struct mm_struct *mm)
 {
 	__mmap_lock_trace_start_locking(mm, true);
 	down_write(&mm->mmap_lock);
+	inc_mm_lock_seq(mm);
 	__mmap_lock_trace_acquire_returned(mm, true, true);
 }
 
@@ -111,6 +141,7 @@ static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
 {
 	__mmap_lock_trace_start_locking(mm, true);
 	down_write_nested(&mm->mmap_lock, subclass);
+	inc_mm_lock_seq(mm);
 	__mmap_lock_trace_acquire_returned(mm, true, true);
 }
 
@@ -120,6 +151,8 @@ static inline int mmap_write_lock_killable(struct mm_struct *mm)
 
 	__mmap_lock_trace_start_locking(mm, true);
 	ret = down_write_killable(&mm->mmap_lock);
+	if (!ret)
+		inc_mm_lock_seq(mm);
 	__mmap_lock_trace_acquire_returned(mm, true, ret == 0);
 	return ret;
 }

diff --git a/kernel/fork.c b/kernel/fork.c
index cc760491f201..76ebafb956a6 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1259,9 +1259,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	seqcount_init(&mm->write_protect_seq);
 	mmap_init_lock(mm);
 	INIT_LIST_HEAD(&mm->mmlist);
-#ifdef CONFIG_PER_VMA_LOCK
-	mm->mm_lock_seq = 0;
-#endif
 	mm_pgtables_bytes_init(mm);
 	mm->map_count = 0;
 	mm->locked_vm = 0;

From patchwork Tue Aug 13 04:29:16 2024
From: Andrii Nakryiko
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
    linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
    willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
    linux-mm@kvack.org, Andrii Nakryiko
Subject: [PATCH RFC v3 12/13] mm: add SLAB_TYPESAFE_BY_RCU to files_cache
Date: Mon, 12 Aug 2024 21:29:16 -0700
Message-ID: <20240813042917.506057-13-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

Add RCU protection for the file struct's backing memory by adding the
SLAB_TYPESAFE_BY_RCU flag to files_cachep. This will allow us to
locklessly access struct file's fields under RCU protection, without
having to take much more expensive and contended locks. This is going
to be used for lockless uprobe lookup in the next patch.

Suggested-by: Matthew Wilcox
Signed-off-by: Andrii Nakryiko
---
 kernel/fork.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index 76ebafb956a6..91ecc32a491c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -3157,8 +3157,8 @@ void __init proc_caches_init(void)
 			NULL);
 	files_cachep = kmem_cache_create("files_cache",
 			sizeof(struct files_struct), 0,
-			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
-			NULL);
+			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
+			SLAB_ACCOUNT, NULL);
 	fs_cachep = kmem_cache_create("fs_cache",
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
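As an aside on what SLAB_TYPESAFE_BY_RCU does and does not guarantee,
consider this sketch (not part of the patch; peek_inode() is a
hypothetical helper). Under rcu_read_lock(), the memory of a freed
struct file cannot go back to the page allocator, but it can be reused
for a different file, so values read through it are only hints until
revalidated; in this series the revalidation is the mmap_lock
sequence-count check:

static struct inode *peek_inode(struct vm_area_struct *vma)
{
	struct file *file;
	struct inode *inode = NULL;

	rcu_read_lock();
	/* Type-safe memory: 'file' may be freed and reused, never invalid. */
	file = data_race(vma->vm_file);
	if (file)
		inode = data_race(file->f_inode);	/* possibly stale value */
	rcu_read_unlock();

	/* Only a hint: the caller must revalidate before trusting it. */
	return inode;
}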
From patchwork Tue Aug 13 04:29:17 2024
From: Andrii Nakryiko
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
    linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
    willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
    linux-mm@kvack.org, Andrii Nakryiko
Subject: [PATCH RFC v3 13/13] uprobes: add speculative lockless VMA to inode
 resolution
Date: Mon, 12 Aug 2024 21:29:17 -0700
Message-ID: <20240813042917.506057-14-andrii@kernel.org>
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>

Now that files_cachep is SLAB_TYPESAFE_BY_RCU, we can safely access
vma->vm_file->f_inode locklessly, under only rcu_read_lock() protection,
attempting the uprobe lookup speculatively. We rely on the newly added
mmap_lock_speculation_{start,end}() helpers to validate that mm_struct
stays intact for the entire duration of this speculation. If it doesn't,
we fall back to the mmap_lock-protected lookup. This allows us to avoid
contention on mmap_lock in the absolute majority of cases, nicely
improving uprobe/uretprobe scalability.
BEFORE
======
uprobe-nop      ( 1 cpus):    3.417 ± 0.013M/s  (  3.417M/s/cpu)
uprobe-nop      ( 2 cpus):    5.724 ± 0.006M/s  (  2.862M/s/cpu)
uprobe-nop      ( 3 cpus):    8.543 ± 0.012M/s  (  2.848M/s/cpu)
uprobe-nop      ( 4 cpus):   11.094 ± 0.004M/s  (  2.774M/s/cpu)
uprobe-nop      ( 5 cpus):   13.703 ± 0.006M/s  (  2.741M/s/cpu)
uprobe-nop      ( 6 cpus):   16.350 ± 0.010M/s  (  2.725M/s/cpu)
uprobe-nop      ( 7 cpus):   19.100 ± 0.031M/s  (  2.729M/s/cpu)
uprobe-nop      ( 8 cpus):   20.138 ± 0.029M/s  (  2.517M/s/cpu)
uprobe-nop      (10 cpus):   20.161 ± 0.020M/s  (  2.016M/s/cpu)
uprobe-nop      (12 cpus):   15.129 ± 0.011M/s  (  1.261M/s/cpu)
uprobe-nop      (14 cpus):   15.013 ± 0.013M/s  (  1.072M/s/cpu)
uprobe-nop      (16 cpus):   13.352 ± 0.007M/s  (  0.834M/s/cpu)
uprobe-nop      (24 cpus):   12.470 ± 0.005M/s  (  0.520M/s/cpu)
uprobe-nop      (32 cpus):   11.252 ± 0.042M/s  (  0.352M/s/cpu)
uprobe-nop      (40 cpus):   10.308 ± 0.001M/s  (  0.258M/s/cpu)
uprobe-nop      (48 cpus):   11.037 ± 0.007M/s  (  0.230M/s/cpu)
uprobe-nop      (56 cpus):   12.055 ± 0.002M/s  (  0.215M/s/cpu)
uprobe-nop      (64 cpus):   12.895 ± 0.004M/s  (  0.201M/s/cpu)
uprobe-nop      (72 cpus):   13.995 ± 0.005M/s  (  0.194M/s/cpu)
uprobe-nop      (80 cpus):   15.224 ± 0.030M/s  (  0.190M/s/cpu)

AFTER
=====
uprobe-nop      ( 1 cpus):    3.562 ± 0.006M/s  (  3.562M/s/cpu)
uprobe-nop      ( 2 cpus):    6.751 ± 0.007M/s  (  3.376M/s/cpu)
uprobe-nop      ( 3 cpus):   10.121 ± 0.007M/s  (  3.374M/s/cpu)
uprobe-nop      ( 4 cpus):   13.100 ± 0.007M/s  (  3.275M/s/cpu)
uprobe-nop      ( 5 cpus):   16.321 ± 0.008M/s  (  3.264M/s/cpu)
uprobe-nop      ( 6 cpus):   19.612 ± 0.004M/s  (  3.269M/s/cpu)
uprobe-nop      ( 7 cpus):   22.910 ± 0.037M/s  (  3.273M/s/cpu)
uprobe-nop      ( 8 cpus):   24.705 ± 0.011M/s  (  3.088M/s/cpu)
uprobe-nop      (10 cpus):   30.772 ± 0.020M/s  (  3.077M/s/cpu)
uprobe-nop      (12 cpus):   33.614 ± 0.009M/s  (  2.801M/s/cpu)
uprobe-nop      (14 cpus):   39.166 ± 0.004M/s  (  2.798M/s/cpu)
uprobe-nop      (16 cpus):   41.692 ± 0.014M/s  (  2.606M/s/cpu)
uprobe-nop      (24 cpus):   64.802 ± 0.048M/s  (  2.700M/s/cpu)
uprobe-nop      (32 cpus):   84.226 ± 0.223M/s  (  2.632M/s/cpu)
uprobe-nop      (40 cpus):  102.071 ± 0.067M/s  (  2.552M/s/cpu)
uprobe-nop      (48 cpus):  106.603 ± 1.198M/s  (  2.221M/s/cpu)
uprobe-nop      (56 cpus):  117.695 ± 0.059M/s  (  2.102M/s/cpu)
uprobe-nop      (64 cpus):  124.291 ± 0.485M/s  (  1.942M/s/cpu)
uprobe-nop      (72 cpus):  135.527 ± 0.134M/s  (  1.882M/s/cpu)
uprobe-nop      (80 cpus):  146.195 ± 0.230M/s  (  1.827M/s/cpu)

Previously, total throughput was maxing out at 20 mln/s with 8-10
cores, declining afterwards. With this change it now keeps growing with
each added CPU, reaching 146 mln/s at 80 CPUs (measured on an 80-core
Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz).
Suggested-by: Matthew Wilcox
Signed-off-by: Andrii Nakryiko
---
 kernel/events/uprobes.c | 51 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 713824c8ca77..12f3edf2ffb1 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2286,6 +2286,53 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
 	return is_trap_insn(&opcode);
 }
 
+static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
+{
+	const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
+	struct mm_struct *mm = current->mm;
+	struct uprobe *uprobe;
+	struct vm_area_struct *vma;
+	struct file *vm_file;
+	struct inode *vm_inode;
+	unsigned long vm_pgoff, vm_start;
+	int seq;
+	loff_t offset;
+
+	if (!mmap_lock_speculation_start(mm, &seq))
+		return NULL;
+
+	rcu_read_lock();
+
+	vma = vma_lookup(mm, bp_vaddr);
+	if (!vma)
+		goto bail;
+
+	vm_file = data_race(vma->vm_file);
+	if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
+		goto bail;
+
+	vm_inode = data_race(vm_file->f_inode);
+	vm_pgoff = data_race(vma->vm_pgoff);
+	vm_start = data_race(vma->vm_start);
+
+	offset = (loff_t)(vm_pgoff << PAGE_SHIFT) + (bp_vaddr - vm_start);
+	uprobe = find_uprobe_rcu(vm_inode, offset);
+	if (!uprobe)
+		goto bail;
+
+	/* now double check that nothing about MM changed */
+	if (!mmap_lock_speculation_end(mm, seq))
+		goto bail;
+
+	rcu_read_unlock();
+
+	/* happy case, we speculated successfully */
+	return uprobe;
+bail:
+	rcu_read_unlock();
+	return NULL;
+}
+
 /* assumes being inside RCU protected region */
 static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swbp)
 {
@@ -2293,6 +2340,10 @@ static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swbp)
 	struct uprobe *uprobe = NULL;
 	struct vm_area_struct *vma;
 
+	uprobe = find_active_uprobe_speculative(bp_vaddr);
+	if (uprobe)
+		return uprobe;
+
 	mmap_read_lock(mm);
 	vma = vma_lookup(mm, bp_vaddr);
 	if (vma) {