From patchwork Tue Aug 13 04:29:08 2024
X-Patchwork-Submitter: Andrii Nakryiko
X-Patchwork-Id: 13761246
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, oleg@redhat.com
Cc: rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
    linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
    willy@infradead.org, surenb@google.com, akpm@linux-foundation.org,
    linux-mm@kvack.org, Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH v3 04/13] uprobes: traverse uprobe's consumer list locklessly under SRCU protection
Date: Mon, 12 Aug 2024 21:29:08 -0700
Message-ID: <20240813042917.506057-5-andrii@kernel.org>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20240813042917.506057-1-andrii@kernel.org>
References: <20240813042917.506057-1-andrii@kernel.org>
MIME-Version: 1.0

uprobe->register_rwsem is one of a few big bottlenecks to scalability of
uprobes, so we need to get rid of it to improve uprobe performance and
multi-CPU scalability.

First, we turn uprobe's consumer list into a typical doubly-linked list
and utilize existing RCU-aware helpers for traversing such lists, as well
as for adding and removing elements from them.

For entry uprobes we already have SRCU protection active since before
uprobe lookup. For uretprobes we keep a refcount, guaranteeing that the
uprobe won't go away from under us, but we add SRCU protection around
the consumer list traversal as well.

Lastly, to keep handler_chain()'s UPROBE_HANDLER_REMOVE handling simple,
we remember whether any removal was requested during handler calls, and
then double-check that decision under a proper register_rwsem using the
consumers' filter callbacks. Handler removal is very rare, so this extra
lock won't hurt overall performance, and it lets us avoid any extra
protection (e.g., seqcount locks).
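To make the scheme easier to follow, here is a minimal, self-contained
sketch of the SRCU-protected list pattern this patch adopts. It is
illustrative only and not part of the patch; the my_consumer,
my_consumers, my_srcu and my_lock names are made up for the example.

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rculist.h>
#include <linux/srcu.h>

/* illustrative-only names, not part of this patch */
DEFINE_STATIC_SRCU(my_srcu);
static DEFINE_MUTEX(my_lock);
static LIST_HEAD(my_consumers);

struct my_consumer {
	struct list_head node;
	void (*handler)(struct my_consumer *c);
};

/* writers serialize among themselves, but never block readers */
static void my_consumer_add(struct my_consumer *c)
{
	mutex_lock(&my_lock);
	list_add_rcu(&c->node, &my_consumers);
	mutex_unlock(&my_lock);
}

static void my_consumer_del(struct my_consumer *c)
{
	mutex_lock(&my_lock);
	list_del_rcu(&c->node);
	mutex_unlock(&my_lock);
	/* wait for in-flight readers before the caller may free *c */
	synchronize_srcu(&my_srcu);
}

/* readers traverse locklessly; SRCU permits sleeping in handlers */
static void my_call_handlers(void)
{
	struct my_consumer *c;
	int srcu_idx;

	srcu_idx = srcu_read_lock(&my_srcu);
	list_for_each_entry_srcu(c, &my_consumers, node,
				 srcu_read_lock_held(&my_srcu))
		c->handler(c);
	srcu_read_unlock(&my_srcu, srcu_idx);
}

The same idea underlies handler_chain()'s removal handling below: the
handler loop runs locklessly under SRCU and only records that a removal
was requested; the actual unapply decision is then re-validated under
register_rwsem via the consumers' filter callbacks.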
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/uprobes.h |   2 +-
 kernel/events/uprobes.c | 111 ++++++++++++++++++++++------------------
 2 files changed, 61 insertions(+), 52 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 9cf0dce62e4c..29c935b0d504 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -35,7 +35,7 @@ struct uprobe_consumer {
 				struct pt_regs *regs);
 	bool			(*filter)(struct uprobe_consumer *self, struct mm_struct *mm);
 
-	struct uprobe_consumer	*next;
+	struct list_head	cons_node;
 };
 
 #ifdef CONFIG_UPROBES
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 8bdcdc6901b2..7de1aaf50394 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -59,7 +59,7 @@ struct uprobe {
 	struct rw_semaphore	register_rwsem;
 	struct rw_semaphore	consumer_rwsem;
 	struct list_head	pending_list;
-	struct uprobe_consumer	*consumers;
+	struct list_head	consumers;
 	struct inode		*inode;		/* Also hold a ref to inode */
 	struct rcu_head		rcu;
 	loff_t			offset;
@@ -783,6 +783,7 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
 	uprobe->inode = inode;
 	uprobe->offset = offset;
 	uprobe->ref_ctr_offset = ref_ctr_offset;
+	INIT_LIST_HEAD(&uprobe->consumers);
 	init_rwsem(&uprobe->register_rwsem);
 	init_rwsem(&uprobe->consumer_rwsem);
 	RB_CLEAR_NODE(&uprobe->rb_node);
@@ -808,34 +809,10 @@ static struct uprobe *alloc_uprobe(struct inode *inode, loff_t offset,
 static void consumer_add(struct uprobe *uprobe, struct uprobe_consumer *uc)
 {
 	down_write(&uprobe->consumer_rwsem);
-	uc->next = uprobe->consumers;
-	uprobe->consumers = uc;
+	list_add_rcu(&uc->cons_node, &uprobe->consumers);
 	up_write(&uprobe->consumer_rwsem);
 }
 
-/*
- * For uprobe @uprobe, delete the consumer @uc.
- * Return true if the @uc is deleted successfully
- * or return false.
- */
-static bool consumer_del(struct uprobe *uprobe, struct uprobe_consumer *uc)
-{
-	struct uprobe_consumer **con;
-	bool ret = false;
-
-	down_write(&uprobe->consumer_rwsem);
-	for (con = &uprobe->consumers; *con; con = &(*con)->next) {
-		if (*con == uc) {
-			*con = uc->next;
-			ret = true;
-			break;
-		}
-	}
-	up_write(&uprobe->consumer_rwsem);
-
-	return ret;
-}
-
 static int __copy_insn(struct address_space *mapping, struct file *filp,
 			void *insn, int nbytes, loff_t offset)
 {
@@ -929,7 +906,8 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
 	bool ret = false;
 
 	down_read(&uprobe->consumer_rwsem);
-	for (uc = uprobe->consumers; uc; uc = uc->next) {
+	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+				 srcu_read_lock_held(&uprobes_srcu)) {
 		ret = consumer_filter(uc, mm);
 		if (ret)
 			break;
@@ -1125,18 +1103,31 @@ void uprobe_unregister(struct uprobe *uprobe, struct uprobe_consumer *uc)
 	int err;
 
 	down_write(&uprobe->register_rwsem);
-	if (WARN_ON(!consumer_del(uprobe, uc))) {
-		err = -ENOENT;
-	} else {
-		err = register_for_each_vma(uprobe, NULL);
-		/* TODO : cant unregister? schedule a worker thread */
-		if (unlikely(err))
-			uprobe_warn(current, "unregister, leaking uprobe");
-	}
+
+	list_del_rcu(&uc->cons_node);
+	err = register_for_each_vma(uprobe, NULL);
+
 	up_write(&uprobe->register_rwsem);
 
-	if (!err)
-		put_uprobe(uprobe);
+	/* TODO : cant unregister? schedule a worker thread */
+	if (unlikely(err)) {
+		uprobe_warn(current, "unregister, leaking uprobe");
+		goto out_sync;
+	}
+
+	put_uprobe(uprobe);
+
+out_sync:
+	/*
+	 * Now that handler_chain() and handle_uretprobe_chain() iterate over
+	 * uprobe->consumers list under RCU protection without holding
+	 * uprobe->register_rwsem, we need to wait for RCU grace period to
+	 * make sure that we can't call into just unregistered
+	 * uprobe_consumer's callbacks anymore. If we don't do that, fast and
+	 * unlucky enough caller can free consumer's memory and cause
+	 * handler_chain() or handle_uretprobe_chain() to do an use-after-free.
+	 */
+	synchronize_srcu(&uprobes_srcu);
 }
 EXPORT_SYMBOL_GPL(uprobe_unregister);
 
@@ -1214,13 +1205,20 @@ EXPORT_SYMBOL_GPL(uprobe_register);
 int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool add)
 {
 	struct uprobe_consumer *con;
-	int ret = -ENOENT;
+	int ret = -ENOENT, srcu_idx;
 
 	down_write(&uprobe->register_rwsem);
-	for (con = uprobe->consumers; con && con != uc ; con = con->next)
-		;
-	if (con)
-		ret = register_for_each_vma(uprobe, add ? uc : NULL);
+
+	srcu_idx = srcu_read_lock(&uprobes_srcu);
+	list_for_each_entry_srcu(con, &uprobe->consumers, cons_node,
+				 srcu_read_lock_held(&uprobes_srcu)) {
+		if (con == uc) {
+			ret = register_for_each_vma(uprobe, add ? uc : NULL);
+			break;
+		}
+	}
+	srcu_read_unlock(&uprobes_srcu, srcu_idx);
+
 	up_write(&uprobe->register_rwsem);
 
 	return ret;
@@ -2085,10 +2083,12 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 	struct uprobe_consumer *uc;
 	int remove = UPROBE_HANDLER_REMOVE;
 	bool need_prep = false; /* prepare return uprobe, when needed */
+	bool has_consumers = false;
 
-	down_read(&uprobe->register_rwsem);
 	current->utask->auprobe = &uprobe->arch;
-	for (uc = uprobe->consumers; uc; uc = uc->next) {
+
+	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+				 srcu_read_lock_held(&uprobes_srcu)) {
 		int rc = 0;
 
 		if (uc->handler) {
@@ -2101,17 +2101,24 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 			need_prep = true;
 
 		remove &= rc;
+		has_consumers = true;
 	}
 	current->utask->auprobe = NULL;
 
 	if (need_prep && !remove)
 		prepare_uretprobe(uprobe, regs); /* put bp at return */
 
-	if (remove && uprobe->consumers) {
-		WARN_ON(!uprobe_is_active(uprobe));
-		unapply_uprobe(uprobe, current->mm);
+	if (remove && has_consumers) {
+		down_read(&uprobe->register_rwsem);
+
+		/* re-check that removal is still required, this time under lock */
+		if (!filter_chain(uprobe, current->mm)) {
+			WARN_ON(!uprobe_is_active(uprobe));
+			unapply_uprobe(uprobe, current->mm);
+		}
+
+		up_read(&uprobe->register_rwsem);
 	}
-	up_read(&uprobe->register_rwsem);
 }
 
 static void
@@ -2119,13 +2126,15 @@ handle_uretprobe_chain(struct return_instance *ri, struct pt_regs *regs)
 {
 	struct uprobe *uprobe = ri->uprobe;
 	struct uprobe_consumer *uc;
+	int srcu_idx;
 
-	down_read(&uprobe->register_rwsem);
-	for (uc = uprobe->consumers; uc; uc = uc->next) {
+	srcu_idx = srcu_read_lock(&uprobes_srcu);
+	list_for_each_entry_srcu(uc, &uprobe->consumers, cons_node,
+				 srcu_read_lock_held(&uprobes_srcu)) {
 		if (uc->ret_handler)
 			uc->ret_handler(uc, ri->func, regs);
 	}
-	up_read(&uprobe->register_rwsem);
+	srcu_read_unlock(&uprobes_srcu, srcu_idx);
 }
 
 static struct return_instance *find_next_ret_chain(struct return_instance *ri)