From patchwork Fri Dec 6 00:24:14 2024
X-Patchwork-Submitter: Andrii Nakryiko <andrii@kernel.org>
X-Patchwork-Id: 13896150
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, mingo@kernel.org
Cc: oleg@redhat.com, rostedt@goodmis.org, mhiramat@kernel.org,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org,
    liaochang1@huawei.com, kernel-team@meta.com,
    Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH perf/core 1/4] uprobes: simplify session consumer tracking
Date: Thu, 5 Dec 2024 16:24:14 -0800
Message-ID: <20241206002417.3295533-2-andrii@kernel.org>
In-Reply-To: <20241206002417.3295533-1-andrii@kernel.org>
References: <20241206002417.3295533-1-andrii@kernel.org>

In practice, each return_instance will typically contain either zero or one
return_consumer, depending on whether it has any uprobe session consumer
attached or not. It's highly unlikely that more than one uprobe session
consumer will be attached to any given uprobe, so there is no need to
optimize for that case. But the way we currently do memory allocation and
accounting is by pre-allocating space for 4 session consumers in a
contiguous block of memory next to the fixed part of struct return_instance.
This is unnecessarily wasteful.
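The replacement scheme below is the classic "one inline slot plus an
overflow array" small-size optimization. As a rough userspace sketch of just
the storage layout and append logic (struct and function names here are made
up for illustration, and plain realloc() stands in for the kernel's
krealloc()):

    #include <stdlib.h>

    struct consumer { unsigned long long id, cookie; };

    struct instance {
            int cnt;                /* total consumers stored so far */
            struct consumer first;  /* inline slot covering the common case */
            struct consumer *extra; /* (cnt - 1) spill-over entries, or NULL */
    };

    /* Append a consumer; only the 2nd and later ones ever touch the heap. */
    static int push(struct instance *ri, struct consumer c)
    {
            if (ri->cnt == 0) {
                    ri->first = c;
            } else {
                    struct consumer *p = realloc(ri->extra, sizeof(*p) * ri->cnt);

                    if (!p)
                            return -1;
                    p[ri->cnt - 1] = c;
                    ri->extra = p;
            }
            ri->cnt++;
            return 0;
    }

With zero or one consumer, which is the overwhelmingly common case, the
object stays a single fixed-size allocation; the extra array is only ever
allocated in the rare multi-session-consumer case.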
This patch changes this to keep struct return_instance fixed-sized, with one
pre-allocated return_consumer, while (in a highly unlikely scenario) allowing
for more session consumers in a separate, dynamically allocated and
reallocated array.

We also simplify accounting a bit by not maintaining a separate temporary
capacity for the consumers array, and instead relying on krealloc() to be a
no-op if the underlying memory can accommodate a slightly bigger allocation
(but, again, having to do this reallocation at all is a very uncommon
scenario). All this gets rid of ri_size(), simplifies push_consumer(), and
removes the confusing ri->consumers_cnt re-assignment, while keeping the
singular pre-allocated consumer logic contained within a few simple
preexisting helpers.

Having a fixed-sized struct return_instance also simplifies and speeds up
the return_instance reuse that we ultimately add later in this patch set,
see the follow-up patches.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/uprobes.h | 10 ++++--
 kernel/events/uprobes.c | 72 +++++++++++++++++++++--------------------
 2 files changed, 45 insertions(+), 37 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index e0a4c2082245..1d449978558d 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -154,12 +154,18 @@ struct return_instance {
 	unsigned long	stack;		/* stack pointer */
 	unsigned long	orig_ret_vaddr;	/* original return address */
 	bool		chained;	/* true, if instance is nested */
-	int		consumers_cnt;
+	int		cons_cnt;	/* total number of session consumers */
 
 	struct return_instance	*next;	/* keep as stack */
 	struct rcu_head		rcu;
 
-	struct return_consumer	consumers[] __counted_by(consumers_cnt);
+	/* singular pre-allocated return_consumer instance for common case */
+	struct return_consumer	consumer;
+	/*
+	 * extra return_consumer instances for rare cases of multiple session consumers,
+	 * contains (cons_cnt - 1) elements
+	 */
+	struct return_consumer	*extra_consumers;
 } ____cacheline_aligned;
 
 enum rp_check {
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index daf4314961ab..6beac52239be 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1899,6 +1899,7 @@ static struct return_instance *free_ret_instance(struct return_instance *ri, bool cleanup_hprobe)
 		hprobe_finalize(&ri->hprobe, hstate);
 	}
 
+	kfree(ri->extra_consumers);
 	kfree_rcu(ri, rcu);
 	return next;
 }
@@ -1974,32 +1975,34 @@ static struct uprobe_task *get_utask(void)
 	return current->utask;
 }
 
-static size_t ri_size(int consumers_cnt)
-{
-	struct return_instance *ri;
-
-	return sizeof(*ri) + sizeof(ri->consumers[0]) * consumers_cnt;
-}
-
-#define DEF_CNT 4
-
 static struct return_instance *alloc_return_instance(void)
 {
 	struct return_instance *ri;
 
-	ri = kzalloc(ri_size(DEF_CNT), GFP_KERNEL);
+	ri = kzalloc(sizeof(*ri), GFP_KERNEL);
 	if (!ri)
 		return ZERO_SIZE_PTR;
 
-	ri->consumers_cnt = DEF_CNT;
 	return ri;
 }
 
 static struct return_instance *dup_return_instance(struct return_instance *old)
 {
-	size_t size = ri_size(old->consumers_cnt);
+	struct return_instance *ri;
+
+	ri = kmemdup(old, sizeof(*ri), GFP_KERNEL);
+
+	if (unlikely(old->cons_cnt > 1)) {
+		ri->extra_consumers = kmemdup(old->extra_consumers,
+					      sizeof(ri->extra_consumers[0]) * (old->cons_cnt - 1),
+					      GFP_KERNEL);
+		if (!ri->extra_consumers) {
+			kfree(ri);
+			return NULL;
+		}
+	}
 
-	return kmemdup(old, size, GFP_KERNEL);
+	return ri;
 }
 
 static int dup_utask(struct task_struct *t, struct uprobe_task *o_utask)
@@ -2369,25 +2372,28 @@ static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swbp)
 	return uprobe;
 }
 
-static struct return_instance*
-push_consumer(struct return_instance *ri, int idx, __u64 id, __u64 cookie)
+static struct return_instance *push_consumer(struct return_instance *ri, __u64 id, __u64 cookie)
 {
+	struct return_consumer *ric;
+
 	if (unlikely(ri == ZERO_SIZE_PTR))
 		return ri;
 
-	if (unlikely(idx >= ri->consumers_cnt)) {
-		struct return_instance *old_ri = ri;
-
-		ri->consumers_cnt += DEF_CNT;
-		ri = krealloc(old_ri, ri_size(old_ri->consumers_cnt), GFP_KERNEL);
-		if (!ri) {
-			kfree(old_ri);
+	if (unlikely(ri->cons_cnt > 0)) {
+		ric = krealloc(ri->extra_consumers, sizeof(*ric) * ri->cons_cnt, GFP_KERNEL);
+		if (!ric) {
+			kfree(ri->extra_consumers);
+			kfree_rcu(ri, rcu);
 			return ZERO_SIZE_PTR;
 		}
+		ri->extra_consumers = ric;
 	}
 
-	ri->consumers[idx].id = id;
-	ri->consumers[idx].cookie = cookie;
+	ric = likely(ri->cons_cnt == 0) ? &ri->consumer : &ri->extra_consumers[ri->cons_cnt - 1];
+	ric->id = id;
+	ric->cookie = cookie;
+
+	ri->cons_cnt++;
 	return ri;
 }
 
@@ -2395,14 +2401,17 @@ static struct return_consumer *
 return_consumer_find(struct return_instance *ri, int *iter, int id)
 {
 	struct return_consumer *ric;
-	int idx = *iter;
+	int idx;
 
-	for (ric = &ri->consumers[idx]; idx < ri->consumers_cnt; idx++, ric++) {
+	for (idx = *iter; idx < ri->cons_cnt; idx++) {
+		ric = likely(idx == 0) ? &ri->consumer : &ri->extra_consumers[idx - 1];
 		if (ric->id == id) {
 			*iter = idx + 1;
 			return ric;
 		}
 	}
+
 	return NULL;
 }
 
@@ -2416,7 +2425,6 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 	struct uprobe_consumer *uc;
 	bool has_consumers = false, remove = true;
 	struct return_instance *ri = NULL;
-	int push_idx = 0;
 
 	current->utask->auprobe = &uprobe->arch;
 
@@ -2441,18 +2449,12 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 			ri = alloc_return_instance();
 
 		if (session)
-			ri = push_consumer(ri, push_idx++, uc->id, cookie);
+			ri = push_consumer(ri, uc->id, cookie);
 	}
 	current->utask->auprobe = NULL;
 
-	if (!ZERO_OR_NULL_PTR(ri)) {
-		/*
-		 * The push_idx value has the final number of return consumers,
-		 * and ri->consumers_cnt has number of allocated consumers.
-		 */
-		ri->consumers_cnt = push_idx;
+	if (!ZERO_OR_NULL_PTR(ri))
 		prepare_uretprobe(uprobe, regs, ri);
-	}
 
 	if (remove && has_consumers) {
 		down_read(&uprobe->register_rwsem);

From patchwork Fri Dec 6 00:24:15 2024
X-Patchwork-Submitter: Andrii Nakryiko <andrii@kernel.org>
X-Patchwork-Id: 13896151
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, mingo@kernel.org
Cc: oleg@redhat.com, rostedt@goodmis.org, mhiramat@kernel.org,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org,
    liaochang1@huawei.com, kernel-team@meta.com,
    Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH perf/core 2/4] uprobes: decouple return_instance list traversal and freeing
Date: Thu, 5 Dec 2024 16:24:15 -0800
Message-ID: <20241206002417.3295533-3-andrii@kernel.org>
In-Reply-To: <20241206002417.3295533-1-andrii@kernel.org>
References: <20241206002417.3295533-1-andrii@kernel.org>

free_ret_instance() has two unrelated responsibilities: actually cleaning up
return_instance's resources and freeing memory, and also helping with
utask->return_instances list traversal by returning the next alive pointer.
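The decoupled shape is the usual "remember the next pointer before freeing"
list walk. A minimal userspace sketch of the resulting structure
(illustrative names, plain free() instead of the kernel's RCU-deferred
freeing):

    #include <stdlib.h>

    struct node { struct node *next; };

    /* free_one() no longer returns ->next; each caller saves it first */
    static void free_one(struct node *n)
    {
            free(n);
    }

    static void free_all(struct node *head)
    {
            struct node *n = head, *n_next;

            while (n) {
                    n_next = n->next; /* grab the link before n is freed */
                    free_one(n);
                    n = n_next;
            }
    }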
There is no reason why these two aspects have to be mixed together, so turn
free_ret_instance() into a void-returning function and make callers do list
traversal on their own.

We'll use this simplification in the next patch, which will guarantee that a
to-be-freed return_instance isn't reachable from the utask->return_instances
list.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/events/uprobes.c | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 6beac52239be..cca1fe4a3fb1 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1888,10 +1888,8 @@ unsigned long uprobe_get_trap_addr(struct pt_regs *regs)
 	return instruction_pointer(regs);
 }
 
-static struct return_instance *free_ret_instance(struct return_instance *ri, bool cleanup_hprobe)
+static void free_ret_instance(struct return_instance *ri, bool cleanup_hprobe)
 {
-	struct return_instance *next = ri->next;
-
 	if (cleanup_hprobe) {
 		enum hprobe_state hstate;
 
@@ -1901,7 +1899,6 @@ static struct return_instance *free_ret_instance(struct return_instance *ri, bool cleanup_hprobe)
 
 	kfree(ri->extra_consumers);
 	kfree_rcu(ri, rcu);
-	return next;
 }
 
 /*
@@ -1911,7 +1908,7 @@ static struct return_instance *free_ret_instance(struct return_instance *ri, bool cleanup_hprobe)
 void uprobe_free_utask(struct task_struct *t)
 {
 	struct uprobe_task *utask = t->utask;
-	struct return_instance *ri;
+	struct return_instance *ri, *ri_next;
 
 	if (!utask)
 		return;
@@ -1921,8 +1918,11 @@ void uprobe_free_utask(struct task_struct *t)
 	timer_delete_sync(&utask->ri_timer);
 
 	ri = utask->return_instances;
-	while (ri)
-		ri = free_ret_instance(ri, true /* cleanup_hprobe */);
+	while (ri) {
+		ri_next = ri->next;
+		free_ret_instance(ri, true /* cleanup_hprobe */);
+		ri = ri_next;
+	}
 
 	kfree(utask);
 	t->utask = NULL;
@@ -2111,12 +2111,15 @@ unsigned long uprobe_get_trampoline_vaddr(void)
 static void cleanup_return_instances(struct uprobe_task *utask, bool chained,
 				     struct pt_regs *regs)
 {
-	struct return_instance *ri = utask->return_instances;
+	struct return_instance *ri = utask->return_instances, *ri_next;
 	enum rp_check ctx = chained ? RP_CHECK_CHAIN_CALL : RP_CHECK_CALL;
 
 	while (ri && !arch_uretprobe_is_alive(ri, ctx, regs)) {
-		ri = free_ret_instance(ri, true /* cleanup_hprobe */);
+		ri_next = ri->next;
 		utask->depth--;
+
+		free_ret_instance(ri, true /* cleanup_hprobe */);
+		ri = ri_next;
 	}
 	rcu_assign_pointer(utask->return_instances, ri);
 }
@@ -2508,7 +2511,7 @@ static struct return_instance *find_next_ret_chain(struct return_instance *ri)
 void uprobe_handle_trampoline(struct pt_regs *regs)
 {
 	struct uprobe_task *utask;
-	struct return_instance *ri, *next;
+	struct return_instance *ri, *ri_next, *next_chain;
 	struct uprobe *uprobe;
 	enum hprobe_state hstate;
 	bool valid;
@@ -2528,8 +2531,8 @@ void uprobe_handle_trampoline(struct pt_regs *regs)
 		 * or NULL; the latter case means that nobody but ri->func
 		 * could hit this trampoline on return. TODO: sigaltstack().
 		 */
-		next = find_next_ret_chain(ri);
-		valid = !next || arch_uretprobe_is_alive(next, RP_CHECK_RET, regs);
+		next_chain = find_next_ret_chain(ri);
+		valid = !next_chain || arch_uretprobe_is_alive(next_chain, RP_CHECK_RET, regs);
 
 		instruction_pointer_set(regs, ri->orig_ret_vaddr);
 		do {
@@ -2541,7 +2544,9 @@ void uprobe_handle_trampoline(struct pt_regs *regs)
 			 * trampoline addresses on the stack are replaced with correct
 			 * original return addresses
 			 */
-			rcu_assign_pointer(utask->return_instances, ri->next);
+			ri_next = ri->next;
+			rcu_assign_pointer(utask->return_instances, ri_next);
+			utask->depth--;
 
 			uprobe = hprobe_consume(&ri->hprobe, &hstate);
 			if (valid)
@@ -2549,9 +2554,9 @@ void uprobe_handle_trampoline(struct pt_regs *regs)
 			hprobe_finalize(&ri->hprobe, hstate);
 
 			/* We already took care of hprobe, no need to waste more time on that. */
-			ri = free_ret_instance(ri, false /* !cleanup_hprobe */);
-			utask->depth--;
-		} while (ri != next);
+			free_ret_instance(ri, false /* !cleanup_hprobe */);
+			ri = ri_next;
+		} while (ri != next_chain);
 	} while (!valid);
 
 	return;

From patchwork Fri Dec 6 00:24:16 2024
X-Patchwork-Submitter: Andrii Nakryiko <andrii@kernel.org>
X-Patchwork-Id: 13896152
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, mingo@kernel.org
Cc: oleg@redhat.com, rostedt@goodmis.org, mhiramat@kernel.org,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org,
    liaochang1@huawei.com, kernel-team@meta.com,
    Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH perf/core 3/4] uprobes: ensure return_instance is detached from the list before freeing
Date: Thu, 5 Dec 2024 16:24:16 -0800
Message-ID: <20241206002417.3295533-4-andrii@kernel.org>
In-Reply-To: <20241206002417.3295533-1-andrii@kernel.org>
References: <20241206002417.3295533-1-andrii@kernel.org>

Ensure that by the time we call free_ret_instance() to clean up an instance
of struct return_instance, it isn't reachable from utask->return_instances
anymore.

free_ret_instance() is called in a few different situations, all but one of
which are already fine w.r.t. return_instance visibility:

  - uprobe_free_utask() guarantees that ri_timer() won't be called (through
    the timer_delete_sync() call), so there is no need to unlink anything,
    because the entire utask is being freed;
  - uprobe_handle_trampoline() already unlinks the to-be-freed
    return_instance with rcu_assign_pointer() before calling
    free_ret_instance().

Only cleanup_return_instances() violates this property. So far that hasn't
caused problems, thanks to the RCU-delayed freeing of return_instance, but
the next patch will change that. So make sure we unlink a return_instance
before passing it into free_ret_instance(), as otherwise reuse will be
unsafe (see the illustrative sketch after the diff below).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 kernel/events/uprobes.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index cca1fe4a3fb1..2345aeb63d3b 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2116,12 +2116,12 @@ static void cleanup_return_instances(struct uprobe_task *utask, bool chained,
 
 	while (ri && !arch_uretprobe_is_alive(ri, ctx, regs)) {
 		ri_next = ri->next;
+		rcu_assign_pointer(utask->return_instances, ri_next);
 		utask->depth--;
 
 		free_ret_instance(ri, true /* cleanup_hprobe */);
 		ri = ri_next;
 	}
-	rcu_assign_pointer(utask->return_instances, ri);
 }
 
 static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs,
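To see why the unlink has to be published before the free, here is a rough
userspace analogue of the fixed cleanup loop, with a C11 release store
standing in for rcu_assign_pointer() and plain free() standing in for
kfree_rcu() (types and names are invented for the sketch):

    #include <stdatomic.h>
    #include <stdlib.h>

    struct node { struct node *next; };

    static _Atomic(struct node *) list_head; /* read concurrently elsewhere */

    static void cleanup_dead_prefix(struct node *stop)
    {
            struct node *n = atomic_load_explicit(&list_head, memory_order_acquire);

            while (n && n != stop) {
                    struct node *n_next = n->next;

                    /* publish the shortened list *before* freeing n ... */
                    atomic_store_explicit(&list_head, n_next, memory_order_release);
                    /* ... so nobody can start a traversal at a recycled node */
                    free(n);
                    n = n_next;
            }
    }

Updating the head only once, after the loop, as the old code did, leaves
already-freed nodes reachable from the list head in the meantime. That was
tolerable under RCU-delayed freeing, but it becomes a use-after-free once
instances are reused immediately, as the next patch starts doing.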
From patchwork Fri Dec 6 00:24:17 2024
X-Patchwork-Submitter: Andrii Nakryiko <andrii@kernel.org>
X-Patchwork-Id: 13896153
From: Andrii Nakryiko <andrii@kernel.org>
To: linux-trace-kernel@vger.kernel.org, peterz@infradead.org, mingo@kernel.org
Cc: oleg@redhat.com, rostedt@goodmis.org, mhiramat@kernel.org,
    bpf@vger.kernel.org, linux-kernel@vger.kernel.org, jolsa@kernel.org,
    liaochang1@huawei.com, kernel-team@meta.com,
    Andrii Nakryiko <andrii@kernel.org>
Subject: [PATCH perf/core 4/4] uprobes: reuse return_instances between multiple uretprobes within task
Date: Thu, 5 Dec 2024 16:24:17 -0800
Message-ID: <20241206002417.3295533-5-andrii@kernel.org>
In-Reply-To: <20241206002417.3295533-1-andrii@kernel.org>
References: <20241206002417.3295533-1-andrii@kernel.org>

Instead of constantly allocating and freeing very short-lived struct
return_instance, reuse it as much as possible within the current task. For
that, store a linked list of reusable return_instances within
current->utask.

The only complication is that ri_timer() might still be processing such a
return_instance. So while the main uretprobe processing logic might already
be done with a return_instance and would otherwise be OK to immediately
reuse it for the next uretprobe instance, it's not correct to reuse it
unconditionally just like that. Instead we make sure that ri_timer() can't
possibly be processing it by using a seqcount_t, with ri_timer() acting as
the writer and free_ret_instance() as a reader. If, after we unlink a
return instance from the utask->return_instances list, the seqcount shows
that ri_timer() hasn't gotten to processing utask->return_instances yet,
then we can be sure that immediate return_instance reuse is OK, and so we
put it onto utask->ri_pool for future (potentially almost immediate) reuse.
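The even/odd seqcount handshake can be sketched in userspace with a plain
atomic counter. This deliberately glosses over the ordering details that the
kernel's seqcount_t primitives and the preceding list unlink take care of,
and all names are illustrative:

    #include <stdatomic.h>
    #include <stdlib.h>

    struct instance { struct instance *next; };

    static atomic_uint seq;       /* even: scanner idle, odd: scan in progress */
    static struct instance *pool; /* per-task free-list of reusable instances */

    /* the ri_timer() role: bracket the list scan with two increments */
    static void scan_instances(void)
    {
            atomic_fetch_add_explicit(&seq, 1, memory_order_acq_rel); /* -> odd */
            /* ... walk the live list, possibly dereferencing instances ... */
            atomic_fetch_add_explicit(&seq, 1, memory_order_acq_rel); /* -> even */
    }

    /* the free_ret_instance() role: called after unlinking the instance */
    static void recycle_or_free(struct instance *ri)
    {
            unsigned s = atomic_load_explicit(&seq, memory_order_acquire);

            if ((s & 1) == 0) {
                    /* scanner is idle: immediate reuse is safe */
                    ri->next = pool;
                    pool = ri;
            } else {
                    /* scanner might still hold ri: defer (kfree_rcu() in kernel) */
                    free(ri);
            }
    }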
This change shows improvements both in single-CPU performance (by avoiding a
relatively expensive kmalloc/free combo) and in terms of multi-CPU
scalability, where you can see that per-CPU throughput doesn't decline as
steeply with an increased number of CPUs (a decline that was previously
attributed to kmalloc()/free() through profiling):

BASELINE (latest perf/core)
===========================
uretprobe-nop         ( 1 cpus):    1.898 ± 0.002M/s  (  1.898M/s/cpu)
uretprobe-nop         ( 2 cpus):    3.574 ± 0.011M/s  (  1.787M/s/cpu)
uretprobe-nop         ( 3 cpus):    5.279 ± 0.066M/s  (  1.760M/s/cpu)
uretprobe-nop         ( 4 cpus):    6.824 ± 0.047M/s  (  1.706M/s/cpu)
uretprobe-nop         ( 5 cpus):    8.339 ± 0.060M/s  (  1.668M/s/cpu)
uretprobe-nop         ( 6 cpus):    9.812 ± 0.047M/s  (  1.635M/s/cpu)
uretprobe-nop         ( 7 cpus):   11.030 ± 0.048M/s  (  1.576M/s/cpu)
uretprobe-nop         ( 8 cpus):   12.453 ± 0.126M/s  (  1.557M/s/cpu)
uretprobe-nop         (10 cpus):   14.838 ± 0.044M/s  (  1.484M/s/cpu)
uretprobe-nop         (12 cpus):   17.092 ± 0.115M/s  (  1.424M/s/cpu)
uretprobe-nop         (14 cpus):   19.576 ± 0.022M/s  (  1.398M/s/cpu)
uretprobe-nop         (16 cpus):   22.264 ± 0.015M/s  (  1.391M/s/cpu)
uretprobe-nop         (24 cpus):   33.534 ± 0.078M/s  (  1.397M/s/cpu)
uretprobe-nop         (32 cpus):   43.262 ± 0.127M/s  (  1.352M/s/cpu)
uretprobe-nop         (40 cpus):   53.252 ± 0.080M/s  (  1.331M/s/cpu)
uretprobe-nop         (48 cpus):   55.778 ± 0.045M/s  (  1.162M/s/cpu)
uretprobe-nop         (56 cpus):   56.850 ± 0.227M/s  (  1.015M/s/cpu)
uretprobe-nop         (64 cpus):   62.005 ± 0.077M/s  (  0.969M/s/cpu)
uretprobe-nop         (72 cpus):   66.445 ± 0.236M/s  (  0.923M/s/cpu)
uretprobe-nop         (80 cpus):   68.353 ± 0.180M/s  (  0.854M/s/cpu)

THIS PATCHSET (on top of latest perf/core)
==========================================
uretprobe-nop         ( 1 cpus):    2.253 ± 0.004M/s  (  2.253M/s/cpu)
uretprobe-nop         ( 2 cpus):    4.281 ± 0.003M/s  (  2.140M/s/cpu)
uretprobe-nop         ( 3 cpus):    6.389 ± 0.027M/s  (  2.130M/s/cpu)
uretprobe-nop         ( 4 cpus):    8.328 ± 0.005M/s  (  2.082M/s/cpu)
uretprobe-nop         ( 5 cpus):   10.353 ± 0.001M/s  (  2.071M/s/cpu)
uretprobe-nop         ( 6 cpus):   12.513 ± 0.010M/s  (  2.086M/s/cpu)
uretprobe-nop         ( 7 cpus):   14.525 ± 0.017M/s  (  2.075M/s/cpu)
uretprobe-nop         ( 8 cpus):   15.633 ± 0.013M/s  (  1.954M/s/cpu)
uretprobe-nop         (10 cpus):   19.532 ± 0.011M/s  (  1.953M/s/cpu)
uretprobe-nop         (12 cpus):   21.405 ± 0.009M/s  (  1.784M/s/cpu)
uretprobe-nop         (14 cpus):   24.857 ± 0.020M/s  (  1.776M/s/cpu)
uretprobe-nop         (16 cpus):   26.466 ± 0.018M/s  (  1.654M/s/cpu)
uretprobe-nop         (24 cpus):   40.513 ± 0.222M/s  (  1.688M/s/cpu)
uretprobe-nop         (32 cpus):   54.180 ± 0.074M/s  (  1.693M/s/cpu)
uretprobe-nop         (40 cpus):   66.100 ± 0.082M/s  (  1.652M/s/cpu)
uretprobe-nop         (48 cpus):   70.544 ± 0.068M/s  (  1.470M/s/cpu)
uretprobe-nop         (56 cpus):   74.494 ± 0.055M/s  (  1.330M/s/cpu)
uretprobe-nop         (64 cpus):   79.317 ± 0.029M/s  (  1.239M/s/cpu)
uretprobe-nop         (72 cpus):   84.875 ± 0.020M/s  (  1.179M/s/cpu)
uretprobe-nop         (80 cpus):   92.318 ± 0.224M/s  (  1.154M/s/cpu)

For reference, with uprobe-nop we hit the following throughput:

uprobe-nop            (80 cpus):  143.485 ± 0.035M/s  (  1.794M/s/cpu)

So now uretprobe stays a bit closer to that performance.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/uprobes.h |  6 ++-
 kernel/events/uprobes.c | 83 ++++++++++++++++++++++++++++++++++-------
 2 files changed, 75 insertions(+), 14 deletions(-)

diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 1d449978558d..b1df7d792fa1 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -16,6 +16,7 @@
 #include <linux/types.h>
 #include <linux/wait.h>
 #include <linux/timer.h>
+#include <linux/seqlock.h>
 
 struct uprobe;
 struct vm_area_struct;
@@ -124,6 +125,10 @@ struct uprobe_task {
 	unsigned int			depth;
 	struct return_instance		*return_instances;
 
+	struct return_instance		*ri_pool;
+	struct timer_list		ri_timer;
+	seqcount_t			ri_seqcount;
+
 	union {
 		struct {
 			struct arch_uprobe_task	autask;
@@ -137,7 +142,6 @@ struct uprobe_task {
 	};
 
 	struct uprobe			*active_uprobe;
-	struct timer_list		ri_timer;
 	unsigned long			xol_vaddr;
 
 	struct arch_uprobe		*auprobe;
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 2345aeb63d3b..1af950208c2b 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1888,8 +1888,34 @@ unsigned long uprobe_get_trap_addr(struct pt_regs *regs)
 	return instruction_pointer(regs);
 }
 
-static void free_ret_instance(struct return_instance *ri, bool cleanup_hprobe)
+static void ri_pool_push(struct uprobe_task *utask, struct return_instance *ri)
 {
+	ri->cons_cnt = 0;
+	ri->next = utask->ri_pool;
+	utask->ri_pool = ri;
+}
+
+static struct return_instance *ri_pool_pop(struct uprobe_task *utask)
+{
+	struct return_instance *ri = utask->ri_pool;
+
+	if (likely(ri))
+		utask->ri_pool = ri->next;
+
+	return ri;
+}
+
+static void ri_free(struct return_instance *ri)
+{
+	kfree(ri->extra_consumers);
+	kfree_rcu(ri, rcu);
+}
+
+static void free_ret_instance(struct uprobe_task *utask,
+			      struct return_instance *ri, bool cleanup_hprobe)
+{
+	unsigned seq;
+
 	if (cleanup_hprobe) {
 		enum hprobe_state hstate;
 
@@ -1897,8 +1923,22 @@ static void free_ret_instance(struct return_instance *ri, bool cleanup_hprobe)
 		hprobe_finalize(&ri->hprobe, hstate);
 	}
 
-	kfree(ri->extra_consumers);
-	kfree_rcu(ri, rcu);
+	/*
+	 * At this point return_instance is unlinked from utask's
+	 * return_instances list and this has become visible to ri_timer().
+	 * If seqcount now indicates that ri_timer's return instance
+	 * processing loop isn't active, we can return ri into the pool of
+	 * to-be-reused return instances for future uretprobes. If ri_timer()
+	 * happens to be running right now, though, we fallback to safety and
+	 * just perform RCU-delayed freeing of ri.
+	 */
+	if (raw_seqcount_try_begin(&utask->ri_seqcount, seq)) {
+		/* immediate reuse of ri without RCU GP is OK */
+		ri_pool_push(utask, ri);
+	} else {
+		/* we might be racing with ri_timer(), so play it safe */
+		ri_free(ri);
+	}
 }
 
 /*
@@ -1920,7 +1960,15 @@ void uprobe_free_utask(struct task_struct *t)
 	ri = utask->return_instances;
 	while (ri) {
 		ri_next = ri->next;
-		free_ret_instance(ri, true /* cleanup_hprobe */);
+		free_ret_instance(utask, ri, true /* cleanup_hprobe */);
 		ri = ri_next;
 	}
 
+	/* free_ret_instance() above might add to ri_pool, so this loop should come last */
+	ri = utask->ri_pool;
+	while (ri) {
+		ri_next = ri->next;
+		ri_free(ri);
+		ri = ri_next;
+	}
+
 	kfree(utask);
 	t->utask = NULL;
 }
@@ -1943,8 +1991,12 @@ static void ri_timer(struct timer_list *timer)
 	/* RCU protects return_instance from freeing. */
 	guard(rcu)();
 
+	write_seqcount_begin(&utask->ri_seqcount);
+
 	for_each_ret_instance_rcu(ri, utask->return_instances)
 		hprobe_expire(&ri->hprobe, false);
+
+	write_seqcount_end(&utask->ri_seqcount);
 }
 
 static struct uprobe_task *alloc_utask(void)
@@ -1956,6 +2008,7 @@ static struct uprobe_task *alloc_utask(void)
 		return NULL;
 
 	timer_setup(&utask->ri_timer, ri_timer, 0);
+	seqcount_init(&utask->ri_seqcount);
 
 	return utask;
 }
@@ -1975,10 +2028,14 @@ static struct uprobe_task *get_utask(void)
 	return current->utask;
 }
 
-static struct return_instance *alloc_return_instance(void)
+static struct return_instance *alloc_return_instance(struct uprobe_task *utask)
 {
 	struct return_instance *ri;
 
+	ri = ri_pool_pop(utask);
+	if (ri)
+		return ri;
+
 	ri = kzalloc(sizeof(*ri), GFP_KERNEL);
 	if (!ri)
 		return ZERO_SIZE_PTR;
@@ -2119,7 +2176,7 @@ static void cleanup_return_instances(struct uprobe_task *utask, bool chained,
 		rcu_assign_pointer(utask->return_instances, ri_next);
 		utask->depth--;
 
-		free_ret_instance(ri, true /* cleanup_hprobe */);
+		free_ret_instance(utask, ri, true /* cleanup_hprobe */);
 		ri = ri_next;
 	}
 }
@@ -2186,7 +2243,7 @@ static void prepare_uretprobe(struct uprobe *uprobe, struct pt_regs *regs,
 	return;
 free:
-	kfree(ri);
+	ri_free(ri);
 }
 
 /* Prepare to single-step probed instruction out of line. */
@@ -2385,8 +2442,7 @@ static struct return_instance *push_consumer(struct return_instance *ri, __u64 id, __u64 cookie)
 	if (unlikely(ri->cons_cnt > 0)) {
 		ric = krealloc(ri->extra_consumers, sizeof(*ric) * ri->cons_cnt, GFP_KERNEL);
 		if (!ric) {
-			kfree(ri->extra_consumers);
-			kfree_rcu(ri, rcu);
+			ri_free(ri);
 			return ZERO_SIZE_PTR;
 		}
 		ri->extra_consumers = ric;
@@ -2428,8 +2484,9 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 	struct uprobe_consumer *uc;
 	bool has_consumers = false, remove = true;
 	struct return_instance *ri = NULL;
+	struct uprobe_task *utask = current->utask;
 
-	current->utask->auprobe = &uprobe->arch;
+	utask->auprobe = &uprobe->arch;
 
 	list_for_each_entry_rcu(uc, &uprobe->consumers, cons_node,
 				rcu_read_lock_trace_held()) {
 		bool session = uc->handler && uc->ret_handler;
@@ -2449,12 +2506,12 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
 		if (!ri)
-			ri = alloc_return_instance();
+			ri = alloc_return_instance(utask);
 
 		if (session)
 			ri = push_consumer(ri, uc->id, cookie);
 	}
-	current->utask->auprobe = NULL;
+	utask->auprobe = NULL;
 
 	if (!ZERO_OR_NULL_PTR(ri))
 		prepare_uretprobe(uprobe, regs, ri);
@@ -2554,7 +2611,7 @@ void uprobe_handle_trampoline(struct pt_regs *regs)
 			hprobe_finalize(&ri->hprobe, hstate);
 
 			/* We already took care of hprobe, no need to waste more time on that. */
-			free_ret_instance(ri, false /* !cleanup_hprobe */);
+			free_ret_instance(utask, ri, false /* !cleanup_hprobe */);
 			ri = ri_next;
 		} while (ri != next_chain);
 	} while (!valid);