From patchwork Tue May 30 23:53:00 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 13261230
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 1/5] tracing/user_events: Store register flags on events
Date: Tue, 30 May 2023 16:53:00 -0700
Message-Id: <20230530235304.2726-2-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>
List-ID: linux-trace-kernel@vger.kernel.org

Currently we don't have any available flags for user processes to use to
indicate options for user_events. We will soon have a flag to indicate
the event should auto-delete once it's not being used by anyone.

Add a reg_flags field to user_events and parameters to existing functions
to allow for this in future patches.

Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index b1ecd7677642..34aa0a5d8e2a 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -87,6 +87,7 @@ struct user_event {
 	struct list_head validators;
 	refcount_t refcnt;
 	int min_size;
+	int reg_flags;
 	char status;
 };
 
@@ -163,7 +164,7 @@ typedef void (*user_event_func_t) (struct user_event *user, struct iov_iter *i,
 
 static int user_event_parse(struct user_event_group *group, char *name,
 			    char *args, char *flags,
-			    struct user_event **newuser);
+			    struct user_event **newuser, int reg_flags);
 
 static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm);
 static struct user_event_mm *user_event_mm_get_all(struct user_event *user);
@@ -809,7 +810,8 @@ static struct list_head *user_event_get_fields(struct trace_event_call *call)
  * Upon success user_event has its ref count increased by 1.
 */
 static int user_event_parse_cmd(struct user_event_group *group,
-				char *raw_command, struct user_event **newuser)
+				char *raw_command, struct user_event **newuser,
+				int reg_flags)
 {
 	char *name = raw_command;
 	char *args = strpbrk(name, " ");
@@ -823,7 +825,7 @@ static int user_event_parse_cmd(struct user_event_group *group,
 	if (flags)
 		*flags++ = '\0';
 
-	return user_event_parse(group, name, args, flags, newuser);
+	return user_event_parse(group, name, args, flags, newuser, reg_flags);
 }
 
 static int user_field_array_size(const char *type)
@@ -1587,7 +1589,7 @@ static int user_event_create(const char *raw_command)
 
 	mutex_lock(&group->reg_mutex);
 
-	ret = user_event_parse_cmd(group, name, &user);
+	ret = user_event_parse_cmd(group, name, &user, 0);
 
 	if (!ret)
 		refcount_dec(&user->refcnt);
@@ -1748,7 +1750,7 @@ static int user_event_trace_register(struct user_event *user)
  */
 static int user_event_parse(struct user_event_group *group, char *name,
 			    char *args, char *flags,
-			    struct user_event **newuser)
+			    struct user_event **newuser, int reg_flags)
 {
 	int ret;
 	u32 key;
@@ -1819,6 +1821,8 @@ static int user_event_parse(struct user_event_group *group, char *name,
 	if (ret)
 		goto put_user_lock;
 
+	user->reg_flags = reg_flags;
+
 	/* Ensure we track self ref and caller ref (2) */
 	refcount_set(&user->refcnt, 2);
 
@@ -2117,7 +2121,7 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 		return ret;
 	}
 
-	ret = user_event_parse_cmd(info->group, name, &user);
+	ret = user_event_parse_cmd(info->group, name, &user, reg.flags);
 
 	if (ret) {
 		kfree(name);

From patchwork Tue May 30 23:53:01 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 13261232
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 2/5] tracing/user_events: Track refcount consistently via put/get
Date: Tue, 30 May 2023 16:53:01 -0700
Message-Id: <20230530235304.2726-3-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>

Various parts of the code today track user_event's refcnt field directly
via a refcount_add/dec. This makes it hard to modify the behavior of the
last reference decrement in all code paths consistently.
For example, in the future we will auto-delete events upon the last
reference going away. This last reference could happen in many places,
but we want it to be consistently handled.

Add user_event_get() and user_event_put() for the add/dec. Update all
places where direct refcounts are being used to utilize these new
functions. In each location pass whether event_mutex is locked or not.
This allows us to cleanly drop events automatically in future patches.
Lockdep asserts ensure that when a caller states the lock is held, it
really is (or is not) held.

Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c | 66 +++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 26 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 34aa0a5d8e2a..8f0fb6cb0f33 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -175,6 +175,28 @@ static u32 user_event_key(char *name)
 	return jhash(name, strlen(name), 0);
 }
 
+static struct user_event *user_event_get(struct user_event *user)
+{
+	refcount_inc(&user->refcnt);
+
+	return user;
+}
+
+static void user_event_put(struct user_event *user, bool locked)
+{
+#ifdef CONFIG_LOCKDEP
+	if (locked)
+		lockdep_assert_held(&event_mutex);
+	else
+		lockdep_assert_not_held(&event_mutex);
+#endif
+
+	if (unlikely(!user))
+		return;
+
+	refcount_dec(&user->refcnt);
+}
+
 static void user_event_group_destroy(struct user_event_group *group)
 {
 	kfree(group->system_name);
@@ -258,12 +280,13 @@ static struct user_event_group
 	return NULL;
 };
 
-static void user_event_enabler_destroy(struct user_event_enabler *enabler)
+static void user_event_enabler_destroy(struct user_event_enabler *enabler,
+				       bool locked)
 {
 	list_del_rcu(&enabler->link);
 
 	/* No longer tracking the event via the enabler */
-	refcount_dec(&enabler->event->refcnt);
+	user_event_put(enabler->event, locked);
 
 	kfree(enabler);
 }
@@ -325,7 +348,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 
 	/* User asked for enabler to be removed during fault */
 	if (test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))) {
-		user_event_enabler_destroy(enabler);
+		user_event_enabler_destroy(enabler, true);
 		goto out;
 	}
 
@@ -489,13 +512,12 @@ static bool user_event_enabler_dup(struct user_event_enabler *orig,
 	if (!enabler)
 		return false;
 
-	enabler->event = orig->event;
+	enabler->event = user_event_get(orig->event);
 	enabler->addr = orig->addr;
 
 	/* Only dup part of value (ignore future flags, etc) */
 	enabler->values = orig->values & ENABLE_VAL_DUP_MASK;
 
-	refcount_inc(&enabler->event->refcnt);
 	list_add_rcu(&enabler->link, &mm->enablers);
 
 	return true;
@@ -595,7 +617,7 @@ static void user_event_mm_destroy(struct user_event_mm *mm)
 	struct user_event_enabler *enabler, *next;
 
 	list_for_each_entry_safe(enabler, next, &mm->enablers, link)
-		user_event_enabler_destroy(enabler);
+		user_event_enabler_destroy(enabler, false);
 
 	mmdrop(mm->mm);
 	kfree(mm);
@@ -748,7 +770,7 @@ static struct user_event_enabler
 	 * exit or run exec(), which includes forks and clones.
 	 */
 	if (!*write_result) {
-		refcount_inc(&enabler->event->refcnt);
+		user_event_get(user);
 		list_add_rcu(&enabler->link, &user_mm->enablers);
 	}
 
@@ -1336,10 +1358,8 @@ static struct user_event *find_user_event(struct user_event_group *group,
 	*outkey = key;
 
 	hash_for_each_possible(group->register_table, user, node, key)
-		if (!strcmp(EVENT_NAME(user), name)) {
-			refcount_inc(&user->refcnt);
-			return user;
-		}
+		if (!strcmp(EVENT_NAME(user), name))
+			return user_event_get(user);
 
 	return NULL;
 }
@@ -1553,12 +1573,12 @@ static int user_event_reg(struct trace_event_call *call,
 
 	return ret;
 inc:
-	refcount_inc(&user->refcnt);
+	user_event_get(user);
 	update_enable_bit_for(user);
 	return 0;
 dec:
 	update_enable_bit_for(user);
-	refcount_dec(&user->refcnt);
+	user_event_put(user, true);
 	return 0;
 }
@@ -1592,7 +1612,7 @@ static int user_event_create(const char *raw_command)
 	ret = user_event_parse_cmd(group, name, &user, 0);
 
 	if (!ret)
-		refcount_dec(&user->refcnt);
+		user_event_put(user, false);
 
 	mutex_unlock(&group->reg_mutex);
@@ -1856,7 +1876,7 @@ static int delete_user_event(struct user_event_group *group, char *name)
 	if (!user)
 		return -ENOENT;
 
-	refcount_dec(&user->refcnt);
+	user_event_put(user, true);
 
 	if (!user_event_last_ref(user))
 		return -EBUSY;
@@ -2015,9 +2035,7 @@ static int user_events_ref_add(struct user_event_file_info *info,
 	for (i = 0; i < count; ++i)
 		new_refs->events[i] = refs->events[i];
 
-	new_refs->events[i] = user;
-
-	refcount_inc(&user->refcnt);
+	new_refs->events[i] = user_event_get(user);
 
 	rcu_assign_pointer(info->refs, new_refs);
@@ -2131,7 +2149,7 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 	ret = user_events_ref_add(info, user);
 
 	/* No longer need parse ref, ref_add either worked or not */
-	refcount_dec(&user->refcnt);
+	user_event_put(user, false);
 
 	/* Positive number is index and valid */
 	if (ret < 0)
@@ -2280,7 +2298,7 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 			set_bit(ENABLE_VAL_FREEING_BIT,
 				ENABLE_BITOPS(enabler));
 
 			if (!test_bit(ENABLE_VAL_FAULTING_BIT,
 				      ENABLE_BITOPS(enabler)))
-				user_event_enabler_destroy(enabler);
+				user_event_enabler_destroy(enabler, true);
 
 			/* Removed at least one */
 			ret = 0;
@@ -2337,7 +2355,6 @@ static int user_events_release(struct inode *node, struct file *file)
 	struct user_event_file_info *info = file->private_data;
 	struct user_event_group *group;
 	struct user_event_refs *refs;
-	struct user_event *user;
 	int i;
 
 	if (!info)
@@ -2361,12 +2378,9 @@ static int user_events_release(struct inode *node, struct file *file)
 	 * The underlying user_events are ref counted, and cannot be freed.
 	 * After this decrement, the user_events may be freed elsewhere.
 	 */
-	for (i = 0; i < refs->count; ++i) {
-		user = refs->events[i];
+	for (i = 0; i < refs->count; ++i)
+		user_event_put(refs->events[i], false);
 
-		if (user)
-			refcount_dec(&user->refcnt);
-	}
 out:
 	file->private_data = NULL;

From patchwork Tue May 30 23:53:02 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 13261233
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 3/5] tracing/user_events: Add flag to auto-delete events
Date: Tue, 30 May 2023 16:53:02 -0700
Message-Id: <20230530235304.2726-4-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>

Currently user events need to be manually deleted via the delete IOCTL
call or via the dynamic_events file. Some operators and processes wish to
have these events cleaned up automatically when they are no longer used
by anything, to prevent them piling up without manual maintenance.

Add an auto delete flag to the user facing header and honor it within the
register IOCTL call. Add a max flag as well to ensure that only known
flags can be used now and in the future.

Update user_event_put() to attempt an auto delete of the event if it's
the last reference. The auto delete must run in a work queue to ensure
proper behavior of class->reg() invocations that don't expect the call
to go away from underneath them during the unregister. Add a work_struct
to the user_event struct to ensure we can do this reliably.
Link: https://lore.kernel.org/linux-trace-kernel/20230518093600.3f119d68@rorschach.local.home/

Suggested-by: Steven Rostedt
Signed-off-by: Beau Belgrave
---
 include/uapi/linux/user_events.h |  10 ++-
 kernel/trace/trace_events_user.c | 115 +++++++++++++++++++++++++++----
 2 files changed, 112 insertions(+), 13 deletions(-)

diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h
index 2984aae4a2b4..635f45bc6457 100644
--- a/include/uapi/linux/user_events.h
+++ b/include/uapi/linux/user_events.h
@@ -17,6 +17,14 @@
 /* Create dynamic location entry within a 32-bit value */
 #define DYN_LOC(offset, size) ((size) << 16 | (offset))
 
+enum user_reg_flag {
+	/* Event will auto delete upon last reference closing */
+	USER_EVENT_REG_AUTO_DEL = 1U << 0,
+
+	/* This value or above is currently non-ABI */
+	USER_EVENT_REG_MAX = 1U << 1,
+};
+
 /*
  * Describes an event registration and stores the results of the registration.
  * This structure is passed to the DIAG_IOCSREG ioctl, callers at a minimum
@@ -33,7 +41,7 @@ struct user_reg {
 	/* Input: Enable size in bytes at address */
 	__u8 enable_size;
 
-	/* Input: Flags for future use, set to 0 */
+	/* Input: Flags can be any of the above user_reg_flag values */
 	__u16 flags;
 
 	/* Input: Address to update when enabled */
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 8f0fb6cb0f33..ddd199f286fe 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -85,6 +85,7 @@ struct user_event {
 	struct hlist_node node;
 	struct list_head fields;
 	struct list_head validators;
+	struct work_struct put_work;
 	refcount_t refcnt;
 	int min_size;
 	int reg_flags;
@@ -169,6 +170,7 @@ static int user_event_parse(struct user_event_group *group, char *name,
 static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm);
 static struct user_event_mm *user_event_mm_get_all(struct user_event *user);
 static void user_event_mm_put(struct user_event_mm *mm);
+static int destroy_user_event(struct user_event *user);
 
 static u32 user_event_key(char *name)
 {
@@ -182,19 +184,98 @@ static struct user_event *user_event_get(struct user_event *user)
 	return user;
 }
 
+static void delayed_destroy_user_event(struct work_struct *work)
+{
+	struct user_event *user = container_of(
+		work, struct user_event, put_work);
+
+	mutex_lock(&event_mutex);
+
+	if (!refcount_dec_and_test(&user->refcnt))
+		goto out;
+
+	if (destroy_user_event(user)) {
+		/*
+		 * The only reason this would fail here is if we cannot
+		 * update the visibility of the event. In this case the
+		 * event stays in the hashtable, waiting for someone to
+		 * attempt to delete it later.
+		 */
+		pr_warn("user_events: Unable to delete event\n");
+		refcount_set(&user->refcnt, 1);
+	}
+out:
+	mutex_unlock(&event_mutex);
+}
+
 static void user_event_put(struct user_event *user, bool locked)
 {
-#ifdef CONFIG_LOCKDEP
-	if (locked)
-		lockdep_assert_held(&event_mutex);
-	else
-		lockdep_assert_not_held(&event_mutex);
-#endif
+	bool delete;
 
 	if (unlikely(!user))
 		return;
 
-	refcount_dec(&user->refcnt);
+	/*
+	 * When the event is not enabled for auto-delete there will always
+	 * be at least 1 reference to the event. During the event creation
+	 * we initially set the refcnt to 2 to achieve this. In those cases
+	 * the caller must acquire event_mutex and after decrement check if
+	 * the refcnt is 1, meaning this is the last reference. When auto
+	 * delete is enabled, there will only be 1 ref, IE: refcnt will be
+	 * only set to 1 during creation to allow the below checks to go
+	 * through upon the last put. The last put must always be done with
+	 * the event mutex held.
+	 */
+	if (!locked) {
+		lockdep_assert_not_held(&event_mutex);
+		delete = refcount_dec_and_mutex_lock(&user->refcnt, &event_mutex);
+	} else {
+		lockdep_assert_held(&event_mutex);
+		delete = refcount_dec_and_test(&user->refcnt);
+	}
+
+	if (!delete)
+		return;
+
+	/* We now have the event_mutex in all cases */
+
+	if (!(user->reg_flags & USER_EVENT_REG_AUTO_DEL)) {
+		/* We should not get here unless the auto-delete flag is set */
+		pr_alert("BUG: Auto-delete engaged without it enabled\n");
+		goto out;
+	}
+
+	/*
+	 * Unfortunately we have to attempt the actual destroy in a work
+	 * queue. This is because not all cases handle a trace_event_call
+	 * being removed within the class->reg() operation for unregister.
+	 */
+	INIT_WORK(&user->put_work, delayed_destroy_user_event);
+
+	/*
+	 * Since the event is still in the hashtable, we have to re-inc
+	 * the ref count to 1. This count will be decremented and checked
+	 * in the work queue to ensure it's still the last ref. This is
+	 * needed because a user-process could register the same event in
+	 * between the time of event_mutex release and the work queue
+	 * running the delayed destroy. If we removed the item now from
+	 * the hashtable, this would result in a timing window where a
+	 * user process would fail a register because the trace_event_call
+	 * register would fail in the tracing layers.
+	 */
+	refcount_set(&user->refcnt, 1);
+
+	if (!schedule_work(&user->put_work)) {
+		/*
+		 * If we fail we must wait for an admin to attempt delete or
+		 * another register/close of the event, whichever is first.
+		 */
+		pr_warn("user_events: Unable to queue delayed destroy\n");
+	}
+out:
+	/* Ensure if we didn't have event_mutex before we unlock it */
+	if (!locked)
+		mutex_unlock(&event_mutex);
 }
 
 static void user_event_group_destroy(struct user_event_group *group)
@@ -793,7 +874,12 @@ static struct user_event_enabler
 static __always_inline __must_check
 bool user_event_last_ref(struct user_event *user)
 {
-	return refcount_read(&user->refcnt) == 1;
+	int last = 1;
+
+	if (user->reg_flags & USER_EVENT_REG_AUTO_DEL)
+		last = 0;
+
+	return refcount_read(&user->refcnt) == last;
 }
 
 static __always_inline __must_check
@@ -1843,8 +1929,13 @@ static int user_event_parse(struct user_event_group *group, char *name,
 
 	user->reg_flags = reg_flags;
 
-	/* Ensure we track self ref and caller ref (2) */
-	refcount_set(&user->refcnt, 2);
+	if (user->reg_flags & USER_EVENT_REG_AUTO_DEL) {
+		/* Ensure we track only caller ref (1) */
+		refcount_set(&user->refcnt, 1);
+	} else {
+		/* Ensure we track self ref and caller ref (2) */
+		refcount_set(&user->refcnt, 2);
+	}
 
 	dyn_event_init(&user->devent, &user_event_dops);
 	dyn_event_add(&user->devent, &user->call);
@@ -2066,8 +2157,8 @@ static long user_reg_get(struct user_reg __user *ureg, struct user_reg *kreg)
 	if (ret)
 		return ret;
 
-	/* Ensure no flags, since we don't support any yet */
-	if (kreg->flags != 0)
+	/* Ensure only valid flags */
+	if (kreg->flags & ~(USER_EVENT_REG_MAX-1))
 		return -EINVAL;
 
 	/* Ensure supported size */

From patchwork Tue May 30 23:53:03 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 13261231
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 4/5] tracing/user_events: Add self-test for auto-del flag
Date: Tue, 30 May 2023 16:53:03 -0700
Message-Id: <20230530235304.2726-5-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>

A new flag for auto-deleting user_events upon the last reference now
exists. We must ensure this flag works correctly in the common cases.

Update the abi self-test to ensure that when this flag is used the
user_event goes away at the appropriate time.
Ensure the last fd, enabler, and trace_event_call ref paths each
correctly delete the event.

Signed-off-by: Beau Belgrave
---
 .../testing/selftests/user_events/abi_test.c  | 115 ++++++++++++++++--
 1 file changed, 107 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c
index 5125c42efe65..9c726616f763 100644
--- a/tools/testing/selftests/user_events/abi_test.c
+++ b/tools/testing/selftests/user_events/abi_test.c
@@ -22,10 +22,41 @@
 const char *data_file = "/sys/kernel/tracing/user_events_data";
 const char *enable_file = "/sys/kernel/tracing/events/user_events/__abi_event/enable";
+const char *temp_enable_file = "/sys/kernel/tracing/events/user_events/__abi_temp_event/enable";
 
-static int change_event(bool enable)
+static bool temp_exists(int grace_ms)
+{
+	int fd;
+
+	usleep(grace_ms * 1000);
+
+	fd = open(temp_enable_file, O_RDONLY);
+
+	if (fd == -1)
+		return false;
+
+	close(fd);
+
+	return true;
+}
+
+static int clear_temp(void)
+{
+	int fd = open(data_file, O_RDWR);
+	int ret = 0;
+
+	if (ioctl(fd, DIAG_IOCSDEL, "__abi_temp_event") == -1)
+		if (errno != ENOENT)
+			ret = -1;
+
+	close(fd);
+
+	return ret;
+}
+
+static int __change_event(const char *path, bool enable)
 {
-	int fd = open(enable_file, O_RDWR);
+	int fd = open(path, O_RDWR);
 	int ret;
 
 	if (fd < 0)
@@ -46,22 +77,48 @@ static int change_event(bool enable)
 	return ret;
 }
 
-static int reg_enable(long *enable, int size, int bit)
+static int change_temp_event(bool enable)
+{
+	return __change_event(temp_enable_file, enable);
+}
+
+static int change_event(bool enable)
+{
+	return __change_event(enable_file, enable);
+}
+
+static int __reg_enable(int *fd, const char *name, long *enable, int size,
+			int bit, int flags)
 {
 	struct user_reg reg = {0};
-	int fd = open(data_file, O_RDWR);
-	int ret;
 
-	if (fd < 0)
+	*fd = open(data_file, O_RDWR);
+
+	if (*fd < 0)
 		return -1;
 
 	reg.size = sizeof(reg);
-	reg.name_args = (__u64)"__abi_event";
+	reg.name_args = (__u64)name;
 	reg.enable_bit = bit;
 	reg.enable_addr = (__u64)enable;
 	reg.enable_size = size;
+	reg.flags = flags;
+
+	return ioctl(*fd, DIAG_IOCSREG, &reg);
+}
 
-	ret = ioctl(fd, DIAG_IOCSREG, &reg);
+static int reg_enable_temp(int *fd, long *enable, int size, int bit)
+{
+	return __reg_enable(fd, "__abi_temp_event", enable, size, bit,
+			    USER_EVENT_REG_AUTO_DEL);
+}
+
+static int reg_enable(long *enable, int size, int bit)
+{
+	int ret;
+	int fd;
+
+	ret = __reg_enable(&fd, "__abi_event", enable, size, bit, 0);
 
 	close(fd);
 
@@ -98,6 +155,7 @@ FIXTURE_SETUP(user) {
 }
 
 FIXTURE_TEARDOWN(user) {
+	clear_temp();
 }
 
 TEST_F(user, enablement) {
@@ -223,6 +281,47 @@ TEST_F(user, clones) {
 	ASSERT_EQ(0, change_event(false));
 }
 
+TEST_F(user, flags) {
+	int grace = 100;
+	int fd;
+
+	/* FLAG: USER_EVENT_REG_AUTO_DEL */
+	/* Removal path 1, close on last fd ref */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	close(fd);
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* Removal path 2, close on last enabler */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	close(fd);
+	ASSERT_EQ(true, temp_exists(grace));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* Removal path 3, close on last trace_event ref */
+	ASSERT_EQ(0, clear_temp());
+	ASSERT_EQ(0, reg_enable_temp(&fd, &self->check, sizeof(int), 0));
+	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(0, change_temp_event(true));
+	close(fd);
+	ASSERT_EQ(true, temp_exists(grace));
+	ASSERT_EQ(0, change_temp_event(false));
+	ASSERT_EQ(false, temp_exists(grace));
+
+	/* FLAG: Non-ABI */
+	/* Unknown flags should fail with EINVAL */
+	ASSERT_EQ(-1, __reg_enable(&fd, "__abi_invalid_event", &self->check,
+				   sizeof(int), 0, USER_EVENT_REG_MAX));
+	ASSERT_EQ(EINVAL, errno);
+
+	ASSERT_EQ(-1, __reg_enable(&fd, "__abi_invalid_event", &self->check,
+				   sizeof(int), 0, USER_EVENT_REG_MAX + 1));
+	ASSERT_EQ(EINVAL, errno);
+}
+
 int main(int argc, char **argv)
 {
 	return test_harness_run(argc, argv);

From patchwork Tue May 30 23:53:04 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 13261234
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org, ast@kernel.org, dcook@linux.microsoft.com
Subject: [PATCH 5/5] tracing/user_events: Add auto-del flag documentation
Date: Tue, 30 May 2023 16:53:04 -0700
Message-Id: <20230530235304.2726-6-beaub@linux.microsoft.com>
In-Reply-To: <20230530235304.2726-1-beaub@linux.microsoft.com>
References: <20230530235304.2726-1-beaub@linux.microsoft.com>

There is now a flag for user_events to use when registering events to
auto delete events upon the last reference put.

Add the new flag, USER_EVENT_REG_AUTO_DEL, to the user_events
documentation files to let people know how to use it.

Signed-off-by: Beau Belgrave
---
 Documentation/trace/user_events.rst | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/Documentation/trace/user_events.rst b/Documentation/trace/user_events.rst
index f79987e16cf4..946da25be812 100644
--- a/Documentation/trace/user_events.rst
+++ b/Documentation/trace/user_events.rst
@@ -39,6 +39,14 @@ DIAG_IOCSREG.
 
 This command takes a packed struct user_reg as an argument::
 
+  enum user_reg_flag {
+	/* Event will auto delete upon last reference closing */
+	USER_EVENT_REG_AUTO_DEL = 1U << 0,
+
+	/* This value or above is currently non-ABI */
+	USER_EVENT_REG_MAX = 1U << 1,
+  };
+
   struct user_reg {
 	/* Input: Size of the user_reg structure being used */
 	__u32 size;
@@ -49,7 +57,7 @@ This command takes a packed struct user_reg as an argument::
 	/* Input: Enable size in bytes at address */
 	__u8 enable_size;
 
-	/* Input: Flags for future use, set to 0 */
+	/* Input: Flags can be any of the above user_reg_flag values */
 	__u16 flags;
 
 	/* Input: Address to update when enabled */
@@ -73,10 +81,13 @@ The struct user_reg requires all the above inputs to be set appropriately.
   This must be 4 (32-bit) or 8 (64-bit). 64-bit values are only allowed to be
   used on 64-bit kernels, however, 32-bit can be used on all kernels.
 
-+ flags: The flags to use, if any. For the initial version this must be 0.
-  Callers should first attempt to use flags and retry without flags to ensure
-  support for lower versions of the kernel. If a flag is not supported -EINVAL
-  is returned.
++ flags: The flags to use, if any. Callers should first attempt to use flags
+  and retry without flags to ensure support for lower versions of the kernel.
+  If a flag is not supported -EINVAL is returned.
+
+  **USER_EVENT_REG_AUTO_DEL**
+	When the last reference is closed for the event, the event will delete
+	itself automatically as if the delete IOCTL was issued by a user.
 
 + enable_addr: The address of the value to use to reflect event status. This
   must be naturally aligned and write accessible within the user program.