
[2/3] tracing/user_events: Ensure bit is cleared on unregister

Message ID 20230411211709.15018-3-beaub@linux.microsoft.com (mailing list archive)
State Superseded
Series tracing/user_events: Fixes and improvements for 6.4

Commit Message

Beau Belgrave April 11, 2023, 9:17 p.m. UTC
If an event is enabled and a user process unregisters user_events, the
bit is left set. Fix this by always clearing the bit in the user process
if unregister is successful.

Update abi self-test to ensure this occurs properly.

Suggested-by: Doug Cook <dcook@linux.microsoft.com>
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c              | 34 +++++++++++++++++++
 .../testing/selftests/user_events/abi_test.c  |  9 +++--
 2 files changed, 40 insertions(+), 3 deletions(-)

Comments

Steven Rostedt April 25, 2023, 1:39 a.m. UTC | #1
On Tue, 11 Apr 2023 14:17:08 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> +static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
> +				   unsigned long uaddr, unsigned char bit)
> +{
> +	struct user_event_enabler enabler;
> +	int result;
> +
> +	memset(&enabler, 0, sizeof(enabler));
> +	enabler.addr = uaddr;
> +	enabler.values = bit;
> +retry:
> +	/* Prevents state changes from racing with new enablers */
> +	mutex_lock(&event_mutex);
> +
> +	/* Force the bit to be cleared, since no event is attached */
> +	mmap_read_lock(user_mm->mm);
> +	result = user_event_enabler_write(user_mm, &enabler, false);
> +	mmap_read_unlock(user_mm->mm);
> +
> +	mutex_unlock(&event_mutex);
> +
> +	if (result) {
> +		/* Attempt to fault-in and retry if it worked */
> +		if (!user_event_mm_fault_in(user_mm, uaddr))
> +			goto retry;

Without looking into the functions of this call, I wonder if this can
get into an infinite loop?

-- Steve


> +	}
> +
> +	return result;
> +}
> +
Beau Belgrave April 25, 2023, 5:06 p.m. UTC | #2
On Mon, Apr 24, 2023 at 09:39:57PM -0400, Steven Rostedt wrote:
> On Tue, 11 Apr 2023 14:17:08 -0700
> Beau Belgrave <beaub@linux.microsoft.com> wrote:
> 
> > +static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
> > +				   unsigned long uaddr, unsigned char bit)
> > +{
> > +	struct user_event_enabler enabler;
> > +	int result;
> > +
> > +	memset(&enabler, 0, sizeof(enabler));
> > +	enabler.addr = uaddr;
> > +	enabler.values = bit;
> > +retry:
> > +	/* Prevents state changes from racing with new enablers */
> > +	mutex_lock(&event_mutex);
> > +
> > +	/* Force the bit to be cleared, since no event is attached */
> > +	mmap_read_lock(user_mm->mm);
> > +	result = user_event_enabler_write(user_mm, &enabler, false);
> > +	mmap_read_unlock(user_mm->mm);
> > +
> > +	mutex_unlock(&event_mutex);
> > +
> > +	if (result) {
> > +		/* Attempt to fault-in and retry if it worked */
> > +		if (!user_event_mm_fault_in(user_mm, uaddr))
> > +			goto retry;
> 
> Without looking into the functions of this call, I wonder if this can
> get into an infinite loop?
> 

That's a good point, user_event_mm_fault_in() is a wrapper around
fixup_user_fault(). We retry if the fault-in works, so I guess if the user
could somehow cause the write to fail while the fault-in keeps succeeding,
it could keep the loop going for that time period. I cannot think of a way
to achieve this forever, but that doesn't mean it couldn't happen.

I can certainly add an upper bound of retries (maybe 3 or so?) if you
think it would be possible for this to occur. I think we need retries of
some amount to handle spurious faults.

Thanks,
-Beau

> -- Steve
> 
> 
> > +	}
> > +
> > +	return result;
> > +}
> > +
Steven Rostedt April 25, 2023, 5:56 p.m. UTC | #3
On Tue, 25 Apr 2023 10:06:54 -0700
Beau Belgrave <beaub@linux.microsoft.com> wrote:

> That's a good point, user_event_mm_fault_in() is a wrapper around
> fixup_user_fault(). We retry if the fault-in works, so I guess if the user
> could somehow cause the write to fail while the fault-in keeps succeeding,
> it could keep the loop going for that time period. I cannot think of a way
> to achieve this forever, but that doesn't mean it couldn't happen.
> 
> I can certainly add an upper bound of retries (maybe 3 or so?) if you
> think it would be possible for this to occur. I think we need retries of
> some amount to handle spurious faults.

Even 10 is fine. With a comment saying, "This really shouldn't loop more
than a couple of times, but we want to make sure some mischievous user
doesn't take advantage of this looping".

-- Steve

Patch

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index e7dff24aa724..eb195d697177 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -2146,6 +2146,35 @@  static long user_unreg_get(struct user_unreg __user *ureg,
 	return ret;
 }
 
+static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
+				   unsigned long uaddr, unsigned char bit)
+{
+	struct user_event_enabler enabler;
+	int result;
+
+	memset(&enabler, 0, sizeof(enabler));
+	enabler.addr = uaddr;
+	enabler.values = bit;
+retry:
+	/* Prevents state changes from racing with new enablers */
+	mutex_lock(&event_mutex);
+
+	/* Force the bit to be cleared, since no event is attached */
+	mmap_read_lock(user_mm->mm);
+	result = user_event_enabler_write(user_mm, &enabler, false);
+	mmap_read_unlock(user_mm->mm);
+
+	mutex_unlock(&event_mutex);
+
+	if (result) {
+		/* Attempt to fault-in and retry if it worked */
+		if (!user_event_mm_fault_in(user_mm, uaddr))
+			goto retry;
+	}
+
+	return result;
+}
+
 /*
  * Unregisters an enablement address/bit within a task/user mm.
  */
@@ -2190,6 +2219,11 @@  static long user_events_ioctl_unreg(unsigned long uarg)
 
 	mutex_unlock(&event_mutex);
 
+	/* Ensure bit is now cleared for user, regardless of event status */
+	if (!ret)
+		ret = user_event_mm_clear_bit(mm, reg.disable_addr,
+					      reg.disable_bit);
+
 	return ret;
 }
 
diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c
index e0323d3777a7..5125c42efe65 100644
--- a/tools/testing/selftests/user_events/abi_test.c
+++ b/tools/testing/selftests/user_events/abi_test.c
@@ -109,13 +109,16 @@  TEST_F(user, enablement) {
 	ASSERT_EQ(0, change_event(false));
 	ASSERT_EQ(0, self->check);
 
-	/* Should not change after disable */
+	/* Ensure kernel clears bit after disable */
 	ASSERT_EQ(0, change_event(true));
 	ASSERT_EQ(1, self->check);
 	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(0, self->check);
+
+	/* Ensure doesn't change after unreg */
+	ASSERT_EQ(0, change_event(true));
+	ASSERT_EQ(0, self->check);
 	ASSERT_EQ(0, change_event(false));
-	ASSERT_EQ(1, self->check);
-	self->check = 0;
 }
 
 TEST_F(user, bit_sizes) {