
poll: Ignore benign race on pwq->triggered

Message ID 20210429034411.6379-1-baptiste.lepers@gmail.com (mailing list archive)
State New, archived
Series poll: Ignore benign race on pwq->triggered

Commit Message

Baptiste Lepers April 29, 2021, 3:44 a.m. UTC
Mark data races on pwq->triggered as benign using READ_ONCE() and
WRITE_ONCE(). These accesses are expected to be racy, as the existing
comment notes.

This patch is part of a series aiming to reduce the number of benign
races reported by KCSAN, so that future debugging effort can focus on
harmful races.

Reported-by: syzbot+9b3fb64bcc8c1d807595@syzkaller.appspotmail.com
Fixes: 5f820f648c92a ("poll: allow f_op->poll to sleep")
Signed-off-by: Baptiste Lepers <baptiste.lepers@gmail.com>
---
 fs/select.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Marco Elver May 3, 2021, 6:46 p.m. UTC | #1
Thanks for the patch -- any progress on this front is much appreciated.

However, the subject is not really doing this patch justice. We're not
ignoring the race -- and if by "ignore" you mean let KCSAN ignore the
race, then it's more like saying "we'll make the sanitizer not report
this problem anymore... by fixing the problem".

I'd suggest, similar to wording below:

	"poll: mark racy accesses on pwq->triggered"

Data races are real, and even though at a high level (where we pretend
all accesses are atomic) these races are safe and intentional, it is not
at all clear to the reader why the data race would be "benign" -- that
is, whether the compiler could optimize or miscompile the code in such a
way that we end up with behavior the programmer never intended.

In this case, we can probably argue that the data race would be safe
(benign), given there is a simple 0->1 transition on triggered and the
reader only does a compare-against-0.
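
To make that argument concrete, here is a minimal stand-alone sketch of
the pattern (illustrative names only, not the fs/select.c code itself;
the actual wakeup through the waitqueue is omitted):

#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */
#include <linux/sched.h>	/* set_current_state(), schedule() */
#include <asm/barrier.h>	/* smp_wmb() */

static int triggered;		/* only ever transitions 0 -> 1 */

/* Wakeup side: make prior stores visible, then flip the flag. */
static void waker(void)
{
	smp_wmb();			/* order data writes before the flag */
	WRITE_ONCE(triggered, 1);	/* marked store, 0 -> 1 */
}

/* Sleep side: compare-against-0, sleep only if still clear. */
static void sleeper(void)
{
	set_current_state(TASK_INTERRUPTIBLE);
	if (!READ_ONCE(triggered))	/* marked load */
		schedule();
	__set_current_state(TASK_RUNNING);
}

With only 0 or 1 ever stored, a racy read can at worst observe a stale 0
and fall through to schedule(); in the real code the wake-up issued
right after the store covers that case, which is why the race is
arguably benign. The marking simply documents the concurrency and keeps
the compiler from transforming the accesses in surprising ways.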

Nevertheless, a READ_ONCE()/WRITE_ONCE() pair is preferred if there are
no objections: the generated code is almost always the same, and it
saves us from reasoning about another use of data_race().

Paul recently summarized some of these strategies:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/memory-model/Documentation/access-marking.txt
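
Roughly, the two strategies from that document look like this (a sketch
only; `triggered` is a local stand-in here, not the real pwq member):

#include <linux/compiler.h>	/* READ_ONCE(), data_race() */

static int triggered;

/*
 * Strategy 1: mark the access. This documents the intentional
 * concurrency for both KCSAN and the compiler (no tearing, fusing
 * or re-loading of the load).
 */
static int triggered_marked(void)
{
	return READ_ONCE(triggered) != 0;
}

/*
 * Strategy 2: keep the plain access but wrap it in data_race().
 * KCSAN stops reporting it, yet the compiler is not constrained,
 * so the "benign" argument has to hold under any legal transformation.
 */
static int triggered_plain(void)
{
	return data_race(triggered) != 0;
}

Since the generated code is the same either way here, the marked form is
simply the easier one to justify.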

On Thu, Apr 29, 2021 at 01:44PM +1000, Baptiste Lepers wrote:
> Mark data races on pwq->triggered as benign using READ_ONCE() and
> WRITE_ONCE(). These accesses are expected to be racy, as the existing
> comment notes.

This sounds fine, given there's already a comment.

> This patch is part of a series aiming to reduce the number of benign
> races reported by KCSAN, so that future debugging effort can focus on
> harmful races.

Looking forward to the rest.

> Reported-by: syzbot+9b3fb64bcc8c1d807595@syzkaller.appspotmail.com
> Fixes: 5f820f648c92a ("poll: allow f_op->poll to sleep")
> Signed-off-by: Baptiste Lepers <baptiste.lepers@gmail.com>

Acked-by: Marco Elver <elver@google.com>

> ---
>  fs/select.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/select.c b/fs/select.c
> index 945896d0ac9e..e71b4d1a2606 100644
> --- a/fs/select.c
> +++ b/fs/select.c
> @@ -194,7 +194,7 @@ static int __pollwake(wait_queue_entry_t *wait, unsigned mode, int sync, void *k
>  	 * and is paired with smp_store_mb() in poll_schedule_timeout.
>  	 */
>  	smp_wmb();
> -	pwq->triggered = 1;
> +	WRITE_ONCE(pwq->triggered, 1);
>  
>  	/*
>  	 * Perform the default wake up operation using a dummy
> @@ -239,7 +239,7 @@ static int poll_schedule_timeout(struct poll_wqueues *pwq, int state,
>  	int rc = -EINTR;
>  
>  	set_current_state(state);
> -	if (!pwq->triggered)
> +	if (!READ_ONCE(pwq->triggered))
>  		rc = schedule_hrtimeout_range(expires, slack, HRTIMER_MODE_ABS);
>  	__set_current_state(TASK_RUNNING);
>  
> -- 
> 2.17.1

Patch

diff --git a/fs/select.c b/fs/select.c
index 945896d0ac9e..e71b4d1a2606 100644
--- a/fs/select.c
+++ b/fs/select.c
@@ -194,7 +194,7 @@ static int __pollwake(wait_queue_entry_t *wait, unsigned mode, int sync, void *k
 	 * and is paired with smp_store_mb() in poll_schedule_timeout.
 	 */
 	smp_wmb();
-	pwq->triggered = 1;
+	WRITE_ONCE(pwq->triggered, 1);
 
 	/*
 	 * Perform the default wake up operation using a dummy
@@ -239,7 +239,7 @@ static int poll_schedule_timeout(struct poll_wqueues *pwq, int state,
 	int rc = -EINTR;
 
 	set_current_state(state);
-	if (!pwq->triggered)
+	if (!READ_ONCE(pwq->triggered))
 		rc = schedule_hrtimeout_range(expires, slack, HRTIMER_MODE_ABS);
 	__set_current_state(TASK_RUNNING);