[RESEND] vfs: serialize updates to file->f_sb_err with f_lock

Message ID 20210104184347.90598-1-jlayton@kernel.org (mailing list archive)
State New, archived

Commit Message

Jeff Layton Jan. 4, 2021, 6:43 p.m. UTC
When I added the ability for syncfs to report writeback errors, I
neglected to adequately protect file->f_sb_err. While changes to
sb->s_wb_err don't require locking, we do need to protect the errseq_t
cursor in file->f_sb_err.

We could have racing updates to that value if two tasks issue syncfs()
on the same fd concurrently, possibly causing an error to be reported
twice or not at all.
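
For context, errseq_check_and_advance() advances the cursor with a
plain store: the sb->s_wb_err side is updated atomically (cmpxchg),
but the write back to *since is not. A simplified paraphrase of the
lib/errseq.c logic (a sketch, not the exact kernel source):

	int errseq_check_and_advance(errseq_t *eseq, errseq_t *since)
	{
		int err = 0;
		errseq_t old = READ_ONCE(*eseq);

		/* two unserialized callers can both see a stale cursor... */
		if (old != *since) {
			errseq_t new = old | ERRSEQ_SEEN;

			if (new != old)
				cmpxchg(eseq, old, new);
			/* ...and both race on this plain store to the cursor */
			*since = new;
			err = -(new & MAX_ERRNO);
		}
		return err;
	}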

Fix this by protecting the f_sb_err field with the file->f_lock.

Fixes: 735e4ae5ba28 ("vfs: track per-sb writeback errors and report them to syncfs")
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 fs/sync.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Al, could you pick up this patch for v5.11 or v5.12? I sent the original
version about a month ago, but it didn't get picked up.

In the original posting I marked this for stable, but I'm not sure it
really qualifies since it's a pretty unlikely race with an oddball
use-case (overlapping syncfs() calls on the same fd).

Comments

Al Viro Jan. 4, 2021, 6:57 p.m. UTC | #1
On Mon, Jan 04, 2021 at 01:43:47PM -0500, Jeff Layton wrote:
> @@ -172,7 +172,12 @@ SYSCALL_DEFINE1(syncfs, int, fd)
>  	ret = sync_filesystem(sb);
>  	up_read(&sb->s_umount);
>  
> -	ret2 = errseq_check_and_advance(&sb->s_wb_err, &f.file->f_sb_err);
> +	if (errseq_check(&sb->s_wb_err, f.file->f_sb_err)) {
> +		/* Something changed, must use slow path */
> +		spin_lock(&f.file->f_lock);
> +		ret2 = errseq_check_and_advance(&sb->s_wb_err, &f.file->f_sb_err);
> +		spin_unlock(&f.file->f_lock);
> +	}

	Is there any point bothering with the fastpath here?
I mean, look at the up_read() immediately prior to that thing...
Jeff Layton Jan. 4, 2021, 7 p.m. UTC | #2
On Mon, 2021-01-04 at 18:57 +0000, Al Viro wrote:
> On Mon, Jan 04, 2021 at 01:43:47PM -0500, Jeff Layton wrote:
> > @@ -172,7 +172,12 @@ SYSCALL_DEFINE1(syncfs, int, fd)
> >  	ret = sync_filesystem(sb);
> >  	up_read(&sb->s_umount);
> >  
> > -	ret2 = errseq_check_and_advance(&sb->s_wb_err, &f.file->f_sb_err);
> > +	if (errseq_check(&sb->s_wb_err, f.file->f_sb_err)) {
> > +		/* Something changed, must use slow path */
> > +		spin_lock(&f.file->f_lock);
> > +		ret2 = errseq_check_and_advance(&sb->s_wb_err, &f.file->f_sb_err);
> > +		spin_unlock(&f.file->f_lock);
> > +	}
> 
> 	Is there any point bothering with the fastpath here?
> I mean, look at the up_read() immediately prior to that thing...

It is a micro-optimization, but in the vast majority of cases we'll
avoid taking the spinlock there. That said, I'm fine with dropping the
fastpath if you prefer.
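
For reference, dropping the fastpath would just mean taking f_lock
unconditionally before advancing the cursor, roughly:

	spin_lock(&f.file->f_lock);
	ret2 = errseq_check_and_advance(&sb->s_wb_err, &f.file->f_sb_err);
	spin_unlock(&f.file->f_lock);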

Patch

diff --git a/fs/sync.c b/fs/sync.c
index 1373a610dc78..3be26ff72431 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -162,7 +162,7 @@ SYSCALL_DEFINE1(syncfs, int, fd)
 {
 	struct fd f = fdget(fd);
 	struct super_block *sb;
-	int ret, ret2;
+	int ret, ret2 = 0;
 
 	if (!f.file)
 		return -EBADF;
@@ -172,7 +172,12 @@ SYSCALL_DEFINE1(syncfs, int, fd)
 	ret = sync_filesystem(sb);
 	up_read(&sb->s_umount);
 
-	ret2 = errseq_check_and_advance(&sb->s_wb_err, &f.file->f_sb_err);
+	if (errseq_check(&sb->s_wb_err, f.file->f_sb_err)) {
+		/* Something changed, must use slow path */
+		spin_lock(&f.file->f_lock);
+		ret2 = errseq_check_and_advance(&sb->s_wb_err, &f.file->f_sb_err);
+		spin_unlock(&f.file->f_lock);
+	}
 
 	fdput(f);
 	return ret ? ret : ret2;