Message ID | 20150217151224.2dc31ad8@tlielax.poochiereds.net (mailing list archive)
---|---
State | New, archived |
Headers | show |
On Tue, Feb 17, 2015 at 12:12 PM, Jeff Layton <jlayton@poochiereds.net> wrote:
>
> I agree that there's no deadlock. I also agree that allowing two
> LOCK_EX's (or a LOCK_SH + LOCK_EX) on the file is broken. I'm just
> leery of making a user-visible change at this point. I'd prefer to let
> something like that soak in linux-next for a while.

Umm. The *current* behavior is user-visible. As in "catastrophic bug"
user visible. Not just Kirill's report, but the silent "possibly
exclusive file locks aren't exclusive".

No, we're not delaying fixing this because of concerns about other
user-visible behavior. There is absolutely no way other behavior could
possibly be *less* catastrophic than data corruption (in user space)
due to non-working exclusive locks.

> Another possibility is to keep dropping the spinlock, but check to see
> if someone set a new lock on the same filp in the loop after that.

What would that buy? I agree that replacing the "re-get the spinlock"
with a "goto repeat" to the top (which will re-get the spinlock and
then look for new existing locks) would also fix the problem with
multiple locks, but it's just more code to do complex stuff that
doesn't actually fix anything.

Just removing the known buggy code is actually the simpler and safer fix.

               Linus
From 3212be05d47300fbb5718932f92b33acde3d219c Mon Sep 17 00:00:00 2001
From: Jeff Layton <jeff.layton@primarydata.com>
Date: Tue, 17 Feb 2015 15:08:06 -0500
Subject: [PATCH] locks: ensure that we can't set multiple flock locks for the
 same filp

Currently, we'll drop the spinlock in the middle of flock_lock_file in
the event that we found a lock that needed to be removed prior to an
upgrade or downgrade. It's possible however for another task to race in
and set a lock on the same filp. If that happens, then we don't want to
set an additional lock, so just remove the one that raced in and set
our own.

Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
---
 fs/locks.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index fe8f9f46445b..099b60a46ccc 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -864,7 +864,7 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
 static int flock_lock_file(struct file *filp, struct file_lock *request)
 {
 	struct file_lock *new_fl = NULL;
-	struct file_lock *fl;
+	struct file_lock *fl, *tmp;
 	struct file_lock_context *ctx;
 	struct inode *inode = file_inode(filp);
 	int error = 0;
@@ -912,7 +912,12 @@ static int flock_lock_file(struct file *filp, struct file_lock *request)
 	}

 find_conflict:
-	list_for_each_entry(fl, &ctx->flc_flock, fl_list) {
+	list_for_each_entry_safe(fl, tmp, &ctx->flc_flock, fl_list) {
+		/* did someone set a lock on the same filp? */
+		if (fl->fl_file == filp) {
+			locks_delete_lock_ctx(fl, &dispose);
+			continue;
+		}
 		if (!flock_locks_conflict(request, fl))
 			continue;
 		error = -EAGAIN;
--
2.1.0