
[v2,02/10] locks: have locks_release_file use flock_lock_file to release generic flock locks

Message ID 1420742065-28423-3-git-send-email-jlayton@primarydata.com (mailing list archive)
State New, archived

Commit Message

Jeff Layton Jan. 8, 2015, 6:34 p.m. UTC
...instead of open-coding it and removing flock locks directly. This
simplifies some coming interim changes in the following patches when
we have different file_lock types protected by different spinlocks.

Signed-off-by: Jeff Layton <jlayton@primarydata.com>
---
 fs/locks.c | 49 +++++++++++++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 18 deletions(-)
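
For context, the path changed below runs on the last close of a file that
still holds an flock lock, e.g. when a process exits without an explicit
LOCK_UN.  A minimal userspace illustration of that situation (not part of
the patch; the path name is just a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/tmp/flock-demo", O_CREAT | O_RDWR, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (flock(fd, LOCK_EX) < 0) {	/* take an exclusive flock lock */
		perror("flock");
		return 1;
	}
	/*
	 * No LOCK_UN here: the final close() (or process exit) is what
	 * drops the lock, which is when locks_remove_file() runs in the
	 * kernel.
	 */
	close(fd);
	return 0;
}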

Comments

Christoph Hellwig Jan. 9, 2015, 2:27 p.m. UTC | #1
On Thu, Jan 08, 2015 at 10:34:17AM -0800, Jeff Layton wrote:
> ...instead of open-coding it and removing flock locks directly. This
> simplifies some coming interim changes in the following patches when
> we have different file_lock types protected by different spinlocks.

It took me quite a while to figure out what's going on here, as this
adds a call to flock_lock_file, but it still keeps the old open coded
loop around, just with a slightly different WARN_ON.

I'd suggest keeping an open coded loop in locks_remove_flock, which
should both be more efficient and easier to review.
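
[For illustration only: an open-coded locks_remove_flock() along the lines
being suggested might look roughly like the pre-patch loop, restricted to
flock locks.  This is a sketch, not the actual proposal; helper names
(IS_FLOCK, locks_delete_lock, locks_dispose_list) are taken from fs/locks.c
as it stood around this series, and the filp->f_op->flock case would still
need the F_UNLCK request used in the patch.]

static void locks_remove_flock(struct file *filp)
{
	struct inode *inode = file_inode(filp);
	struct file_lock *fl, **before;
	LIST_HEAD(dispose);

	if (!inode->i_flock)
		return;

	spin_lock(&inode->i_lock);
	before = &inode->i_flock;
	while ((fl = *before) != NULL) {
		if (IS_FLOCK(fl) && fl->fl_file == filp) {
			/* unlinks *before, so re-read it on the next pass */
			locks_delete_lock(before, &dispose);
			continue;
		}
		before = &fl->fl_next;
	}
	spin_unlock(&inode->i_lock);
	locks_dispose_list(&dispose);
}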

Jeff Layton Jan. 9, 2015, 2:42 p.m. UTC | #2
On Fri, 9 Jan 2015 06:27:23 -0800
Christoph Hellwig <hch@infradead.org> wrote:

> On Thu, Jan 08, 2015 at 10:34:17AM -0800, Jeff Layton wrote:
> > ...instead of open-coding it and removing flock locks directly. This
> > simplifies some coming interim changes in the following patches when
> > we have different file_lock types protected by different spinlocks.
> 
> It took me quite a while to figure out what's going on here, as this
> adds a call to flock_lock_file, but it still keeps the old open coded
> loop around, just with a slightly different WARN_ON.
> 

Right. Eventually that open-coded loop (and the WARN_ON) goes away once
leases get moved into i_flctx in a later patch.

FWIW, there is some messiness involved in this patchset in the interim
stages due to the need to keep things bisectable. Once all of the
patches are applied, the result looks a lot cleaner.

> I'd suggest keeping an open coded loop in locks_remove_flock, which
> should both be more efficient and easier to review.
> 

I don't know. On the one hand, I rather like keeping all of the lock
removal logic in a single spot. On the other hand, we do take and drop
the i_lock/flc_lock more than once with this scheme if there are both
flock locks and leases present at the time of the close. Open coding
the loops would allow us to do that just once.

I'll ponder it a bit more for the next iteration...

Thanks,
Christoph Hellwig Jan. 9, 2015, 2:46 p.m. UTC | #3
On Fri, Jan 09, 2015 at 06:42:57AM -0800, Jeff Layton wrote:
> > I'd suggest keeping an open coded loop in locks_remove_flock, which
> > should both be more efficient and easier to review.
> > 
> 
> I don't know. On the one hand, I rather like keeping all of the lock
> removal logic in a single spot. On the other hand, we do take and drop
> the i_lock/flc_lock more than once with this scheme if there are both
> flock locks and leases present at the time of the close. Open coding
> the loops would allow us to do that just once.
> 
> I'll ponder it a bit more for the next iteration...

FYI, I like the split into locks_remove_flock; it's just that
flock_lock_file is a giant mess.  If it helps you, feel free to keep
it as-is for now and just document what you did in the changelog in
detail.

Patch

diff --git a/fs/locks.c b/fs/locks.c
index fba8bc7af6d3..cec00425bfb6 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2360,6 +2360,30 @@  void locks_remove_posix(struct file *filp, fl_owner_t owner)
 
 EXPORT_SYMBOL(locks_remove_posix);
 
+static void
+locks_remove_flock(struct file *filp)
+{
+	struct file_lock fl = {
+		.fl_owner = filp,
+		.fl_pid = current->tgid,
+		.fl_file = filp,
+		.fl_flags = FL_FLOCK,
+		.fl_type = F_UNLCK,
+		.fl_end = OFFSET_MAX,
+	};
+
+	if (!file_inode(filp)->i_flock)
+		return;
+
+	if (filp->f_op->flock)
+		filp->f_op->flock(filp, F_SETLKW, &fl);
+	else
+		flock_lock_file(filp, &fl);
+
+	if (fl.fl_ops && fl.fl_ops->fl_release_private)
+		fl.fl_ops->fl_release_private(&fl);
+}
+
 /*
  * This function is called on the last close of an open file.
  */
@@ -2370,24 +2394,14 @@  void locks_remove_file(struct file *filp)
 	struct file_lock **before;
 	LIST_HEAD(dispose);
 
-	if (!inode->i_flock)
-		return;
-
+	/* remove any OFD locks */
 	locks_remove_posix(filp, filp);
 
-	if (filp->f_op->flock) {
-		struct file_lock fl = {
-			.fl_owner = filp,
-			.fl_pid = current->tgid,
-			.fl_file = filp,
-			.fl_flags = FL_FLOCK,
-			.fl_type = F_UNLCK,
-			.fl_end = OFFSET_MAX,
-		};
-		filp->f_op->flock(filp, F_SETLKW, &fl);
-		if (fl.fl_ops && fl.fl_ops->fl_release_private)
-			fl.fl_ops->fl_release_private(&fl);
-	}
+	/* remove flock locks */
+	locks_remove_flock(filp);
+
+	if (!inode->i_flock)
+		return;
 
 	spin_lock(&inode->i_lock);
 	before = &inode->i_flock;
@@ -2407,8 +2421,7 @@  void locks_remove_file(struct file *filp)
 			 * some info about it and then just remove it from
 			 * the list.
 			 */
-			WARN(!IS_FLOCK(fl),
-				"leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
+			WARN(1, "leftover lock: dev=%u:%u ino=%lu type=%hhd flags=0x%x start=%lld end=%lld\n",
 				MAJOR(inode->i_sb->s_dev),
 				MINOR(inode->i_sb->s_dev), inode->i_ino,
 				fl->fl_type, fl->fl_flags,