
[linux-cifs-client] cifs: fix fh_mutex locking in cifs_reopen_file

Message ID 1246100695-10344-1-git-send-email-jlayton@redhat.com (mailing list archive)
State Accepted

Commit Message

Jeff Layton June 27, 2009, 11:04 a.m. UTC
Fixes a regression caused by commit a6ce4932fbdbcd8f8e8c6df76812014351c32892

When this lock was converted to a mutex, the lock calls were turned into
unlocks and vice versa.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
Cc: Stable Tree <stable@kernel.org>
---
 fs/cifs/file.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)
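
For context, the locking discipline the patch restores is: take fh_mutex once on
entry to cifs_reopen_file() and release it on every exit path. A simplified,
illustrative sketch follows (condensed from the hunks below; reopen_on_server()
is a hypothetical placeholder for the real SMBOpen/posix-open logic, not a
function in fs/cifs/file.c):

/* Illustrative sketch only -- not the actual fs/cifs/file.c code. */
static int cifs_reopen_file(struct file *file, bool can_flush)
{
	struct cifsFileInfo *pCifsFile = file->private_data;
	int rc;
	int xid = GetXid();

	/* Serialize reopen attempts on this file handle. */
	mutex_lock(&pCifsFile->fh_mutex);

	if (!pCifsFile->invalidHandle) {
		/* Handle is already valid; drop the mutex on this
		 * early-return path, just like every other exit. */
		mutex_unlock(&pCifsFile->fh_mutex);
		FreeXid(xid);
		return 0;
	}

	rc = reopen_on_server(file, can_flush);	/* hypothetical placeholder */

	/* Success or failure, the function never returns with fh_mutex held. */
	mutex_unlock(&pCifsFile->fh_mutex);
	FreeXid(xid);
	return rc;
}

With the calls inverted, as in the broken conversion, the entry path tried to
unlock a mutex it did not own and every exit path returned with fh_mutex held.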

Comments

Steve French June 28, 2009, 12:10 a.m. UTC | #1
Good catch.

I have reviewed and merged into cifs-2.6.git. Merge requested for upstream.

On Sat, Jun 27, 2009 at 6:04 AM, Jeff Layton<jlayton@redhat.com> wrote:
> Fixes a regression caused by commit a6ce4932fbdbcd8f8e8c6df76812014351c32892
>
> When this lock was converted to a mutex, the locks were turned into
> unlocks and vice-versa.
>
> Signed-off-by: Jeff Layton <jlayton@redhat.com>
> Cc: Stable Tree <stable@kernel.org>
> ---
>  fs/cifs/file.c |   10 +++++-----
>  1 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/fs/cifs/file.c b/fs/cifs/file.c
> index ebdbe62..97ce4bf 100644
> --- a/fs/cifs/file.c
> +++ b/fs/cifs/file.c
> @@ -493,9 +493,9 @@ static int cifs_reopen_file(struct file *file, bool can_flush)
>                return -EBADF;
>
>        xid = GetXid();
> -       mutex_unlock(&pCifsFile->fh_mutex);
> +       mutex_lock(&pCifsFile->fh_mutex);
>        if (!pCifsFile->invalidHandle) {
> -               mutex_lock(&pCifsFile->fh_mutex);
> +               mutex_unlock(&pCifsFile->fh_mutex);
>                rc = 0;
>                FreeXid(xid);
>                return rc;
> @@ -527,7 +527,7 @@ static int cifs_reopen_file(struct file *file, bool can_flush)
>        if (full_path == NULL) {
>                rc = -ENOMEM;
>  reopen_error_exit:
> -               mutex_lock(&pCifsFile->fh_mutex);
> +               mutex_unlock(&pCifsFile->fh_mutex);
>                FreeXid(xid);
>                return rc;
>        }
> @@ -569,14 +569,14 @@ reopen_error_exit:
>                         cifs_sb->local_nls, cifs_sb->mnt_cifs_flags &
>                                CIFS_MOUNT_MAP_SPECIAL_CHR);
>        if (rc) {
> -               mutex_lock(&pCifsFile->fh_mutex);
> +               mutex_unlock(&pCifsFile->fh_mutex);
>                cFYI(1, ("cifs_open returned 0x%x", rc));
>                cFYI(1, ("oplock: %d", oplock));
>        } else {
>  reopen_success:
>                pCifsFile->netfid = netfid;
>                pCifsFile->invalidHandle = false;
> -               mutex_lock(&pCifsFile->fh_mutex);
> +               mutex_unlock(&pCifsFile->fh_mutex);
>                pCifsInode = CIFS_I(inode);
>                if (pCifsInode) {
>                        if (can_flush) {
> --
> 1.6.0.6
>
>
Jeff Layton June 28, 2009, 1:23 a.m. UTC | #2
On Sat, 2009-06-27 at 19:10 -0500, Steve French wrote:
> Good catch.
> 
> I have reviewed and merged into cifs-2.6.git. Merge requested for upstream.
> 

Great. Did it fix the problems you were seeing when testing with dbench?
Steve French June 28, 2009, 4:43 p.m. UTC | #3
On Sat, Jun 27, 2009 at 8:23 PM, Jeff Layton<jlayton@redhat.com> wrote:
> On Sat, 2009-06-27 at 19:10 -0500, Steve French wrote:
>> Good catch.
>>
>> I have reviewed and merged into cifs-2.6.git. Merge requested for upstream.
>>
> Great. Did it fix the problems you were seeing when testing with dbench?

I doubt that this patch would affect my dbench runs, because I wasn't getting
any network reconnections (they would have logged a warning to dmesg) and the
fix is in reopen_file (which is only invoked on reconnection).

I am updating and building current samba3 on a different machine (the one on
which I more frequently run dbench) to check whether this is an Ubuntu-specific
limit that I had not seen before (I usually run dbench with 50 rather than 60
processes).  The Samba handle limit should be (and was) quite large.

In any case, as long as we are sure we are hitting a Samba server limit (or a
server-side per-process limit), we are ok and can continue to review/merge the
very large inode patches.  I am verifying with one additional pair of temporary
stats (counters of successful opens in the exit paths of cifs_open and
cifs_close) to make sure that those match what I have already verified we are
seeing with smbstatus and the client-side counters of successful posix_opens.
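
(For illustration only: the kind of temporary counters described above might
look roughly like the sketch below. The names cifs_dbg_opens/cifs_dbg_closes
and the helper functions are hypothetical and were never part of a posted
patch.)

#include <asm/atomic.h>
#include <linux/kernel.h>

/* Hypothetical debug-only counters, bumped from the exit paths of
 * cifs_open() and cifs_close() so that successful opens and closes
 * can be compared against smbstatus on the server. */
static atomic_t cifs_dbg_opens = ATOMIC_INIT(0);
static atomic_t cifs_dbg_closes = ATOMIC_INIT(0);

static inline void cifs_dbg_count_open(int rc)
{
	if (rc == 0)
		atomic_inc(&cifs_dbg_opens);
}

static inline void cifs_dbg_count_close(void)
{
	atomic_inc(&cifs_dbg_closes);
	printk(KERN_DEBUG "cifs: opens=%d closes=%d\n",
	       atomic_read(&cifs_dbg_opens),
	       atomic_read(&cifs_dbg_closes));
}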
Steve French June 28, 2009, 7:02 p.m. UTC | #4
On Sun, Jun 28, 2009 at 11:43 AM, Steve French<smfrench@gmail.com> wrote:
> On Sat, Jun 27, 2009 at 8:23 PM, Jeff Layton<jlayton@redhat.com> wrote:
> In any case, as long as we are sure we are hitting a Samba server limit (or a
> server-side per-process limit), we are ok and can continue to review/merge the
> very large inode patches.  I am verifying with one additional pair of temporary
> stats (counters of successful opens in the exit paths of cifs_open and
> cifs_close) to make sure that those match what I have already verified we are
> seeing with smbstatus and the client-side counters of successful posix_opens.

I am puzzled about the Samba 3.4 max files limit (I am seeing it at 1014
opens), and it seems strange that dbench would open so many files, but with
counters in cifs_open and cifs_close I see 1014 more opens than closes (from
the VFS), which matches what I see at the SMB level and in the Samba server.
dbench 4 fails even faster.  This also fails on other distributions (openSUSE,
Ubuntu, etc.), but worked against Samba 3.0.28.  Is it possible that Samba 3.4
changed its max open file limit?
Jeff Layton June 28, 2009, 11:18 p.m. UTC | #5
On Sun, 2009-06-28 at 14:02 -0500, Steve French wrote:
> On Sun, Jun 28, 2009 at 11:43 AM, Steve French<smfrench@gmail.com> wrote:
> > On Sat, Jun 27, 2009 at 8:23 PM, Jeff Layton<jlayton@redhat.com> wrote:
> > In any case, as long as we are sure we are hitting a Samba server limit (or a
> > server-side per-process limit), we are ok and can continue to review/merge the
> > very large inode patches.  I am verifying with one additional pair of temporary
> > stats (counters of successful opens in the exit paths of cifs_open and
> > cifs_close) to make sure that those match what I have already verified we are
> > seeing with smbstatus and the client-side counters of successful posix_opens.
> 
> I am puzzled about the Samba 3.4 max files limit (I am seeing it at 1014
> opens), and it seems strange that dbench would open so many files, but with
> counters in cifs_open and cifs_close I see 1014 more opens than closes (from
> the VFS), which matches what I see at the SMB level and in the Samba server.
> dbench 4 fails even faster.  This also fails on other distributions (openSUSE,
> Ubuntu, etc.), but worked against Samba 3.0.28.  Is it possible that Samba 3.4
> changed its max open file limit?
> 
> 

Doesn't 3.0.28 have broken POSIX open calls? That may account for the
difference. I can't be certain I was seeing the same failures you were seeing
with dbench, but I never got a passing run until I applied that patch to
fix the reopen locking.

Patch

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index ebdbe62..97ce4bf 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -493,9 +493,9 @@  static int cifs_reopen_file(struct file *file, bool can_flush)
 		return -EBADF;
 
 	xid = GetXid();
-	mutex_unlock(&pCifsFile->fh_mutex);
+	mutex_lock(&pCifsFile->fh_mutex);
 	if (!pCifsFile->invalidHandle) {
-		mutex_lock(&pCifsFile->fh_mutex);
+		mutex_unlock(&pCifsFile->fh_mutex);
 		rc = 0;
 		FreeXid(xid);
 		return rc;
@@ -527,7 +527,7 @@  static int cifs_reopen_file(struct file *file, bool can_flush)
 	if (full_path == NULL) {
 		rc = -ENOMEM;
 reopen_error_exit:
-		mutex_lock(&pCifsFile->fh_mutex);
+		mutex_unlock(&pCifsFile->fh_mutex);
 		FreeXid(xid);
 		return rc;
 	}
@@ -569,14 +569,14 @@  reopen_error_exit:
 			 cifs_sb->local_nls, cifs_sb->mnt_cifs_flags &
 				CIFS_MOUNT_MAP_SPECIAL_CHR);
 	if (rc) {
-		mutex_lock(&pCifsFile->fh_mutex);
+		mutex_unlock(&pCifsFile->fh_mutex);
 		cFYI(1, ("cifs_open returned 0x%x", rc));
 		cFYI(1, ("oplock: %d", oplock));
 	} else {
 reopen_success:
 		pCifsFile->netfid = netfid;
 		pCifsFile->invalidHandle = false;
-		mutex_lock(&pCifsFile->fh_mutex);
+		mutex_unlock(&pCifsFile->fh_mutex);
 		pCifsInode = CIFS_I(inode);
 		if (pCifsInode) {
 			if (can_flush) {