From: David Wysochanski
Date: Mon, 21 Oct 2019 13:22:27 -0400
Subject: Easily reproducible deadlock on 5.4-rc4 with cifsInodeInfo.lock_sem semaphore after network disruption or smb restart
To: linux-cifs

Hi all,

We recently found another problem when running a simple test case where multiple processes open/read/close a single file, and a network disruption or smb restart occurs.
After just a few minutes of restarts, a few of the processes error out, but a few remain.  If you stop the disruption, the share is accessible again, but the original processes stay blocked indefinitely waiting on lock_sem.

Once this occurs, you'll see two threads deadlocked like the below.  Basically:

thread1: down_read(lock_sem) inside cifs_strict_readv(); does not block on the sem, but then stalls on the network while holding it
thread2: down_write(lock_sem) inside _cifsFileInfo_put; blocks due to thread1
thread1: down_read(lock_sem) inside cifs_reopen_file (after reconnect); wakes up to reconnect, but then blocks due to thread2

At this point there is a deadlock: both threads are waiting for each other to release lock_sem.

The root of the problem seems to be having to hold the read sem over a network disruption, with no check when the session comes back before taking down_read a second time.  Inside cifs_strict_readv there's a comment that explains why we need to hold the sem over the read: it's so no one modifies the lock list with a brlock that prevents reading.  However, in this case at least, we're only opening files for reading, and inside _cifsFileInfo_put we need to down_write to delete the outstanding lock records.  So we're not really hitting that case, but we are modifying the list.

One approach to avoid the deadlock is for the reading process to drop lock_sem before re-acquiring it inside cifs_reopen_file.  We would drop the lock just before the call, and inside we'd then need to check for the condition of a conflicting brlock and possibly error out somehow.  But that is probably not preferred.  Instead, can the reader only down_read one time?  cifs_relock_file is only called from one place (cifs_reopen_file), and only when we're reconnecting, but cifs_reopen_file is called from many places.  I'm not sure if there is something already in existence we can use that says we don't need to re-acquire it, or if we need to add a flag or something (a rough sketch of the flag idea follows the debug patch at the end of this mail).
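To make the interleaving concrete, below is a minimal userspace sketch of the same pattern (illustration only, not the cifs code; the writer-preferring rwlock attribute stands in for the rwsem's writer fairness, and the sleeps stand in for the network disruption):

/* deadlock-demo.c - userspace analogue of the lock_sem deadlock.
 * Illustration only; "lock_sem", reader() and writer() mimic the
 * kernel actors, they are not the actual cifs code.
 * Build: gcc -pthread -o deadlock-demo deadlock-demo.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t lock_sem;

static void *reader(void *arg)          /* plays cifs_strict_readv */
{
        (void)arg;
        pthread_rwlock_rdlock(&lock_sem);   /* T+0: down_read, succeeds */
        printf("reader: holding lock_sem; 'network' stalls\n");
        sleep(2);                           /* read hangs; writer queues up */
        printf("reader: 'reconnect', second rdlock\n");
        pthread_rwlock_rdlock(&lock_sem);   /* T+2: queued behind writer -> deadlock */
        printf("reader: unreachable\n");
        return NULL;
}

static void *writer(void *arg)          /* plays _cifsFileInfo_put */
{
        (void)arg;
        sleep(1);                           /* arrive while reader holds the sem */
        printf("writer: wrlock(lock_sem)\n");
        pthread_rwlock_wrlock(&lock_sem);   /* T+1: blocks on the reader */
        printf("writer: unreachable\n");
        return NULL;
}

int main(void)
{
        pthread_rwlockattr_t attr;
        pthread_t r, w;

        pthread_rwlockattr_init(&attr);
        /* Writer-preferring, like a kernel rwsem: a re-acquiring reader
         * waits behind a queued writer instead of jumping the queue. */
        pthread_rwlockattr_setkind_np(&attr,
                        PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
        pthread_rwlock_init(&lock_sem, &attr);

        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(r, NULL);              /* never returns */
        pthread_join(w, NULL);
        return 0;
}

The key property is that the second rdlock queues behind the waiting writer instead of being granted recursively, which matches the kernel rwsem behavior; both threads hang exactly like the two pids in the crash session below.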
Here is the live crash session:

crash> ps -m | grep UN
[0 01:54:10.166] [UN]  PID: 13271  TASK: ffff93b8a7349e40  CPU: 2  COMMAND: "python3-openloo"
[0 01:54:10.167] [UN]  PID: 13273  TASK: ffff93b8b3310000  CPU: 2  COMMAND: "python3-openloo"
[0 01:54:10.169] [UN]  PID: 13269  TASK: ffff93b8a734dac0  CPU: 0  COMMAND: "python3-openloo"
[0 01:54:10.208] [UN]  PID: 13272  TASK: ffff93b8a7348000  CPU: 7  COMMAND: "python3-openloo"
[0 01:54:10.216] [UN]  PID: 13265  TASK: ffff93b8a7358000  CPU: 6  COMMAND: "python3-openloo"
[0 01:54:10.172] [UN]  PID: 13263  TASK: ffff93b8b3313c80  CPU: 2  COMMAND: "python3-openloo"
[0 01:54:10.229] [UN]  PID: 13267  TASK: ffff93b8a7359e40  CPU: 5  COMMAND: "python3-openloo"
[0 01:54:10.234] [UN]  PID: 13277  TASK: ffff93b8a7dddac0  CPU: 3  COMMAND: "python3-openloo"
[0 01:54:10.235] [UN]  PID: 13270  TASK: ffff93b8a734bc80  CPU: 1  COMMAND: "python3-openloo"
[0 01:54:10.244] [UN]  PID: 13276  TASK: ffff93b8a7dd8000  CPU: 6  COMMAND: "python3-openloo"
[0 01:54:10.260] [UN]  PID: 13264  TASK: ffff93b8b3315ac0  CPU: 6  COMMAND: "python3-openloo"
[0 01:54:10.257] [UN]  PID: 13274  TASK: ffff93b8a7ddbc80  CPU: 1  COMMAND: "python3-openloo"
[0 01:54:31.262] [UN]  PID: 13279  TASK: ffff93b8a7d90000  CPU: 1  COMMAND: "python3-openloo"  <--- block same instant
[0 01:54:31.265] [UN]  PID: 13275  TASK: ffff93b8a7dd9e40  CPU: 4  COMMAND: "python3-openloo"  <--- block same instant

crash> bt 13279
PID: 13279  TASK: ffff93b8a7d90000  CPU: 1  COMMAND: "python3-openloo"
 #0 [ffffaae1c0a77c20] __schedule at ffffffffa19d4239
 #1 [ffffaae1c0a77cb0] schedule at ffffffffa19d46b9
 #2 [ffffaae1c0a77cc8] rwsem_down_write_slowpath at ffffffffa1131ada <------------- T+1: pid 13279 down_write(... lock_sem); blocks due to pid 13275
 #3 [ffffaae1c0a77d98] _cifsFileInfo_put at ffffffffc0759818 [cifs]
 #4 [ffffaae1c0a77e78] cifs_close at ffffffffc0759b4f [cifs]
 #5 [ffffaae1c0a77e88] __fput at ffffffffa12f6e51
 #6 [ffffaae1c0a77ed0] task_work_run at ffffffffa10fc827
 #7 [ffffaae1c0a77f08] exit_to_usermode_loop at ffffffffa1003f83
 #8 [ffffaae1c0a77f38] do_syscall_64 at ffffffffa10043c2
 #9 [ffffaae1c0a77f50] entry_SYSCALL_64_after_hwframe at ffffffffa1a0008c
    RIP: 00007fc6caca1778  RSP: 00007ffed0c43de8  RFLAGS: 00000246
    RAX: 0000000000000000  RBX: 00007fc6bd5a5f30  RCX: 00007fc6caca1778
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000000000003
    RBP: 0000558ebb9ca0d0   R8: 00007fc6cab941a8   R9: 0000000000000000
    R10: 0000558ebb9ca0d0  R11: 0000000000000246  R12: 0000000000000000
    R13: 0000000000000000  R14: 00007fc6cab4df0c  R15: 00007ffed0c43f10
    ORIG_RAX: 0000000000000003  CS: 0033  SS: 002b

crash> bt 13275
PID: 13275  TASK: ffff93b8a7dd9e40  CPU: 4  COMMAND: "python3-openloo"
 #0 [ffffaae1c0a57a00] __schedule at ffffffffa19d4239
 #1 [ffffaae1c0a57a90] schedule at ffffffffa19d46b9
 #2 [ffffaae1c0a57aa8] rwsem_down_read_slowpath at ffffffffa19d72b4
 #3 [ffffaae1c0a57b60] cifs_reopen_file at ffffffffc0755352 [cifs] <------------- T+2: pid 13275 down_read(... lock_sem); blocks due to pid 13279
 #4 [ffffaae1c0a57c08] cifs_readpage_worker at ffffffffc07575f0 [cifs]
 #5 [ffffaae1c0a57cd0] cifs_readpage at ffffffffc0758096 [cifs]
 #6 [ffffaae1c0a57d08] generic_file_read_iter at ffffffffa123acd2
 #7 [ffffaae1c0a57e10] cifs_strict_readv at ffffffffc07606a5 [cifs] <------------- T+0: pid 13275 down_read(... lock_sem); does not block
 #8 [ffffaae1c0a57e40] new_sync_read at ffffffffa12f336a
 #9 [ffffaae1c0a57ec8] vfs_read at ffffffffa12f5fad
#10 [ffffaae1c0a57f00] ksys_read at ffffffffa12f62df
#11 [ffffaae1c0a57f38] do_syscall_64 at ffffffffa10042bb
#12 [ffffaae1c0a57f50] entry_SYSCALL_64_after_hwframe at ffffffffa1a0008c
    RIP: 00007f6ed0b7c055  RSP: 00007ffeea637e68  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 00007f6ed05cd600  RCX: 00007f6ed0b7c055
    RDX: 000000000000001e  RSI: 00007f6ec3404c10  RDI: 0000000000000003
    RBP: 000000000000001e   R8: 00007f6ed0a6f1a8   R9: 0000000000000000
    R10: 00007f6ed090faf0  R11: 0000000000000246  R12: 00007f6ec3480f40
    R13: 0000000000000003  R14: 00007f6ec3404c10  R15: 000055c449ea60d0
    ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b

Here is my tracing from the attached debug patch which shows the conditions, and points at the bugzilla we have open:

[  233.588248] CIFS VFS: file=fs/cifs/file.c line=480 pid=13279 cifs_sem_op op=up_write sem=ffff93b8a7424bd8
[  233.589462] CIFS VFS: file=fs/cifs/file.c line=311 pid=13279 cifs_sem_op op=down_write sem=ffff93b8a7424bd8
[  233.592333] CIFS VFS: file=fs/cifs/file.c line=314 pid=13275 cifs_sem_op op=up_write sem=ffff93b8a7424bd8
[  233.612324] CIFS VFS: file=fs/cifs/file.c line=314 pid=13279 cifs_sem_op op=up_write sem=ffff93b8a7424bd8
[  233.612510] CIFS VFS: file=fs/cifs/file.c line=3920 pid=13275 cifs_sem_op op=down_read sem=ffff93b8a7424bd8
[  233.616487] CIFS VFS: file=fs/cifs/file.c line=3920 pid=13279 cifs_sem_op op=down_read sem=ffff93b8a7424bd8
T0: https://bugzilla.redhat.com/show_bug.cgi?id=1760835#c9
[  254.389458] CIFS VFS: Send error in read = -9
[  254.391943] CIFS VFS: file=fs/cifs/file.c line=3926 pid=13279 cifs_sem_op op=up_read sem=ffff93b8a7424bd8
[  254.397453] CIFS VFS: file=fs/cifs/file.c line=471 pid=13279 cifs_sem_op op=down_write sem=ffff93b8a7424bd8
T1: https://bugzilla.redhat.com/show_bug.cgi?id=1760835#c9
[  254.399798] CIFS VFS: file=fs/cifs/file.c line=630 pid=13275 cifs_sem_op op=down_read sem=ffff93b8a7424bd8
T2: https://bugzilla.redhat.com/show_bug.cgi?id=1760835#c9

There was a patch that removed a lockdep warning, but it looks like this removed a semi-valid warning, because clearly we can deadlock here:

commit 560d388950ceda5e7c7cdef7f3d9a8ff297bbf9d
Author: Rabin Vincent
Date:   Wed May 3 17:17:21 2017 +0200

    CIFS: silence lockdep splat in cifs_relock_file()

    cifs_relock_file() can perform a down_write() on the inode's lock_sem
    even though it was already performed in cifs_strict_readv(). Lockdep
    complains about this. AFAICS, there is no problem here, and lockdep
    just needs to be told that this nesting is OK.

    =============================================
    [ INFO: possible recursive locking detected ]
    4.11.0+ #20 Not tainted
    ---------------------------------------------
    cat/701 is trying to acquire lock:
     (&cifsi->lock_sem){++++.+}, at: cifs_reopen_file+0x7a7/0xc00

    but task is already holding lock:
     (&cifsi->lock_sem){++++.+}, at: cifs_strict_readv+0x177/0x310

    other info that might help us debug this:
     Possible unsafe locking scenario:

           CPU0
           ----
      lock(&cifsi->lock_sem);
      lock(&cifsi->lock_sem);

     *** DEADLOCK ***

     May be due to missing lock nesting notation

    1 lock held by cat/701:
     #0:  (&cifsi->lock_sem){++++.+}, at: cifs_strict_readv+0x177/0x310

    stack backtrace:
    CPU: 0 PID: 701 Comm: cat Not tainted 4.11.0+ #20
    Call Trace:
     dump_stack+0x85/0xc2
     __lock_acquire+0x17dd/0x2260
     ? trace_hardirqs_on_thunk+0x1a/0x1c
     ? preempt_schedule_irq+0x6b/0x80
     lock_acquire+0xcc/0x260
     ? lock_acquire+0xcc/0x260
     ? cifs_reopen_file+0x7a7/0xc00
     down_read+0x2d/0x70
     ? cifs_reopen_file+0x7a7/0xc00
     cifs_reopen_file+0x7a7/0xc00
     ? printk+0x43/0x4b
     cifs_readpage_worker+0x327/0x8a0
     cifs_readpage+0x8c/0x2a0
     generic_file_read_iter+0x692/0xd00
     cifs_strict_readv+0x29f/0x310
     generic_file_splice_read+0x11c/0x1c0
     do_splice_to+0xa5/0xc0
     splice_direct_to_actor+0xfa/0x350
     ? generic_pipe_buf_nosteal+0x10/0x10
     do_splice_direct+0xb5/0xe0
     do_sendfile+0x278/0x3a0
     SyS_sendfile64+0xc4/0xe0
     entry_SYSCALL_64_fastpath+0x1f/0xbe

    Signed-off-by: Rabin Vincent
    Acked-by: Pavel Shilovsky
    Signed-off-by: Steve French

From 68fe8004456298070247fd23c3af91b332d98d35 Mon Sep 17 00:00:00 2001
From: Dave Wysochanski
Date: Mon, 21 Oct 2019 07:28:04 -0400
Subject: [PATCH] debug cifs lock_sem deadlock - redhat bug 1760835

debug of https://bugzilla.redhat.com/show_bug.cgi?id=1760835
cifs_dbg trace of lock_sem
turn off ratelimiting
---
 Makefile            |  2 +-
 fs/cifs/cifsfs.c    |  1 +
 fs/cifs/cifsproto.h | 34 ++++++++++++++++++++++++++++++++++
 fs/cifs/file.c      | 32 ++++++++++++++++++++++++++++++++
 fs/cifs/smb2file.c  |  2 ++
 lib/ratelimit.c     |  3 +--
 6 files changed, 71 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index 5475cdb6d57d..47b0d42f0418 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 4
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc4-dbg3
 NAME = Nesting Opossum
 
 # *DOCUMENTATION*
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index c049c7b3aa87..431a9d46b409 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1335,6 +1335,7 @@ static ssize_t cifs_copy_file_range(struct file *src_file, loff_t off,
        struct cifsInodeInfo *cifsi = inode;
 
        inode_init_once(&cifsi->vfs_inode);
+       cifs_sem_op_trace_init_rwsem(&cifsi->lock_sem);
        init_rwsem(&cifsi->lock_sem);
 }
 
diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
index e53e9f62b87b..5488e5e11e40 100644
--- a/fs/cifs/cifsproto.h
+++ b/fs/cifs/cifsproto.h
@@ -65,6 +65,40 @@ extern int smb_send(struct TCP_Server_Info *, struct smb_hdr *,
        else                                            \
                trace_smb3_exit_done(curr_xid, __func__); \
 } while (0)
+
+
+/* Debug lock_sem */
+#define cifs_sem_op_trace(SEM, OP) \
+do { \
+       cifs_dbg(VFS, "file=%s line=%d pid=%d cifs_sem_op op=%s sem=%px\n", \
+                __FILE__, __LINE__, current->pid, #OP, SEM); \
+} while(0)
+
+#define cifs_sem_op_trace_init_rwsem(X) \
+do { \
+       cifs_sem_op_trace(X, init_rwsem); \
+} while (0)
+
+#define cifs_sem_op_trace_down_read(X) \
+do { \
+       cifs_sem_op_trace(X, down_read); \
+} while (0)
+
+#define cifs_sem_op_trace_up_read(X) \
+do { \
+       cifs_sem_op_trace(X, up_read); \
+} while (0)
+
+#define cifs_sem_op_trace_down_write(X) \
+do { \
+       cifs_sem_op_trace(X, down_write); \
+} while (0)
+
+#define cifs_sem_op_trace_up_write(X) \
+do { \
+       cifs_sem_op_trace(X, up_write); \
+} while (0)
+
 extern int init_cifs_idmap(void);
 extern void exit_cifs_idmap(void);
 extern int init_cifs_spnego(void);
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 5ad15de2bb4f..58e554cc397a 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -270,6 +270,7 @@ int cifs_posix_open(char *full_path, struct inode **pinode,
        struct cifs_fid_locks *cur;
        bool has_locks = false;
 
+       cifs_sem_op_trace_down_read(&cinode->lock_sem);
        down_read(&cinode->lock_sem);
        list_for_each_entry(cur, &cinode->llist, llist) {
                if (!list_empty(&cur->locks)) {
@@ -277,6 +278,7 @@ int cifs_posix_open(char *full_path, struct inode **pinode,
                        break;
                }
        }
+       cifs_sem_op_trace_up_read(&cinode->lock_sem);
        up_read(&cinode->lock_sem);
        return has_locks;
 }
@@ -306,8 +308,10 @@ struct cifsFileInfo *
        INIT_LIST_HEAD(&fdlocks->locks);
        fdlocks->cfile = cfile;
        cfile->llist = fdlocks;
+       cifs_sem_op_trace_down_write(&cinode->lock_sem);
        down_write(&cinode->lock_sem);
        list_add(&fdlocks->llist, &cinode->llist);
+       cifs_sem_op_trace_up_write(&cinode->lock_sem);
        up_write(&cinode->lock_sem);
 
        cfile->count = 1;
@@ -464,6 +468,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
         * Delete any outstanding lock records. We'll lose them when the file
         * is closed anyway.
         */
+       cifs_sem_op_trace_down_write(&cifsi->lock_sem);
        down_write(&cifsi->lock_sem);
        list_for_each_entry_safe(li, tmp, &cifs_file->llist->locks, llist) {
                list_del(&li->llist);
@@ -472,6 +477,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
        list_del(&cifs_file->llist->llist);
        kfree(cifs_file->llist);
+       cifs_sem_op_trace_up_write(&cifsi->lock_sem);
        up_write(&cifsi->lock_sem);
 
        cifs_put_tlink(cifs_file->tlink);
@@ -621,9 +627,11 @@ int cifs_open(struct inode *inode, struct file *file)
        struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
        int rc = 0;
 
+       cifs_sem_op_trace_down_read(&cinode->lock_sem);
        down_read_nested(&cinode->lock_sem, SINGLE_DEPTH_NESTING);
        if (cinode->can_cache_brlcks) {
                /* can cache locks - no need to relock */
+               cifs_sem_op_trace_up_read(&cinode->lock_sem);
                up_read(&cinode->lock_sem);
                return rc;
        }
@@ -635,6 +643,7 @@ int cifs_open(struct inode *inode, struct file *file)
        else
                rc = tcon->ses->server->ops->push_mand_locks(cfile);
 
+       cifs_sem_op_trace_up_read(&cinode->lock_sem);
        up_read(&cinode->lock_sem);
        return rc;
 }
@@ -1001,6 +1010,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
        struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server;
        bool exist;
 
+       cifs_sem_op_trace_down_read(&cinode->lock_sem);
        down_read(&cinode->lock_sem);
 
        exist = cifs_find_lock_conflict(cfile, offset, length, type,
@@ -1019,6 +1029,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
        else
                flock->fl_type = F_UNLCK;
 
+       cifs_sem_op_trace_up_read(&cinode->lock_sem);
        up_read(&cinode->lock_sem);
        return rc;
 }
@@ -1027,8 +1038,10 @@ int cifs_closedir(struct inode *inode, struct file *file)
 cifs_lock_add(struct cifsFileInfo *cfile, struct cifsLockInfo *lock)
 {
        struct cifsInodeInfo *cinode = CIFS_I(d_inode(cfile->dentry));
 
+       cifs_sem_op_trace_down_write(&cinode->lock_sem);
        down_write(&cinode->lock_sem);
        list_add_tail(&lock->llist, &cfile->llist->locks);
+       cifs_sem_op_trace_up_write(&cinode->lock_sem);
        up_write(&cinode->lock_sem);
 }
@@ -1049,6 +1062,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
 
 try_again:
        exist = false;
+       cifs_sem_op_trace_down_write(&cinode->lock_sem);
        down_write(&cinode->lock_sem);
 
        exist = cifs_find_lock_conflict(cfile, lock->offset, lock->length,
@@ -1056,6 +1070,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
                                        CIFS_LOCK_OP);
        if (!exist && cinode->can_cache_brlcks) {
                list_add_tail(&lock->llist, &cfile->llist->locks);
+               cifs_sem_op_trace_up_write(&cinode->lock_sem);
                up_write(&cinode->lock_sem);
                return rc;
        }
@@ -1066,16 +1081,19 @@ int cifs_closedir(struct inode *inode, struct file *file)
                rc = -EACCES;
        else {
                list_add_tail(&lock->blist, &conf_lock->blist);
+               cifs_sem_op_trace_up_write(&cinode->lock_sem);
                up_write(&cinode->lock_sem);
                rc = wait_event_interruptible(lock->block_q,
                                        (lock->blist.prev == &lock->blist) &&
                                        (lock->blist.next == &lock->blist));
                if (!rc)
                        goto try_again;
+               cifs_sem_op_trace_down_write(&cinode->lock_sem);
                down_write(&cinode->lock_sem);
                list_del_init(&lock->blist);
        }
 
+       cifs_sem_op_trace_up_write(&cinode->lock_sem);
        up_write(&cinode->lock_sem);
        return rc;
 }
@@ -1097,6 +1115,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
        if ((flock->fl_flags & FL_POSIX) == 0)
                return 1;
 
+       cifs_sem_op_trace_down_read(&cinode->lock_sem);
        down_read(&cinode->lock_sem);
        posix_test_lock(file, flock);
 
@@ -1105,6 +1124,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
                rc = 1;
        }
 
+       cifs_sem_op_trace_up_read(&cinode->lock_sem);
        up_read(&cinode->lock_sem);
        return rc;
 }
@@ -1125,13 +1145,16 @@ int cifs_closedir(struct inode *inode, struct file *file)
                return rc;
 
 try_again:
+       cifs_sem_op_trace_down_write(&cinode->lock_sem);
        down_write(&cinode->lock_sem);
        if (!cinode->can_cache_brlcks) {
+               cifs_sem_op_trace_up_write(&cinode->lock_sem);
                up_write(&cinode->lock_sem);
                return rc;
        }
 
        rc = posix_lock_file(file, flock, NULL);
+       cifs_sem_op_trace_up_write(&cinode->lock_sem);
        up_write(&cinode->lock_sem);
        if (rc == FILE_LOCK_DEFERRED) {
                rc = wait_event_interruptible(flock->fl_wait, !flock->fl_blocker);
@@ -1331,8 +1354,10 @@ struct lock_to_push {
        int rc = 0;
 
        /* we are going to update can_cache_brlcks here - need a write access */
+       cifs_sem_op_trace_down_write(&cinode->lock_sem);
        down_write(&cinode->lock_sem);
        if (!cinode->can_cache_brlcks) {
+               cifs_sem_op_trace_up_write(&cinode->lock_sem);
                up_write(&cinode->lock_sem);
                return rc;
        }
@@ -1345,6 +1370,7 @@ struct lock_to_push {
                rc = tcon->ses->server->ops->push_mand_locks(cfile);
 
        cinode->can_cache_brlcks = false;
+       cifs_sem_op_trace_up_write(&cinode->lock_sem);
        up_write(&cinode->lock_sem);
        return rc;
 }
@@ -1522,6 +1548,7 @@ struct lock_to_push {
        if (!buf)
                return -ENOMEM;
 
+       cifs_sem_op_trace_down_write(&cinode->lock_sem);
        down_write(&cinode->lock_sem);
        for (i = 0; i < 2; i++) {
                cur = buf;
@@ -1592,6 +1619,7 @@ struct lock_to_push {
                }
        }
 
+       cifs_sem_op_trace_up_write(&cinode->lock_sem);
        up_write(&cinode->lock_sem);
        kfree(buf);
        return rc;
@@ -3148,6 +3176,7 @@ ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from)
         * We need to hold the sem to be sure nobody modifies lock list
         * with a brlock that prevents writing.
         */
+       cifs_sem_op_trace_down_read(&cinode->lock_sem);
        down_read(&cinode->lock_sem);
 
        rc = generic_write_checks(iocb, from);
@@ -3161,6 +3190,7 @@ ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from)
        else
                rc = -EACCES;
 out:
+       cifs_sem_op_trace_up_read(&cinode->lock_sem);
        up_read(&cinode->lock_sem);
        inode_unlock(inode);
 
@@ -3887,11 +3917,13 @@ ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to)
         * We need to hold the sem to be sure nobody modifies lock list
         * with a brlock that prevents reading.
         */
+       cifs_sem_op_trace_down_read(&cinode->lock_sem);
        down_read(&cinode->lock_sem);
        if (!cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(to),
                                     tcon->ses->server->vals->shared_lock_type,
                                     0, NULL, CIFS_READ_OP))
                rc = generic_file_read_iter(iocb, to);
+       cifs_sem_op_trace_up_read(&cinode->lock_sem);
        up_read(&cinode->lock_sem);
        return rc;
 }
diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
index e6a1fc72018f..9f89a765d977 100644
--- a/fs/cifs/smb2file.c
+++ b/fs/cifs/smb2file.c
@@ -145,6 +145,7 @@
 
        cur = buf;
 
+       cifs_sem_op_trace_down_write(&cinode->lock_sem);
        down_write(&cinode->lock_sem);
        list_for_each_entry_safe(li, tmp, &cfile->llist->locks, llist) {
                if (flock->fl_start > li->offset ||
@@ -206,6 +207,7 @@
                } else
                        cifs_free_llist(&tmp_llist);
        }
+       cifs_sem_op_trace_up_write(&cinode->lock_sem);
        up_write(&cinode->lock_sem);
 
        kfree(buf);
diff --git a/lib/ratelimit.c b/lib/ratelimit.c
index e01a93f46f83..bc87f790304c 100644
--- a/lib/ratelimit.c
+++ b/lib/ratelimit.c
@@ -29,8 +29,7 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
        unsigned long flags;
        int ret;
 
-       if (!rs->interval)
-               return 1;
+       return 1;
 
        /*
         * If we contend on this state's lock then almost
-- 
1.8.3.1
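
P.S. Here is a rough, untested sketch of the "add a flag" idea mentioned above, applied to cifs_relock_file. To be clear, the lock_sem_held parameter is invented here for illustration and does not exist in the code; the hard part would be plumbing the flag through the many callers of cifs_reopen_file so each one reports whether it already holds the sem.

/* Hypothetical sketch only: lock_sem_held is an invented parameter,
 * not an existing kernel API.  Lock-pushing details elided. */
static int
cifs_relock_file(struct cifsFileInfo *cfile, bool lock_sem_held)
{
        struct cifsInodeInfo *cinode = CIFS_I(d_inode(cfile->dentry));
        struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
        int rc = 0;

        /* Skip the second down_read when the caller (e.g. the
         * cifs_strict_readv -> cifs_reopen_file path) already holds it. */
        if (!lock_sem_held)
                down_read_nested(&cinode->lock_sem, SINGLE_DEPTH_NESTING);

        if (!cinode->can_cache_brlcks) {
                /* push posix or mandatory locks, as the current code does */
                rc = tcon->ses->server->ops->push_mand_locks(cfile);
        }

        if (!lock_sem_held)
                up_read(&cinode->lock_sem);
        return rc;
}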