From patchwork Fri May 26 20:14:50 2017
X-Patchwork-Submitter: Benjamin Coddington
X-Patchwork-Id: 9751067
From: Benjamin Coddington <bcodding@redhat.com>
To: Alexander Viro, Jeff Layton, bfields@fieldses.org
Cc: linux-fsdevel@vger.kernel.org, linux-nfs@vger.kernel.org
Subject: [PATCH 2/3] fs/locks: Set fl_nspid at file_lock allocation
Date: Fri, 26 May 2017 16:14:50 -0400

Since commit c69899a17ca4 "NFSv4: Update of VFS byte range lock must be atomic with the stateid update", NFSv4 has been inserting locks in rpciod worker context. The result is that the file_lock's fl_nspid is the kworker's pid instead of the original userspace pid. We can fix that up by setting fl_nspid in locks_alloc_lock(), and transferring it to the file_lock that's eventually recorded.
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
---
 fs/locks.c | 29 ++++++++++++++++++++---------
 1 file changed, 20 insertions(+), 9 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 54aeacf8dc46..270ae50247db 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -249,7 +249,9 @@ locks_dump_ctx_list(struct list_head *list, char *list_type)
 	struct file_lock *fl;
 
 	list_for_each_entry(fl, list, fl_list) {
-		pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type, fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
+		pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u fl_nspid=%u\n",
+			list_type, fl->fl_owner, fl->fl_flags, fl->fl_type,
+			fl->fl_pid, pid_vnr(fl->fl_nspid));
 	}
 }
 
@@ -294,8 +296,10 @@ struct file_lock *locks_alloc_lock(void)
 {
 	struct file_lock *fl = kmem_cache_zalloc(filelock_cache, GFP_KERNEL);
 
-	if (fl)
+	if (fl) {
 		locks_init_lock_heads(fl);
+		fl->fl_nspid = get_pid(task_tgid(current));
+	}
 
 	return fl;
 }
@@ -328,6 +332,8 @@ void locks_free_lock(struct file_lock *fl)
 	BUG_ON(!hlist_unhashed(&fl->fl_link));
 
 	locks_release_private(fl);
+	if (fl->fl_nspid)
+		put_pid(fl->fl_nspid);
 	kmem_cache_free(filelock_cache, fl);
 }
 EXPORT_SYMBOL(locks_free_lock);
@@ -357,8 +363,15 @@ EXPORT_SYMBOL(locks_init_lock);
  */
 void locks_copy_conflock(struct file_lock *new, struct file_lock *fl)
 {
+	struct pid *replace_pid = new->fl_nspid;
+
 	new->fl_owner = fl->fl_owner;
 	new->fl_pid = fl->fl_pid;
+	if (fl->fl_nspid) {
+		new->fl_nspid = get_pid(fl->fl_nspid);
+		if (replace_pid)
+			put_pid(replace_pid);
+	}
 	new->fl_file = NULL;
 	new->fl_flags = fl->fl_flags;
 	new->fl_type = fl->fl_type;
@@ -733,7 +746,6 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
 static void
 locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
 {
-	fl->fl_nspid = get_pid(task_tgid(current));
 	list_add_tail(&fl->fl_list, before);
 	locks_insert_global_locks(fl);
 }
@@ -743,10 +755,6 @@ locks_unlink_lock_ctx(struct file_lock *fl)
 {
 	locks_delete_global_locks(fl);
 	list_del_init(&fl->fl_list);
-	if (fl->fl_nspid) {
-		put_pid(fl->fl_nspid);
-		fl->fl_nspid = NULL;
-	}
 	locks_wake_up_blocks(fl);
 }
@@ -823,8 +831,6 @@ posix_test_lock(struct file *filp, struct file_lock *fl)
 	list_for_each_entry(cfl, &ctx->flc_posix, fl_list) {
 		if (posix_locks_conflict(fl, cfl)) {
 			locks_copy_conflock(fl, cfl);
-			if (cfl->fl_nspid)
-				fl->fl_pid = pid_vnr(cfl->fl_nspid);
 			goto out;
 		}
 	}
@@ -2492,6 +2498,7 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner)
 	lock.fl_end = OFFSET_MAX;
 	lock.fl_owner = owner;
 	lock.fl_pid = current->tgid;
+	lock.fl_nspid = get_pid(task_tgid(current));
 	lock.fl_file = filp;
 	lock.fl_ops = NULL;
 	lock.fl_lmops = NULL;
@@ -2500,6 +2507,7 @@ void locks_remove_posix(struct file *filp, fl_owner_t owner)
 
 	if (lock.fl_ops && lock.fl_ops->fl_release_private)
 		lock.fl_ops->fl_release_private(&lock);
+	put_pid(lock.fl_nspid);
 	trace_locks_remove_posix(inode, &lock, error);
 }
@@ -2522,6 +2530,8 @@ locks_remove_flock(struct file *filp, struct file_lock_context *flctx)
 	if (list_empty(&flctx->flc_flock))
 		return;
 
+	fl.fl_nspid = get_pid(task_tgid(current));
+
 	if (filp->f_op->flock && is_remote_lock(filp))
 		filp->f_op->flock(filp, F_SETLKW, &fl);
 	else
@@ -2529,6 +2539,7 @@
 
 	if (fl.fl_ops && fl.fl_ops->fl_release_private)
 		fl.fl_ops->fl_release_private(&fl);
+	put_pid(fl.fl_nspid);
 }
 
 /* The i_flctx must be valid when calling into here */