From patchwork Tue Dec 13 12:11:02 2022
X-Patchwork-Submitter: Xiubo Li <xiubli@redhat.com>
X-Patchwork-Id: 13071985
From: xiubli@redhat.com
To: jlayton@kernel.org, idryomov@gmail.com, ceph-devel@vger.kernel.org
Cc: mchangir@redhat.com, lhenriques@suse.de, viro@zeniv.linux.org.uk,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 Xiubo Li <xiubli@redhat.com>, stable@vger.kernel.org
Subject: [PATCH v4 1/2] ceph: switch to vfs_inode_has_locks() to fix file lock bug
Date: Tue, 13 Dec 2022 20:11:02 +0800
Message-Id: <20221213121103.213631-2-xiubli@redhat.com>
In-Reply-To: <20221213121103.213631-1-xiubli@redhat.com>
References: <20221213121103.213631-1-xiubli@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Xiubo Li <xiubli@redhat.com>

POSIX locks use the same owner, which is the thread id, and multiple
POSIX locks may be merged into a single one, so checking whether the
'file' has locks may fail.

A file where some openers use locking and others don't is a really odd
usage pattern, though. Locks are like stoplights -- they only work if
everyone pays attention to them.

Just switch ceph_get_caps() to check whether any locks are set on the
inode.
If there are POSIX/OFD/FLOCK locks on the file at the time, we should
set CHECK_FILELOCK, regardless of what fd was used to set the lock.

Cc: stable@vger.kernel.org
Fixes: ff5d913dfc71 ("ceph: return -EIO if read/write against filp that lost file locks")
Cc: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/caps.c  | 2 +-
 fs/ceph/locks.c | 4 ----
 fs/ceph/super.h | 1 -
 3 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 065e9311b607..948136f81fc8 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -2964,7 +2964,7 @@ int ceph_get_caps(struct file *filp, int need, int want, loff_t endoff, int *got
 
 	while (true) {
 		flags &= CEPH_FILE_MODE_MASK;
-		if (atomic_read(&fi->num_locks))
+		if (vfs_inode_has_locks(inode))
 			flags |= CHECK_FILELOCK;
 		_got = 0;
 		ret = try_get_cap_refs(inode, need, want, endoff,
diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index 3e2843e86e27..b191426bf880 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -32,18 +32,14 @@ void __init ceph_flock_init(void)
 
 static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
 {
-	struct ceph_file_info *fi = dst->fl_file->private_data;
 	struct inode *inode = file_inode(dst->fl_file);
 	atomic_inc(&ceph_inode(inode)->i_filelock_ref);
-	atomic_inc(&fi->num_locks);
 }
 
 static void ceph_fl_release_lock(struct file_lock *fl)
 {
-	struct ceph_file_info *fi = fl->fl_file->private_data;
 	struct inode *inode = file_inode(fl->fl_file);
 	struct ceph_inode_info *ci = ceph_inode(inode);
-	atomic_dec(&fi->num_locks);
 	if (atomic_dec_and_test(&ci->i_filelock_ref)) {
 		/* clear error when all locks are released */
 		spin_lock(&ci->i_ceph_lock);
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index 14454f464029..e7662ff6f149 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -804,7 +804,6 @@ struct ceph_file_info {
 	struct list_head rw_contexts;
 
 	u32 filp_gen;
-	atomic_t num_locks;
 };
 
 struct ceph_dir_file_info {
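
To illustrate the case this first patch fixes, here is a minimal user-space
sketch (not part of the patch; the CephFS mount path and file name are
placeholders): the same inode is opened twice, the POSIX lock is taken
through one descriptor, and the I/O goes through the other. With the old
per-struct-file num_locks counter the write below would not trigger
CHECK_FILELOCK, while vfs_inode_has_locks() checks the inode and therefore
covers both descriptors.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Placeholder path -- assumes a CephFS mount at /mnt/cephfs. */
	const char *path = "/mnt/cephfs/lock-test";
	struct flock fl = {
		.l_type   = F_WRLCK,
		.l_whence = SEEK_SET,
		.l_start  = 0,
		.l_len    = 0,	/* lock the whole file */
	};
	int fd1, fd2;

	fd1 = open(path, O_RDWR | O_CREAT, 0644);
	fd2 = open(path, O_RDWR);
	if (fd1 < 0 || fd2 < 0) {
		perror("open");
		return 1;
	}

	/* The POSIX lock is set through fd1 only... */
	if (fcntl(fd1, F_SETLK, &fl) < 0) {
		perror("fcntl");
		return 1;
	}

	/* ...but the I/O goes through fd2, a different struct file
	 * for the same inode. */
	if (write(fd2, "x", 1) < 0)
		perror("write");

	close(fd2);
	close(fd1);
	return 0;
}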
From patchwork Tue Dec 13 12:11:03 2022
X-Patchwork-Submitter: Xiubo Li <xiubli@redhat.com>
X-Patchwork-Id: 13071986
From: xiubli@redhat.com
To: jlayton@kernel.org, idryomov@gmail.com, ceph-devel@vger.kernel.org
Cc: mchangir@redhat.com, lhenriques@suse.de, viro@zeniv.linux.org.uk,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 Xiubo Li <xiubli@redhat.com>, stable@vger.kernel.org
Subject: [PATCH v4 2/2] ceph: add ceph specific member support for file_lock
Date: Tue, 13 Dec 2022 20:11:03 +0800
Message-Id: <20221213121103.213631-3-xiubli@redhat.com>
In-Reply-To: <20221213121103.213631-1-xiubli@redhat.com>
References: <20221213121103.213631-1-xiubli@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Xiubo Li <xiubli@redhat.com>

When ceph releases a file_lock it tries to get the inode pointer from
fl->fl_file, but that memory could already have been released by another
thread in filp_close(), because the VFS layer does not take a reference
on the file behind fl->fl_file. Switch to a ceph-specific member in the
file_lock to track the inode instead.

In ceph_fl_release_lock() we should skip all the operations if
fl->fl_u.ceph.fl_inode is not set, which means the lock is a request
file_lock. We set fl->fl_u.ceph.fl_inode when inserting the lock into
the inode's lock list, which is when the lock is copied.

Cc: stable@vger.kernel.org
Cc: Jeff Layton <jlayton@kernel.org>
URL: https://tracker.ceph.com/issues/57986
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/locks.c    | 20 ++++++++++++++++++--
 include/linux/fs.h |  3 +++
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index b191426bf880..cf78608a3f9a 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -34,18 +34,34 @@ static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
 {
 	struct inode *inode = file_inode(dst->fl_file);
 	atomic_inc(&ceph_inode(inode)->i_filelock_ref);
+	dst->fl_u.ceph.fl_inode = igrab(inode);
 }
 
+/*
+ * Do not use the 'fl->fl_file' in release function, which
+ * is possibly already released by another thread.
+ */
 static void ceph_fl_release_lock(struct file_lock *fl)
 {
-	struct inode *inode = file_inode(fl->fl_file);
-	struct ceph_inode_info *ci = ceph_inode(inode);
+	struct inode *inode = fl->fl_u.ceph.fl_inode;
+	struct ceph_inode_info *ci;
+
+	/*
+	 * If inode is NULL it should be a request file_lock,
+	 * nothing we can do.
+	 */
+	if (!inode)
+		return;
+
+	ci = ceph_inode(inode);
 	if (atomic_dec_and_test(&ci->i_filelock_ref)) {
 		/* clear error when all locks are released */
 		spin_lock(&ci->i_ceph_lock);
 		ci->i_ceph_flags &= ~CEPH_I_ERROR_FILELOCK;
 		spin_unlock(&ci->i_ceph_lock);
 	}
+	fl->fl_u.ceph.fl_inode = NULL;
+	iput(inode);
 }
 
 static const struct file_lock_operations ceph_fl_lock_ops = {
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 7b52fdfb6da0..6106374f5257 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1119,6 +1119,9 @@ struct file_lock {
 			int state;		/* state of grant or error if -ve */
 			unsigned int debug_id;
 		} afs;
+		struct {
+			struct inode *fl_inode;
+		} ceph;
 	} fl_u;
 } __randomize_layout;
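
To restate the lifetime rule this second patch relies on, here is a
stripped-down sketch (illustrative only, not the actual ceph code; the
example_ function names are made up -- the real hooks are
ceph_fl_copy_lock() and ceph_fl_release_lock() above): the inode is pinned
with igrab() while fl->fl_file is still valid, i.e. when the lock is copied
onto the inode's lock list, and is dropped with iput() only when the lock
itself is freed, which may happen after filp_close() has already torn down
the struct file.

#include <linux/fs.h>

static void example_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
{
	/* Called while dst->fl_file is still valid: pin the inode. */
	dst->fl_u.ceph.fl_inode = igrab(file_inode(dst->fl_file));
}

static void example_fl_release_lock(struct file_lock *fl)
{
	struct inode *inode = fl->fl_u.ceph.fl_inode;

	/* A request file_lock never had the inode pinned. */
	if (!inode)
		return;

	fl->fl_u.ceph.fl_inode = NULL;
	iput(inode);	/* drop the reference taken in example_fl_copy_lock() */
}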