From patchwork Tue Dec 13 12:11:03 2022
X-Patchwork-Submitter: Xiubo Li <xiubli@redhat.com>
X-Patchwork-Id: 13071986
From: xiubli@redhat.com
To: jlayton@kernel.org, idryomov@gmail.com, ceph-devel@vger.kernel.org
Cc: mchangir@redhat.com, lhenriques@suse.de, viro@zeniv.linux.org.uk,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Xiubo Li <xiubli@redhat.com>, stable@vger.kernel.org
Subject: [PATCH v4 2/2] ceph: add ceph specific member support for file_lock
Date: Tue, 13 Dec 2022 20:11:03 +0800
Message-Id: <20221213121103.213631-3-xiubli@redhat.com>
In-Reply-To: <20221213121103.213631-1-xiubli@redhat.com>
References: <20221213121103.213631-1-xiubli@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Xiubo Li <xiubli@redhat.com>

When ceph releases a file_lock it tries to fetch the inode pointer from
fl->fl_file, but that memory may already have been released by another
thread in filp_close(), because the VFS layer does not take a reference
on the file through fl->fl_file.

Switch to ceph-specific lock info to track the inode instead. In
ceph_fl_release_lock(), skip all the operations when
fl->fl_u.ceph.fl_inode is not set; such a lock should be a request
file_lock. fl->fl_u.ceph.fl_inode is set only when the lock is inserted
into the inode's lock list, i.e. when the lock is copied.
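For context, the two hooks are driven from the generic file-lock code.
A rough sketch of the call sites (paraphrased from fs/locks.c around
v6.1; simplified, not verbatim) shows why the reference must be taken
in fl_copy_lock and dropped in fl_release_private:

	/* sketch, paraphrased from fs/locks.c -- not verbatim */
	void locks_copy_lock(struct file_lock *new, struct file_lock *fl)
	{
		locks_copy_conflock(new, fl);
		new->fl_file = fl->fl_file;
		new->fl_ops = fl->fl_ops;
		/* ceph_fl_copy_lock() pins the inode here via igrab() */
		if (fl->fl_ops && fl->fl_ops->fl_copy_lock)
			fl->fl_ops->fl_copy_lock(new, fl);
	}

	void locks_release_private(struct file_lock *fl)
	{
		if (fl->fl_ops) {
			/*
			 * ceph_fl_release_lock() runs here; fl->fl_file may
			 * already be stale, so only the pinned inode is safe.
			 */
			if (fl->fl_ops->fl_release_private)
				fl->fl_ops->fl_release_private(fl);
			fl->fl_ops = NULL;
		}
	}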
Cc: stable@vger.kernel.org
Cc: Jeff Layton <jlayton@kernel.org>
URL: https://tracker.ceph.com/issues/57986
Signed-off-by: Xiubo Li <xiubli@redhat.com>
---
 fs/ceph/locks.c    | 20 ++++++++++++++++++--
 include/linux/fs.h |  3 +++
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index b191426bf880..cf78608a3f9a 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -34,18 +34,34 @@ static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
 {
 	struct inode *inode = file_inode(dst->fl_file);
 	atomic_inc(&ceph_inode(inode)->i_filelock_ref);
+	dst->fl_u.ceph.fl_inode = igrab(inode);
 }
 
+/*
+ * Do not use the 'fl->fl_file' in release function, which
+ * is possibly already released by another thread.
+ */
 static void ceph_fl_release_lock(struct file_lock *fl)
 {
-	struct inode *inode = file_inode(fl->fl_file);
-	struct ceph_inode_info *ci = ceph_inode(inode);
+	struct inode *inode = fl->fl_u.ceph.fl_inode;
+	struct ceph_inode_info *ci;
+
+	/*
+	 * If inode is NULL it should be a request file_lock,
+	 * nothing we can do.
+	 */
+	if (!inode)
+		return;
+
+	ci = ceph_inode(inode);
 	if (atomic_dec_and_test(&ci->i_filelock_ref)) {
 		/* clear error when all locks are released */
 		spin_lock(&ci->i_ceph_lock);
 		ci->i_ceph_flags &= ~CEPH_I_ERROR_FILELOCK;
 		spin_unlock(&ci->i_ceph_lock);
 	}
+	fl->fl_u.ceph.fl_inode = NULL;
+	iput(inode);
 }
 
 static const struct file_lock_operations ceph_fl_lock_ops = {
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 7b52fdfb6da0..6106374f5257 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1119,6 +1119,9 @@ struct file_lock {
 			int state;		/* state of grant or error if -ve */
 			unsigned int	debug_id;
 		} afs;
+		struct {
+			struct inode *fl_inode;
+		} ceph;
 	} fl_u;
 } __randomize_layout;
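
For illustration only, here is a minimal, compilable userspace model of
the refcounting pattern the patch introduces. Every name below (the
*_model helpers, the trimmed structs) is an illustrative stand-in, not
kernel API; it only demonstrates that the release path can rely on the
pinned inode and never needs fl->fl_file:

	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct inode {
		atomic_int i_count;		/* models the inode refcount */
	};

	struct file_lock {
		struct file *fl_file;		/* may dangle after filp_close() */
		union {
			struct {
				struct inode *fl_inode;	/* pinned by "igrab" */
			} ceph;
		} fl_u;
	};

	/* models igrab(): take a reference so the inode stays alive */
	static struct inode *igrab_model(struct inode *inode)
	{
		atomic_fetch_add(&inode->i_count, 1);
		return inode;
	}

	/* models iput(): drop the reference, free on the last put */
	static void iput_model(struct inode *inode)
	{
		if (atomic_fetch_sub(&inode->i_count, 1) == 1)
			free(inode);
	}

	/* models ceph_fl_copy_lock(): pin the inode when the lock is copied */
	static void copy_lock_model(struct file_lock *dst, struct inode *inode)
	{
		dst->fl_u.ceph.fl_inode = igrab_model(inode);
	}

	/* models ceph_fl_release_lock(): never touch fl->fl_file here */
	static void release_lock_model(struct file_lock *fl)
	{
		struct inode *inode = fl->fl_u.ceph.fl_inode;

		if (!inode)		/* a request lock, never copied in */
			return;
		fl->fl_u.ceph.fl_inode = NULL;
		iput_model(inode);
	}

	int main(void)
	{
		struct inode *inode = calloc(1, sizeof(*inode));
		struct file_lock fl = { 0 };

		atomic_store(&inode->i_count, 1);	/* owner's reference */
		copy_lock_model(&fl, inode);	/* lock pins the inode */
		iput_model(inode);		/* filp_close()-like path drops its ref */
		release_lock_model(&fl);	/* safe: uses the pinned inode */
		printf("lock released without touching fl_file\n");
		return 0;
	}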