From patchwork Mon Mar 23 16:07:01 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jeffrey Layton
X-Patchwork-Id: 11453393
Return-Path:
Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123])
    by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E0FD217EF
    for ; Mon, 23 Mar 2020 16:07:17 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by mail.kernel.org (Postfix) with ESMTP id B6FC220637
    for ; Mon, 23 Mar 2020 16:07:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default;
    t=1584979637; bh=RDtKffQ0yTtGjzq03HzrxWImPFhPLQvLikdv5b9eWHs=;
    h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From;
    b=E7CKcPVI2GjIqTkTowr4O9fkJ/MXgfQ1ZWPboF8prczXcDloA+f5QrejHFkwYwH1Z
     fZm/aDwY9Z7QUCw0vzQvAgyh3IQai7+5kNQsIh5+2RIzLuHxvWBlZGkAt3GEcM7RUA
     KjmxqcK1G79+0gY6g3wPbYdKZGcIrTXur3r4/kTk=
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
    id S1727589AbgCWQHN (ORCPT ); Mon, 23 Mar 2020 12:07:13 -0400
Received: from mail.kernel.org ([198.145.29.99]:49440 "EHLO mail.kernel.org"
    rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
    id S1727282AbgCWQHN (ORCPT ); Mon, 23 Mar 2020 12:07:13 -0400
Received: from tleilax.com (68-20-15-154.lightspeed.rlghnc.sbcglobal.net [68.20.15.154])
    (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
    (No client certificate requested)
    by mail.kernel.org (Postfix) with ESMTPSA id F172D20722;
    Mon, 23 Mar 2020 16:07:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default;
    t=1584979631; bh=RDtKffQ0yTtGjzq03HzrxWImPFhPLQvLikdv5b9eWHs=;
    h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
    b=YlQLPVyzmglFehsKrv+CCeE2330Q4cN8s3W0JZ3yp+QheaaENuALVkk7RB53Uw355
     CmUwXb853hy0mSEpeZQ31pJbXzepOwbw7yz82I5fyThrIdP6hLWpYwRXwdoFIGxvdp
     +O/ZrACOQ2eDflTbd2VgKF1tLWZYY0yFv9MkD240=
From: Jeff Layton
To: ceph-devel@vger.kernel.org
Cc: ukernel@gmail.com, idryomov@gmail.com, sage@redhat.com
Subject: [PATCH 1/8] ceph: reorganize __send_cap for less spinlock abuse
Date: Mon, 23 Mar 2020 12:07:01 -0400
Message-Id: <20200323160708.104152-2-jlayton@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200323160708.104152-1-jlayton@kernel.org>
References: <20200323160708.104152-1-jlayton@kernel.org>
MIME-Version: 1.0
Sender: ceph-devel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: ceph-devel@vger.kernel.org

Get rid of the __releases annotation by breaking __send_cap up into two
functions: __prep_cap, which is done under the spinlock, and __send_cap,
which is done outside it. Nothing checks the return value of __send_cap,
so make it return void.

Signed-off-by: Jeff Layton
---
 fs/ceph/caps.c | 193 ++++++++++++++++++++++++++++---------------------
 1 file changed, 110 insertions(+), 83 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 779433847f20..5bdca0da58a4 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -1318,44 +1318,27 @@ void __ceph_remove_caps(struct ceph_inode_info *ci)
 }

 /*
- * Send a cap msg on the given inode. Update our caps state, then
- * drop i_ceph_lock and send the message.
- *
- * Make note of max_size reported/requested from mds, revoked caps
- * that have now been implemented.
- *
- * Return non-zero if delayed release, or we experienced an error
- * such that the caller should requeue + retry later.
- *
- * called with i_ceph_lock, then drops it.
- * caller should hold snap_rwsem (read), s_mutex.
+ * Prepare to send a cap message to an MDS. Update the cap state, and populate
+ * the arg struct with the parameters that will need to be sent. This should
+ * be done under the i_ceph_lock to guard against changes to cap state.
  */
-static int __send_cap(struct ceph_mds_client *mdsc, struct ceph_cap *cap,
-                      int op, int flags, int used, int want, int retain,
-                      int flushing, u64 flush_tid, u64 oldest_flush_tid)
-    __releases(cap->ci->i_ceph_lock)
+static void __prep_cap(struct cap_msg_args *arg, struct ceph_cap *cap,
+                       int op, int flags, int used, int want, int retain,
+                       int flushing, u64 flush_tid, u64 oldest_flush_tid,
+                       struct ceph_buffer **old_blob)
 {
     struct ceph_inode_info *ci = cap->ci;
     struct inode *inode = &ci->vfs_inode;
-    struct ceph_buffer *old_blob = NULL;
-    struct cap_msg_args arg;
     int held, revoking;
-    int wake = 0;
-    int ret;

-    /* Don't send anything if it's still being created. Return delayed */
-    if (ci->i_ceph_flags & CEPH_I_ASYNC_CREATE) {
-        spin_unlock(&ci->i_ceph_lock);
-        dout("%s async create in flight for %p\n", __func__, inode);
-        return 1;
-    }
+    lockdep_assert_held(&ci->i_ceph_lock);

     held = cap->issued | cap->implemented;
     revoking = cap->implemented & ~cap->issued;
     retain &= ~revoking;

-    dout("__send_cap %p cap %p session %p %s -> %s (revoking %s)\n",
-         inode, cap, cap->session,
+    dout("%s %p cap %p session %p %s -> %s (revoking %s)\n",
+         __func__, inode, cap, cap->session,
          ceph_cap_string(held), ceph_cap_string(held & retain),
          ceph_cap_string(revoking));
     BUG_ON((retain & CEPH_CAP_PIN) == 0);
@@ -1363,60 +1346,51 @@ static int __send_cap(struct ceph_mds_client *mdsc, struct ceph_cap *cap,
     ci->i_ceph_flags &= ~CEPH_I_FLUSH;

     cap->issued &= retain;  /* drop bits we don't want */
-    if (cap->implemented & ~cap->issued) {
-        /*
-         * Wake up any waiters on wanted -> needed transition.
-         * This is due to the weird transition from buffered
-         * to sync IO... we need to flush dirty pages _before_
-         * allowing sync writes to avoid reordering.
-         */
-        wake = 1;
-    }
     cap->implemented &= cap->issued | used;
     cap->mds_wanted = want;

-    arg.session = cap->session;
-    arg.ino = ceph_vino(inode).ino;
-    arg.cid = cap->cap_id;
-    arg.follows = flushing ? ci->i_head_snapc->seq : 0;
-    arg.flush_tid = flush_tid;
-    arg.oldest_flush_tid = oldest_flush_tid;
+    arg->session = cap->session;
+    arg->ino = ceph_vino(inode).ino;
+    arg->cid = cap->cap_id;
+    arg->follows = flushing ? ci->i_head_snapc->seq : 0;
+    arg->flush_tid = flush_tid;
+    arg->oldest_flush_tid = oldest_flush_tid;

-    arg.size = inode->i_size;
-    ci->i_reported_size = arg.size;
-    arg.max_size = ci->i_wanted_max_size;
+    arg->size = inode->i_size;
+    ci->i_reported_size = arg->size;
+    arg->max_size = ci->i_wanted_max_size;
     if (cap == ci->i_auth_cap)
-        ci->i_requested_max_size = arg.max_size;
+        ci->i_requested_max_size = arg->max_size;

     if (flushing & CEPH_CAP_XATTR_EXCL) {
-        old_blob = __ceph_build_xattrs_blob(ci);
-        arg.xattr_version = ci->i_xattrs.version;
-        arg.xattr_buf = ci->i_xattrs.blob;
+        *old_blob = __ceph_build_xattrs_blob(ci);
+        arg->xattr_version = ci->i_xattrs.version;
+        arg->xattr_buf = ci->i_xattrs.blob;
     } else {
-        arg.xattr_buf = NULL;
+        arg->xattr_buf = NULL;
     }

-    arg.mtime = inode->i_mtime;
-    arg.atime = inode->i_atime;
-    arg.ctime = inode->i_ctime;
-    arg.btime = ci->i_btime;
-    arg.change_attr = inode_peek_iversion_raw(inode);
+    arg->mtime = inode->i_mtime;
+    arg->atime = inode->i_atime;
+    arg->ctime = inode->i_ctime;
+    arg->btime = ci->i_btime;
+    arg->change_attr = inode_peek_iversion_raw(inode);

-    arg.op = op;
-    arg.caps = cap->implemented;
-    arg.wanted = want;
-    arg.dirty = flushing;
+    arg->op = op;
+    arg->caps = cap->implemented;
+    arg->wanted = want;
+    arg->dirty = flushing;

-    arg.seq = cap->seq;
-    arg.issue_seq = cap->issue_seq;
-    arg.mseq = cap->mseq;
-    arg.time_warp_seq = ci->i_time_warp_seq;
+    arg->seq = cap->seq;
+    arg->issue_seq = cap->issue_seq;
+    arg->mseq = cap->mseq;
+    arg->time_warp_seq = ci->i_time_warp_seq;

-    arg.uid = inode->i_uid;
-    arg.gid = inode->i_gid;
-    arg.mode = inode->i_mode;
+    arg->uid = inode->i_uid;
+    arg->gid = inode->i_gid;
+    arg->mode = inode->i_mode;

-    arg.inline_data = ci->i_inline_version != CEPH_INLINE_NONE;
+    arg->inline_data = ci->i_inline_version != CEPH_INLINE_NONE;
     if (!(flags & CEPH_CLIENT_CAPS_PENDING_CAPSNAP) &&
         !list_empty(&ci->i_cap_snaps)) {
         struct ceph_cap_snap *capsnap;
@@ -1429,18 +1403,46 @@ static int __send_cap(struct ceph_mds_client *mdsc, struct ceph_cap *cap,
             }
         }
     }
-    arg.flags = flags;
+    arg->flags = flags;
+}

-    spin_unlock(&ci->i_ceph_lock);
+/*
+ * Wake up any waiters on wanted -> needed transition. This is due to the weird
+ * transition from buffered to sync IO... we need to flush dirty pages _before_
+ * allowing sync writes to avoid reordering.
+ */
+static inline bool should_wake_cap_waiters(struct ceph_cap *cap)
+{
+    lockdep_assert_held(&cap->ci->i_ceph_lock);

-    ceph_buffer_put(old_blob);
+    return cap->implemented & ~cap->issued;
+}

-    ret = send_cap_msg(&arg);
+/*
+ * Send a cap msg on the given inode. Update our caps state, then
+ * drop i_ceph_lock and send the message.
+ *
+ * Make note of max_size reported/requested from mds, revoked caps
+ * that have now been implemented.
+ *
+ * Return non-zero if delayed release, or we experienced an error
+ * such that the caller should requeue + retry later.
+ *
+ * called with i_ceph_lock, then drops it.
+ * caller should hold snap_rwsem (read), s_mutex.
+ */
+static void __send_cap(struct ceph_mds_client *mdsc, struct cap_msg_args *arg,
+                       struct ceph_inode_info *ci, bool wake)
+{
+    struct inode *inode = &ci->vfs_inode;
+    int ret;
+
+    ret = send_cap_msg(arg);
     if (ret < 0) {
         pr_err("error sending cap msg, ino (%llx.%llx) "
                "flushing %s tid %llu, requeue\n",
-               ceph_vinop(inode), ceph_cap_string(flushing),
-               flush_tid);
+               ceph_vinop(inode), ceph_cap_string(arg->dirty),
+               arg->flush_tid);
         spin_lock(&ci->i_ceph_lock);
         __cap_delay_requeue(mdsc, ci);
         spin_unlock(&ci->i_ceph_lock);
@@ -1448,8 +1450,6 @@ static int __send_cap(struct ceph_mds_client *mdsc, struct ceph_cap *cap,

     if (wake)
         wake_up_all(&ci->i_cap_wq);
-
-    return ret;
 }

 static inline int __send_flush_snap(struct inode *inode,
@@ -1967,6 +1967,10 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags,
     }

     for (p = rb_first(&ci->i_caps); p; p = rb_next(p)) {
+        struct cap_msg_args arg;
+        struct ceph_buffer *old_blob = NULL;
+        bool wake;
+
         cap = rb_entry(p, struct ceph_cap, ci_node);

         /* avoid looping forever */
@@ -2094,9 +2098,15 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags,

         mds = cap->mds;  /* remember mds, so we don't repeat */

-        /* __send_cap drops i_ceph_lock */
-        __send_cap(mdsc, cap, CEPH_CAP_OP_UPDATE, 0, cap_used, want,
-                   retain, flushing, flush_tid, oldest_flush_tid);
+        __prep_cap(&arg, cap, CEPH_CAP_OP_UPDATE, 0, cap_used, want,
+                   retain, flushing, flush_tid, oldest_flush_tid,
+                   &old_blob);
+        wake = should_wake_cap_waiters(cap);
+        spin_unlock(&ci->i_ceph_lock);
+
+        ceph_buffer_put(old_blob);
+        __send_cap(mdsc, &arg, ci, wake);
+
         goto retry; /* retake i_ceph_lock and restart our cap scan. */
     }
@@ -2135,6 +2145,9 @@ static int try_flush_caps(struct inode *inode, u64 *ptid)
 retry_locked:
     if (ci->i_dirty_caps && ci->i_auth_cap) {
         struct ceph_cap *cap = ci->i_auth_cap;
+        struct ceph_buffer *old_blob = NULL;
+        struct cap_msg_args arg;
+        bool wake;

         if (session != cap->session) {
             spin_unlock(&ci->i_ceph_lock);
@@ -2162,11 +2175,15 @@ static int try_flush_caps(struct inode *inode, u64 *ptid)
         flush_tid = __mark_caps_flushing(inode, session, true,
                                          &oldest_flush_tid);
-        /* __send_cap drops i_ceph_lock */
-        __send_cap(mdsc, cap, CEPH_CAP_OP_FLUSH, CEPH_CLIENT_CAPS_SYNC,
+        __prep_cap(&arg, cap, CEPH_CAP_OP_FLUSH, CEPH_CLIENT_CAPS_SYNC,
                    __ceph_caps_used(ci), __ceph_caps_wanted(ci),
                    (cap->issued | cap->implemented),
-                   flushing, flush_tid, oldest_flush_tid);
+                   flushing, flush_tid, oldest_flush_tid, &old_blob);
+        wake = should_wake_cap_waiters(cap);
+        spin_unlock(&ci->i_ceph_lock);
+
+        ceph_buffer_put(old_blob);
+        __send_cap(mdsc, &arg, ci, wake);
     } else {
         if (!list_empty(&ci->i_cap_flush_list)) {
             struct ceph_cap_flush *cf =
@@ -2368,15 +2385,25 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
         first_tid = cf->tid + 1;

         if (cf->caps) {
+            struct ceph_buffer *old_blob = NULL;
+            struct cap_msg_args arg;
+            bool wake;
+
             dout("kick_flushing_caps %p cap %p tid %llu %s\n",
                  inode, cap, cf->tid, ceph_cap_string(cf->caps));
-            __send_cap(mdsc, cap, CEPH_CAP_OP_FLUSH,
+            __prep_cap(&arg, cap, CEPH_CAP_OP_FLUSH,
                        (cf->tid < last_snap_flush ?
                        CEPH_CLIENT_CAPS_PENDING_CAPSNAP : 0),
                        __ceph_caps_used(ci), __ceph_caps_wanted(ci),
                        (cap->issued | cap->implemented),
-                       cf->caps, cf->tid, oldest_flush_tid);
+                       cf->caps, cf->tid, oldest_flush_tid,
+                       &old_blob);
+            wake = should_wake_cap_waiters(cap);
+            spin_unlock(&ci->i_ceph_lock);
+
+            ceph_buffer_put(old_blob);
+            __send_cap(mdsc, &arg, ci, wake);
         } else {
             struct ceph_cap_snap *capsnap =
                 container_of(cf, struct ceph_cap_snap,

From patchwork Mon Mar 23 16:07:02 2020
X-Patchwork-Submitter: Jeffrey Layton
X-Patchwork-Id: 11453399
From: Jeff Layton
To: ceph-devel@vger.kernel.org
Cc: ukernel@gmail.com, idryomov@gmail.com, sage@redhat.com
Subject: [PATCH 2/8] ceph: split up __finish_cap_flush
Date: Mon, 23 Mar 2020 12:07:02 -0400
Message-Id: <20200323160708.104152-3-jlayton@kernel.org>
In-Reply-To: <20200323160708.104152-1-jlayton@kernel.org>
References: <20200323160708.104152-1-jlayton@kernel.org>

This function takes an mdsc argument or a ci argument, but if both are
passed in, it ignores the ci arg. Fortunately, nothing does that, but
there's no good reason to have the same function handle both cases.

Also, get rid of some branches and just use |= to set the wake_* vals.

Signed-off-by: Jeff Layton
---
 fs/ceph/caps.c | 60 ++++++++++++++++++++++++--------------------------
 1 file changed, 29 insertions(+), 31 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 5bdca0da58a4..01877f91b85b 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -1745,30 +1745,33 @@ static u64 __get_oldest_flush_tid(struct ceph_mds_client *mdsc)
  * Remove cap_flush from the mdsc's or inode's flushing cap list.
  * Return true if caller needs to wake up flush waiters.
  */
-static bool __finish_cap_flush(struct ceph_mds_client *mdsc,
-                               struct ceph_inode_info *ci,
-                               struct ceph_cap_flush *cf)
+static bool __detach_cap_flush_from_mdsc(struct ceph_mds_client *mdsc,
+                                         struct ceph_cap_flush *cf)
 {
     struct ceph_cap_flush *prev;
     bool wake = cf->wake;
-    if (mdsc) {
-        /* are there older pending cap flushes? */
-        if (wake && cf->g_list.prev != &mdsc->cap_flush_list) {
-            prev = list_prev_entry(cf, g_list);
-            prev->wake = true;
-            wake = false;
-        }
-        list_del(&cf->g_list);
-    } else if (ci) {
-        if (wake && cf->i_list.prev != &ci->i_cap_flush_list) {
-            prev = list_prev_entry(cf, i_list);
-            prev->wake = true;
-            wake = false;
-        }
-        list_del(&cf->i_list);
-    } else {
-        BUG_ON(1);
+
+    if (wake && cf->g_list.prev != &mdsc->cap_flush_list) {
+        prev = list_prev_entry(cf, g_list);
+        prev->wake = true;
+        wake = false;
     }
+    list_del(&cf->g_list);
+    return wake;
+}
+
+static bool __detach_cap_flush_from_ci(struct ceph_inode_info *ci,
+                                       struct ceph_cap_flush *cf)
+{
+    struct ceph_cap_flush *prev;
+    bool wake = cf->wake;
+
+    if (wake && cf->i_list.prev != &ci->i_cap_flush_list) {
+        prev = list_prev_entry(cf, i_list);
+        prev->wake = true;
+        wake = false;
+    }
+    list_del(&cf->i_list);
     return wake;
 }

@@ -3493,8 +3496,7 @@ static void handle_cap_flush_ack(struct inode *inode, u64 flush_tid,
         if (cf->caps == 0) /* capsnap */
             continue;
         if (cf->tid <= flush_tid) {
-            if (__finish_cap_flush(NULL, ci, cf))
-                wake_ci = true;
+            wake_ci |= __detach_cap_flush_from_ci(ci, cf);
             list_add_tail(&cf->i_list, &to_remove);
         } else {
             cleaned &= ~cf->caps;
@@ -3516,10 +3518,8 @@ static void handle_cap_flush_ack(struct inode *inode, u64 flush_tid,

     spin_lock(&mdsc->cap_dirty_lock);

-    list_for_each_entry(cf, &to_remove, i_list) {
-        if (__finish_cap_flush(mdsc, NULL, cf))
-            wake_mdsc = true;
-    }
+    list_for_each_entry(cf, &to_remove, i_list)
+        wake_mdsc |= __detach_cap_flush_from_mdsc(mdsc, cf);

     if (ci->i_flushing_caps == 0) {
         if (list_empty(&ci->i_cap_flush_list)) {
@@ -3611,17 +3611,15 @@ static void handle_cap_flushsnap_ack(struct inode *inode, u64 flush_tid,
         dout(" removing %p cap_snap %p follows %lld\n",
              inode, capsnap, follows);
         list_del(&capsnap->ci_item);
-        if (__finish_cap_flush(NULL, ci, &capsnap->cap_flush))
-            wake_ci = true;
+        wake_ci |= __detach_cap_flush_from_ci(ci, &capsnap->cap_flush);

         spin_lock(&mdsc->cap_dirty_lock);

         if (list_empty(&ci->i_cap_flush_list))
             list_del_init(&ci->i_flushing_item);

-        if (__finish_cap_flush(mdsc, NULL, &capsnap->cap_flush))
-            wake_mdsc = true;
-
+        wake_mdsc |= __detach_cap_flush_from_mdsc(mdsc,
+                                                  &capsnap->cap_flush);
         spin_unlock(&mdsc->cap_dirty_lock);
     }
     spin_unlock(&ci->i_ceph_lock);

From patchwork Mon Mar 23 16:07:03 2020
X-Patchwork-Submitter: Jeffrey Layton
X-Patchwork-Id: 11453385
From: Jeff Layton
To: ceph-devel@vger.kernel.org
Cc: ukernel@gmail.com, idryomov@gmail.com, sage@redhat.com
Subject: [PATCH 3/8] ceph: add comments for handle_cap_flush_ack logic
Date: Mon, 23 Mar 2020 12:07:03 -0400
Message-Id: <20200323160708.104152-4-jlayton@kernel.org>
In-Reply-To: <20200323160708.104152-1-jlayton@kernel.org>
References: <20200323160708.104152-1-jlayton@kernel.org>

Signed-off-by: Jeff Layton
---
 fs/ceph/caps.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 01877f91b85b..6220425e2a9c 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -3491,14 +3491,26 @@ static void handle_cap_flush_ack(struct inode *inode, u64 flush_tid,
     bool wake_mdsc = false;

     list_for_each_entry_safe(cf, tmp_cf, &ci->i_cap_flush_list, i_list) {
+        /* Is this the one that was flushed? */
         if (cf->tid == flush_tid)
             cleaned = cf->caps;
-        if (cf->caps == 0) /* capsnap */
+
+        /* Is this a capsnap? */
+        if (cf->caps == 0)
             continue;
+
         if (cf->tid <= flush_tid) {
+            /*
+             * An earlier or current tid. The FLUSH_ACK should
+             * represent a superset of this flush's caps.
+             */
             wake_ci |= __detach_cap_flush_from_ci(ci, cf);
             list_add_tail(&cf->i_list, &to_remove);
         } else {
+            /*
+             * This is a later one. Any caps in it are still dirty
+             * so don't count them as cleaned.
+             */
             cleaned &= ~cf->caps;
             if (!cleaned)
                 break;

From patchwork Mon Mar 23 16:07:04 2020
X-Patchwork-Submitter: Jeffrey Layton
X-Patchwork-Id: 11453397
From: Jeff Layton
To: ceph-devel@vger.kernel.org
Cc: ukernel@gmail.com, idryomov@gmail.com, sage@redhat.com
Subject: [PATCH 4/8] ceph: don't release i_ceph_lock in handle_cap_trunc
Date: Mon, 23 Mar 2020 12:07:04 -0400
Message-Id: <20200323160708.104152-5-jlayton@kernel.org>
In-Reply-To: <20200323160708.104152-1-jlayton@kernel.org>
References: <20200323160708.104152-1-jlayton@kernel.org>

There's no reason to do this here. Just have the caller handle it.
Also, add a lockdep assertion.

Signed-off-by: Jeff Layton
---
 fs/ceph/caps.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 6220425e2a9c..e112c1c802cf 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -3651,10 +3651,9 @@ static void handle_cap_flushsnap_ack(struct inode *inode, u64 flush_tid,
  *
  * caller hold s_mutex.
  */
-static void handle_cap_trunc(struct inode *inode,
+static bool handle_cap_trunc(struct inode *inode,
                              struct ceph_mds_caps *trunc,
                              struct ceph_mds_session *session)
-    __releases(ci->i_ceph_lock)
 {
     struct ceph_inode_info *ci = ceph_inode(inode);
     int mds = session->s_mds;
@@ -3665,7 +3664,9 @@ static void handle_cap_trunc(struct inode *inode,
     int implemented = 0;
     int dirty = __ceph_caps_dirty(ci);
     int issued = __ceph_caps_issued(ceph_inode(inode), &implemented);
-    int queue_trunc = 0;
+    bool queue_trunc = false;
+
+    lockdep_assert_held(&ci->i_ceph_lock);

     issued |= implemented | dirty;

@@ -3673,10 +3674,7 @@ static void handle_cap_trunc(struct inode *inode,
          inode, mds, seq, truncate_size, truncate_seq);
     queue_trunc = ceph_fill_file_size(inode, issued,
                                       truncate_seq, truncate_size, size);
-    spin_unlock(&ci->i_ceph_lock);
-
-    if (queue_trunc)
-        ceph_queue_vmtruncate(inode);
+    return queue_trunc;
 }

 /*
@@ -3924,6 +3922,7 @@ void ceph_handle_caps(struct ceph_mds_session *session,
     size_t snaptrace_len;
     void *p, *end;
     struct cap_extra_info extra_info = {};
+    bool queue_trunc;

     dout("handle_caps from mds%d\n", session->s_mds);

@@ -4107,7 +4106,10 @@ void ceph_handle_caps(struct ceph_mds_session *session,
         break;

     case CEPH_CAP_OP_TRUNC:
-        handle_cap_trunc(inode, h, session);
+        queue_trunc = handle_cap_trunc(inode, h, session);
+        spin_unlock(&ci->i_ceph_lock);
+        if (queue_trunc)
+            ceph_queue_vmtruncate(inode);
         break;

     default:

From patchwork Mon Mar 23 16:07:05 2020
X-Patchwork-Submitter: Jeffrey Layton
X-Patchwork-Id: 11453387
From: Jeff Layton
To: ceph-devel@vger.kernel.org
Cc: ukernel@gmail.com, idryomov@gmail.com, sage@redhat.com
Subject: [PATCH 5/8] ceph: don't take i_ceph_lock in handle_cap_import
Date: Mon, 23 Mar 2020 12:07:05 -0400
Message-Id: <20200323160708.104152-6-jlayton@kernel.org>
In-Reply-To: <20200323160708.104152-1-jlayton@kernel.org>
References: <20200323160708.104152-1-jlayton@kernel.org>

Just take it before calling it.
This means we have to do a couple of minor in-memory operations under
the spinlock now, but those shouldn't be an issue.

Signed-off-by: Jeff Layton
---
 fs/ceph/caps.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index e112c1c802cf..3eab905ba74b 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -3824,7 +3824,6 @@ static void handle_cap_import(struct ceph_mds_client *mdsc,
                               struct ceph_mds_cap_peer *ph,
                               struct ceph_mds_session *session,
                               struct ceph_cap **target_cap, int *old_issued)
-    __acquires(ci->i_ceph_lock)
 {
     struct ceph_inode_info *ci = ceph_inode(inode);
     struct ceph_cap *cap, *ocap, *new_cap = NULL;
@@ -3849,14 +3848,13 @@ static void handle_cap_import(struct ceph_mds_client *mdsc,
     dout("handle_cap_import inode %p ci %p mds%d mseq %d peer %d\n",
          inode, ci, mds, mseq, peer);
-
 retry:
-    spin_lock(&ci->i_ceph_lock);
     cap = __get_cap_for_mds(ci, mds);
     if (!cap) {
         if (!new_cap) {
             spin_unlock(&ci->i_ceph_lock);
             new_cap = ceph_get_cap(mdsc, NULL);
+            spin_lock(&ci->i_ceph_lock);
             goto retry;
         }
         cap = new_cap;
@@ -4070,6 +4068,7 @@ void ceph_handle_caps(struct ceph_mds_session *session,
         } else {
             down_read(&mdsc->snap_rwsem);
         }
+        spin_lock(&ci->i_ceph_lock);
         handle_cap_import(mdsc, inode, h, peer, session,
                           &cap, &extra_info.issued);
         handle_cap_grant(inode, session, cap,

From patchwork Mon Mar 23 16:07:06 2020
X-Patchwork-Submitter: Jeffrey Layton
X-Patchwork-Id: 11453395
From: Jeff Layton
To: ceph-devel@vger.kernel.org
Cc: ukernel@gmail.com, idryomov@gmail.com, sage@redhat.com
Subject: [PATCH 6/8] ceph: document what protects i_dirty_item and i_flushing_item
Date: Mon, 23 Mar 2020 12:07:06 -0400
Message-Id: <20200323160708.104152-7-jlayton@kernel.org>
In-Reply-To: <20200323160708.104152-1-jlayton@kernel.org>
References: <20200323160708.104152-1-jlayton@kernel.org>

Signed-off-by: Jeff Layton
---
 fs/ceph/super.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/ceph/super.h
b/fs/ceph/super.h index 47cfd8935b9c..bb372859c0ad 100644 --- a/fs/ceph/super.h +++ b/fs/ceph/super.h @@ -351,7 +351,9 @@ struct ceph_inode_info { struct rb_root i_caps; /* cap list */ struct ceph_cap *i_auth_cap; /* authoritative cap, if any */ unsigned i_dirty_caps, i_flushing_caps; /* mask of dirtied fields */ - struct list_head i_dirty_item, i_flushing_item; + struct list_head i_dirty_item, i_flushing_item; /* protected by + * mdsc->cap_dirty_lock + */ /* we need to track cap writeback on a per-cap-bit basis, to allow * overlapping, pipelined cap flushes to the mds. we can probably * reduce the tid to 8 bits if we're concerned about inode size. */ From patchwork Mon Mar 23 16:07:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeffrey Layton X-Patchwork-Id: 11453389 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CB4D81668 for ; Mon, 23 Mar 2020 16:07:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9F95520637 for ; Mon, 23 Mar 2020 16:07:16 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=default; t=1584979636; bh=5+j2drKZa+CQfEsajNfhjZNzFm4sp+fulTVSSukQduM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:List-ID:From; b=U2Dr2ZIlxY26CbDgojjQsxCDbxHGfooZXG19q+N5reQzcsPWg0u8nNQz8kxtpQunG uqlLmSLaFwwBJGtdJ0NkormB0xzcCA7/RxFpxTSfawkUnBz/sURqbZvePafwjnvCfa tNm8dGk4O78hhIxNnRieTxHrVXTTTF6lUdH/9r0I= Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727609AbgCWQHQ (ORCPT ); Mon, 23 Mar 2020 12:07:16 -0400 Received: from mail.kernel.org ([198.145.29.99]:49506 "EHLO mail.kernel.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727594AbgCWQHP (ORCPT ); Mon, 23 Mar 2020 12:07:15 -0400 Received: from tleilax.com 
From: Jeff Layton
To: ceph-devel@vger.kernel.org
Cc: ukernel@gmail.com, idryomov@gmail.com, sage@redhat.com
Subject: [PATCH 7/8] ceph: fix potential race in ceph_check_caps
Date: Mon, 23 Mar 2020 12:07:07 -0400
Message-Id: <20200323160708.104152-8-jlayton@kernel.org>
In-Reply-To: <20200323160708.104152-1-jlayton@kernel.org>
References: <20200323160708.104152-1-jlayton@kernel.org>

Nothing ensures that the session will still be valid by the time we
dereference the pointer. Take and put a reference.
Signed-off-by: Jeff Layton
---
 fs/ceph/caps.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 3eab905ba74b..061e52912991 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -2051,12 +2051,14 @@ void ceph_check_caps(struct ceph_inode_info *ci, int flags,
 			if (mutex_trylock(&session->s_mutex) == 0) {
 				dout("inverting session/ino locks on %p\n",
 				     session);
+				session = ceph_get_mds_session(session);
 				spin_unlock(&ci->i_ceph_lock);
 				if (took_snap_rwsem) {
 					up_read(&mdsc->snap_rwsem);
 					took_snap_rwsem = 0;
 				}
 				mutex_lock(&session->s_mutex);
+				ceph_put_mds_session(session);
 				goto retry;
 			}
 		}

From patchwork Mon Mar 23 16:07:08 2020
X-Patchwork-Submitter: Jeffrey Layton
X-Patchwork-Id: 11453391
From: Jeff Layton
To: ceph-devel@vger.kernel.org
Cc: ukernel@gmail.com, idryomov@gmail.com, sage@redhat.com
Subject: [PATCH 8/8] ceph: throw a warning if we destroy session with mutex still locked
Date: Mon, 23 Mar 2020 12:07:08 -0400
Message-Id: <20200323160708.104152-9-jlayton@kernel.org>
In-Reply-To: <20200323160708.104152-1-jlayton@kernel.org>
References: <20200323160708.104152-1-jlayton@kernel.org>

Signed-off-by: Jeff Layton
---
 fs/ceph/mds_client.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index acce04483471..9a8e7013aca1 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -659,6 +659,7 @@ void ceph_put_mds_session(struct ceph_mds_session *s)
 	if (refcount_dec_and_test(&s->s_ref)) {
 		if (s->s_auth.authorizer)
 			ceph_auth_destroy_authorizer(s->s_auth.authorizer);
+		WARN_ON(mutex_is_locked(&s->s_mutex));
 		xa_destroy(&s->s_delegated_inos);
 		kfree(s);
 	}