From patchwork Tue Mar 13 10:49:26 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10278185
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Subject: [PATCH 7/8] xfs: refactor xfs_log_force_lsn
Date: Tue, 13 Mar 2018 11:49:26 +0100
Message-Id: <20180313104927.12926-8-hch@lst.de>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20180313104927.12926-1-hch@lst.de>
References: <20180313104927.12926-1-hch@lst.de>
X-Mailing-List: linux-xfs@vger.kernel.org

Use the smallest possible loop as a preamble to find the correct iclog
buffer, and then use gotos for unwinding to straighten the code. Also
fix the top-of-function comment while we're at it.

Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_log.c | 142 ++++++++++++++++++++++++-------------------------------
 1 file changed, 62 insertions(+), 80 deletions(-)

diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
index a37a8defcd39..b6c6f227b2d7 100644
--- a/fs/xfs/xfs_log.c
+++ b/fs/xfs/xfs_log.c
@@ -3404,11 +3404,10 @@ xfs_log_force(
  * state and go to sleep or return.
  * If it is in any other state, go to sleep or return.
  *
- * Synchronous forces are implemented with a signal variable. All callers
- * to force a given lsn to disk will wait on a the sv attached to the
- * specific in-core log.  When given in-core log finally completes its
- * write to disk, that thread will wake up all threads waiting on the
- * sv.
+ * Synchronous forces are implemented with a wait queue. All callers to force a
+ * given lsn to disk will wait on a the queue attached to the specific in-core
+ * log. When given in-core log finally completes its write to disk, that thread
+ * will wake up all threads waiting on the queue.
  */
 int
 xfs_log_force_lsn(
@@ -3433,92 +3432,75 @@ xfs_log_force_lsn(
 
 try_again:
 	spin_lock(&log->l_icloglock);
 	iclog = log->l_iclog;
-	if (iclog->ic_state & XLOG_STATE_IOERROR) {
-		spin_unlock(&log->l_icloglock);
-		return -EIO;
-	}
+	if (iclog->ic_state & XLOG_STATE_IOERROR)
+		goto out_error;
 
-	do {
-		if (be64_to_cpu(iclog->ic_header.h_lsn) != lsn) {
-			iclog = iclog->ic_next;
-			continue;
-		}
+	while (be64_to_cpu(iclog->ic_header.h_lsn) != lsn) {
+		iclog = iclog->ic_next;
+		if (iclog == log->l_iclog)
+			goto out_unlock;
+	}
 
-		if (iclog->ic_state == XLOG_STATE_DIRTY) {
-			spin_unlock(&log->l_icloglock);
-			return 0;
-		}
+	if (iclog->ic_state == XLOG_STATE_DIRTY)
+		goto out_unlock;
 
-		if (iclog->ic_state == XLOG_STATE_ACTIVE) {
-			/*
-			 * We sleep here if we haven't already slept (e.g.
-			 * this is the first time we've looked at the correct
-			 * iclog buf) and the buffer before us is going to
-			 * be sync'ed. The reason for this is that if we
-			 * are doing sync transactions here, by waiting for
-			 * the previous I/O to complete, we can allow a few
-			 * more transactions into this iclog before we close
-			 * it down.
-			 *
-			 * Otherwise, we mark the buffer WANT_SYNC, and bump
-			 * up the refcnt so we can release the log (which
-			 * drops the ref count). The state switch keeps new
-			 * transaction commits from using this buffer. When
-			 * the current commits finish writing into the buffer,
-			 * the refcount will drop to zero and the buffer will
-			 * go out then.
-			 */
-			if (!already_slept &&
-			    (iclog->ic_prev->ic_state &
-			     (XLOG_STATE_WANT_SYNC | XLOG_STATE_SYNCING))) {
-				ASSERT(!(iclog->ic_state & XLOG_STATE_IOERROR));
+	if (iclog->ic_state == XLOG_STATE_ACTIVE) {
+		/*
+		 * We sleep here if we haven't already slept (e.g. this is the
+		 * first time we've looked at the correct iclog buf) and the
+		 * buffer before us is going to be sync'ed. The reason for this
+		 * is that if we are doing sync transactions here, by waiting
+		 * for the previous I/O to complete, we can allow a few more
+		 * transactions into this iclog before we close it down.
+		 *
+		 * Otherwise, we mark the buffer WANT_SYNC, and bump up the
+		 * refcnt so we can release the log (which drops the ref count).
+		 * The state switch keeps new transaction commits from using
+		 * this buffer. When the current commits finish writing into
+		 * the buffer, the refcount will drop to zero and the buffer
+		 * will go out then.
+		 */
+		if (!already_slept &&
+		    (iclog->ic_prev->ic_state &
+		     (XLOG_STATE_WANT_SYNC | XLOG_STATE_SYNCING))) {
+			ASSERT(!(iclog->ic_state & XLOG_STATE_IOERROR));
 
-				XFS_STATS_INC(mp, xs_log_force_sleep);
+			XFS_STATS_INC(mp, xs_log_force_sleep);
 
-				xlog_wait(&iclog->ic_prev->ic_write_wait,
-					  &log->l_icloglock);
-				already_slept = 1;
-				goto try_again;
-			}
-			atomic_inc(&iclog->ic_refcnt);
-			xlog_state_switch_iclogs(log, iclog, 0);
-			spin_unlock(&log->l_icloglock);
-			if (xlog_state_release_iclog(log, iclog))
-				return -EIO;
-			if (log_flushed)
-				*log_flushed = 1;
-			spin_lock(&log->l_icloglock);
+			xlog_wait(&iclog->ic_prev->ic_write_wait,
+				  &log->l_icloglock);
+			already_slept = 1;
+			goto try_again;
 		}
+		atomic_inc(&iclog->ic_refcnt);
+		xlog_state_switch_iclogs(log, iclog, 0);
+		spin_unlock(&log->l_icloglock);
+		if (xlog_state_release_iclog(log, iclog))
+			return -EIO;
+		if (log_flushed)
+			*log_flushed = 1;
+		spin_lock(&log->l_icloglock);
+	}
 
-		if ((flags & XFS_LOG_SYNC) && /* sleep */
-		    !(iclog->ic_state &
-		      (XLOG_STATE_ACTIVE | XLOG_STATE_DIRTY))) {
-			/*
-			 * Don't wait on completion if we know that we've
-			 * gotten a log write error.
-			 */
-			if (iclog->ic_state & XLOG_STATE_IOERROR) {
-				spin_unlock(&log->l_icloglock);
-				return -EIO;
-			}
-			XFS_STATS_INC(mp, xs_log_force_sleep);
-			xlog_wait(&iclog->ic_force_wait, &log->l_icloglock);
-			/*
-			 * No need to grab the log lock here since we're
-			 * only deciding whether or not to return EIO
-			 * and the memory read should be atomic.
-			 */
-			if (iclog->ic_state & XLOG_STATE_IOERROR)
-				return -EIO;
-		} else {	/* just return */
-			spin_unlock(&log->l_icloglock);
-		}
+	if (!(flags & XFS_LOG_SYNC) ||
+	    (iclog->ic_state & (XLOG_STATE_ACTIVE | XLOG_STATE_DIRTY)))
+		goto out_unlock;
 
-		return 0;
-	} while (iclog != log->l_iclog);
+	if (iclog->ic_state & XLOG_STATE_IOERROR)
+		goto out_error;
+
+	XFS_STATS_INC(mp, xs_log_force_sleep);
+	xlog_wait(&iclog->ic_force_wait, &log->l_icloglock);
+	if (iclog->ic_state & XLOG_STATE_IOERROR)
+		return -EIO;
+	return 0;
 
+out_unlock:
 	spin_unlock(&log->l_icloglock);
 	return 0;
+out_error:
+	spin_unlock(&log->l_icloglock);
+	return -EIO;
 }
 
 /*
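For readers outside the XFS code base, the unwinding shape this patch introduces (a minimal lookup loop as preamble, with every unlock funnelled through `out_unlock`/`out_error` labels) can be sketched in plain C. Everything below is a hypothetical stand-in for illustration — `struct iclog`, `force_lsn()`, and the pthread mutex are not the real kernel types or locking primitives.

```c
#include <assert.h>
#include <pthread.h>

/* Simplified stand-in for the circular ring of in-core log buffers. */
struct iclog {
	unsigned long long	lsn;		/* stand-in for ic_header.h_lsn */
	int			ioerror;	/* stand-in for XLOG_STATE_IOERROR */
	struct iclog		*next;		/* stand-in for ic_next */
};

static pthread_mutex_t icloglock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Same control flow as the refactored xfs_log_force_lsn(): take the lock,
 * run the smallest possible loop to find the matching buffer, then unwind
 * through out_unlock/out_error so the lock is dropped on exactly one
 * labelled path per outcome instead of at every early return.
 */
static int force_lsn(struct iclog *head, unsigned long long lsn)
{
	struct iclog *iclog = head;

	pthread_mutex_lock(&icloglock);
	if (iclog->ioerror)
		goto out_error;

	/* Preamble loop: walk the ring until we find the requested lsn. */
	while (iclog->lsn != lsn) {
		iclog = iclog->next;
		if (iclog == head)
			goto out_unlock;	/* lsn not in the ring */
	}

	if (iclog->ioerror)
		goto out_error;

	/* ... the actual "force the buffer to disk" work would go here ... */

out_unlock:
	pthread_mutex_unlock(&icloglock);
	return 0;
out_error:
	pthread_mutex_unlock(&icloglock);
	return -5;				/* -EIO */
}
```

The payoff, as in the patch, is that no success or error path can forget the unlock: the do/while body with per-branch `spin_unlock()`/`return` pairs becomes straight-line code with two exits.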