From patchwork Sun Apr 22 15:06:05 2018
X-Patchwork-Submitter: "Darrick J. Wong"
X-Patchwork-Id: 10355323
Subject: [PATCH 02/13] xfs: refactor dquot cache handling
From: "Darrick J. Wong"
Wong" To: darrick.wong@oracle.com Cc: linux-xfs@vger.kernel.org, hch@lst.de Date: Sun, 22 Apr 2018 08:06:05 -0700 Message-ID: <152440956493.29601.14685471061467996971.stgit@magnolia> In-Reply-To: <152440954198.29601.14390250048915099607.stgit@magnolia> References: <152440954198.29601.14390250048915099607.stgit@magnolia> User-Agent: StGit/0.17.1-dirty MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=nai engine=5900 definitions=8871 signatures=668698 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=1 malwarescore=0 phishscore=0 bulkscore=0 spamscore=0 mlxscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1711220000 definitions=main-1804220171 Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Darrick J. Wong Delegate the dquot cache handling (radix tree lookup and insertion) to separate helper functions so that we can continue to simplify the body of dqget. Signed-off-by: Darrick J. Wong Reviewed-by: Brian Foster Reviewed-by: Christoph Hellwig --- fs/xfs/xfs_dquot.c | 112 ++++++++++++++++++++++++++++++++++++---------------- 1 file changed, 78 insertions(+), 34 deletions(-) -- To unsubscribe from this list: send the line "unsubscribe linux-xfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index 79a9df6..43b0b32 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -679,6 +679,81 @@ xfs_dq_get_next_id( } /* + * Look up the dquot in the in-core cache. If found, the dquot is returned + * locked and ready to go. + */ +static struct xfs_dquot * +xfs_qm_dqget_cache_lookup( + struct xfs_mount *mp, + struct xfs_quotainfo *qi, + struct radix_tree_root *tree, + xfs_dqid_t id) +{ + struct xfs_dquot *dqp; + +restart: + mutex_lock(&qi->qi_tree_lock); + dqp = radix_tree_lookup(tree, id); + if (!dqp) { + mutex_unlock(&qi->qi_tree_lock); + XFS_STATS_INC(mp, xs_qm_dqcachemisses); + return NULL; + } + + xfs_dqlock(dqp); + if (dqp->dq_flags & XFS_DQ_FREEING) { + xfs_dqunlock(dqp); + mutex_unlock(&qi->qi_tree_lock); + trace_xfs_dqget_freeing(dqp); + delay(1); + goto restart; + } + + dqp->q_nrefs++; + mutex_unlock(&qi->qi_tree_lock); + + trace_xfs_dqget_hit(dqp); + XFS_STATS_INC(mp, xs_qm_dqcachehits); + return dqp; +} + +/* + * Try to insert a new dquot into the in-core cache. If an error occurs the + * caller should throw away the dquot and start over. Otherwise, the dquot + * is returned locked (and held by the cache) as if there had been a cache + * hit. + */ +static int +xfs_qm_dqget_cache_insert( + struct xfs_mount *mp, + struct xfs_quotainfo *qi, + struct radix_tree_root *tree, + xfs_dqid_t id, + struct xfs_dquot *dqp) +{ + int error; + + mutex_lock(&qi->qi_tree_lock); + error = radix_tree_insert(tree, id, dqp); + if (unlikely(error)) { + /* Duplicate found! Caller must try again. */ + WARN_ON(error != -EEXIST); + mutex_unlock(&qi->qi_tree_lock); + trace_xfs_dqget_dup(dqp); + return error; + } + + /* Return a locked dquot to the caller, with a reference taken. */ + xfs_dqlock(dqp); + dqp->q_nrefs = 1; + + qi->qi_dquots++; + mutex_unlock(&qi->qi_tree_lock); + + return 0; +} + +/* * Given the file system, inode OR id, and type (UDQUOT/GDQUOT), return a * a locked dquot, doing an allocation (if requested) as needed. * When both an inode and an id are given, the inode's id takes precedence. 
@@ -716,28 +791,11 @@ xfs_qm_dqget(
 	}
 
 restart:
-	mutex_lock(&qi->qi_tree_lock);
-	dqp = radix_tree_lookup(tree, id);
+	dqp = xfs_qm_dqget_cache_lookup(mp, qi, tree, id);
 	if (dqp) {
-		xfs_dqlock(dqp);
-		if (dqp->dq_flags & XFS_DQ_FREEING) {
-			xfs_dqunlock(dqp);
-			mutex_unlock(&qi->qi_tree_lock);
-			trace_xfs_dqget_freeing(dqp);
-			delay(1);
-			goto restart;
-		}
-
-		dqp->q_nrefs++;
-		mutex_unlock(&qi->qi_tree_lock);
-
-		trace_xfs_dqget_hit(dqp);
-		XFS_STATS_INC(mp, xs_qm_dqcachehits);
 		*O_dqpp = dqp;
 		return 0;
 	}
-	mutex_unlock(&qi->qi_tree_lock);
-	XFS_STATS_INC(mp, xs_qm_dqcachemisses);
 
 	/*
 	 * Dquot cache miss. We don't want to keep the inode lock across
@@ -779,31 +837,17 @@ xfs_qm_dqget(
 		}
 	}
 
-	mutex_lock(&qi->qi_tree_lock);
-	error = radix_tree_insert(tree, id, dqp);
-	if (unlikely(error)) {
-		WARN_ON(error != -EEXIST);
-
+	error = xfs_qm_dqget_cache_insert(mp, qi, tree, id, dqp);
+	if (error) {
 		/*
 		 * Duplicate found. Just throw away the new dquot and start
 		 * over.
 		 */
-		mutex_unlock(&qi->qi_tree_lock);
-		trace_xfs_dqget_dup(dqp);
 		xfs_qm_dqdestroy(dqp);
 		XFS_STATS_INC(mp, xs_qm_dquot_dups);
 		goto restart;
 	}
 
-	/*
-	 * We return a locked dquot to the caller, with a reference taken
-	 */
-	xfs_dqlock(dqp);
-	dqp->q_nrefs = 1;
-
-	qi->qi_dquots++;
-	mutex_unlock(&qi->qi_tree_lock);
-
 dqret:
 	ASSERT((ip == NULL) || xfs_isilocked(ip, XFS_ILOCK_EXCL));
 	trace_xfs_dqget_miss(dqp);
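
(Not part of the patch; a review aid.) With the two helpers above, the
cache lookup/insert portion of xfs_qm_dqget() reduces to roughly the
following simplified sketch; the unchanged on-disk read/allocation code
between the two hunks is elided here:

restart:
	/* Cache hit: helper returns the dquot locked with a reference held. */
	dqp = xfs_qm_dqget_cache_lookup(mp, qi, tree, id);
	if (dqp) {
		*O_dqpp = dqp;
		return 0;
	}

	/*
	 * Cache miss: the dquot is read from disk here; that part of
	 * xfs_qm_dqget() is not touched by this patch.
	 */

	/* Lost a race with a concurrent insert: toss our copy and retry. */
	error = xfs_qm_dqget_cache_insert(mp, qi, tree, id, dqp);
	if (error) {
		xfs_qm_dqdestroy(dqp);
		XFS_STATS_INC(mp, xs_qm_dquot_dups);
		goto restart;
	}

Lock ordering is unchanged by the refactor: both helpers take
qi_tree_lock first and the dquot lock inside it, and the -EEXIST case
simply loops back to the cache lookup.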