From patchwork Wed May 2 22:26:35 2018
X-Patchwork-Submitter: Miklos Szeredi
X-Patchwork-Id: 10376737
From: Miklos Szeredi <mszeredi@redhat.com>
To: Al Viro
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, miklos@szeredi.hu
Subject: [PATCH] dcache: fix quadratic behavior with parallel shrinkers
Date: Thu, 3 May 2018 00:26:35 +0200
Message-Id: <20180502222635.1862-1-mszeredi@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

When multiple shrinkers operate on a directory containing many dentries,
shrinking takes much longer than when a single shrinker operates on the
directory.

Call the shrinker instances A and B, both shrinking DIR, which contains
NUM dentries.  Assume A wins the race for DIR's d_lock; it then goes on
to move all unlinked dentries to its dispose list.  When A is done, B
scans the directory once again, but finds that all dentries are already
being shrunk, so it ends up with an empty dispose list.  At this point
both A and B have found NUM dentries (data.found == NUM).

Now comes the interesting part: A proceeds to shrink its dispose list by
killing individual dentries and decrementing the refcount of their
parent (which is DIR).
NB: decrementing DIR's refcount blocks if DIR's d_lock is held.

B, on the other hand, shrinks a zero-size list and then immediately
restarts scanning the directory: it takes DIR's d_lock, scans the
remaining dentries and finds no dentry to dispose.

The result is that B performs the directory scan over and over again
while holding DIR's d_lock, and A, which is waiting for a chance to
decrement DIR's refcount, makes very slow progress because of this.  B
wastes time and holds up A's progress at the same time.

The proposed fix is to detect this situation in B (some dentries were
found, but all of them are already being shrunk) and to sleep for some
time before retrying the scan.  The sleep is proportional to the number
of found dentries.

Test script:

--- 8< --- 8< --- 8< --- 8< --- 8< ---
#!/bin/bash

TESTROOT=/var/tmp/test-root
SUBDIR=$TESTROOT/sub/dir

prepare()
{
	rm -rf $TESTROOT
	mkdir -p $SUBDIR
	# "test -e" on a nonexistent name creates a negative dentry, so
	# this populates the dcache without creating a single file.
	for (( i = 0; i < 1000; i++ )); do
		for (( j = 0; j < 1000; j++ )); do
			if test -e $SUBDIR/$i.$j; then
				echo "This should not happen!"
				exit 1
			fi
		done
		printf "%i (%s) ...\r" $((($i + 1) * $j)) \
			`grep dentry /proc/slabinfo | sed -e "s/dentry *\([0-9]*\).*/\1/"`
	done
}

prepare
printf "\nStarting shrinking\n"
time rmdir $TESTROOT 2> /dev/null

prepare
printf "\nStarting parallel shrinking\n"
time (rmdir $SUBDIR & rmdir $TESTROOT 2> /dev/null & wait)
--- 8< --- 8< --- 8< --- 8< --- 8< ---

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
---
 fs/dcache.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 60df712262c2..ff250f3843d7 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -30,6 +30,7 @@
 #include <linux/bit_spinlock.h>
 #include <linux/rculist_bl.h>
 #include <linux/list_lru.h>
+#include <linux/delay.h>
 #include "internal.h"
 #include "mount.h"
 
@@ -1479,9 +1480,15 @@ void shrink_dcache_parent(struct dentry *parent)
 			continue;
 		}
 
-		cond_resched();
 		if (!data.found)
 			break;
+
+		/*
+		 * Wait a bit for other shrinkers to process the found
+		 * dentries.  This formula gives about 100ns on average
+		 * per dentry for a large number of dentries.
+		 */
+		usleep_range(data.found / 15 + 1, data.found / 7 + 2);
 	}
 }
 EXPORT_SYMBOL(shrink_dcache_parent);