From patchwork Tue Aug 28 17:19:40 2018
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 10578875
From: Waiman Long
To: Alexander Viro, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-doc@vger.kernel.org, "Luis R. Rodriguez",
    Kees Cook, Linus Torvalds, Jan Kara, "Paul E. McKenney",
    Andrew Morton, Ingo Molnar, Miklos Szeredi, Matthew Wilcox,
    Larry Woodman, James Bottomley, "Wangkai (Kevin C)", Michal Hocko,
    Waiman Long
Subject: [PATCH 2/2] fs/dcache: Make negative dentries easier to be reclaimed
Date: Tue, 28 Aug 2018 13:19:40 -0400
Message-Id: <1535476780-5773-3-git-send-email-longman@redhat.com>
In-Reply-To: <1535476780-5773-1-git-send-email-longman@redhat.com>
References: <1535476780-5773-1-git-send-email-longman@redhat.com>

Negative dentries that are accessed once and never used again should be
reclaimed before other dentries when the shrinker runs. This is done by
putting such negative dentries at the head of the LRU list instead of at
the tail.

A new DCACHE_NEW_NEGATIVE flag is set on a negative dentry when it is
initially created. When such a dentry is added to the LRU, it is added
to the head so that it will be the first to go when a shrinker runs if
it is never accessed again (its DCACHE_REFERENCED bit is not set). The
flag is cleared after the dentry has been added to the LRU list.

Suggested-by: Larry Woodman
Signed-off-by: Waiman Long
---
 fs/dcache.c              | 25 +++++++++++++++++--------
 include/linux/dcache.h   |  1 +
 include/linux/list_lru.h | 17 +++++++++++++++++
 mm/list_lru.c            | 16 ++++++++++++++--
 4 files changed, 49 insertions(+), 10 deletions(-)

diff --git a/fs/dcache.c b/fs/dcache.c
index 69f5541..ab6a4cf 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -242,12 +242,6 @@ static inline void __neg_dentry_inc(struct dentry *dentry)
 	this_cpu_inc(nr_dentry_neg);
 }
 
-static inline void neg_dentry_inc(struct dentry *dentry)
-{
-	if (unlikely(d_is_negative(dentry)))
-		__neg_dentry_inc(dentry);
-}
-
 static inline int dentry_cmp(const struct dentry *dentry, const unsigned char *ct, unsigned tcount)
 {
 	/*
@@ -353,7 +347,7 @@ static inline void __d_set_inode_and_type(struct dentry *dentry,
 
 	dentry->d_inode = inode;
 	flags = READ_ONCE(dentry->d_flags);
-	flags &= ~(DCACHE_ENTRY_TYPE | DCACHE_FALLTHRU);
+	flags &= ~(DCACHE_ENTRY_TYPE | DCACHE_FALLTHRU | DCACHE_NEW_NEGATIVE);
 	flags |= type_flags;
 	WRITE_ONCE(dentry->d_flags, flags);
 }
@@ -430,8 +424,20 @@ static void d_lru_add(struct dentry *dentry)
 	D_FLAG_VERIFY(dentry, 0);
 	dentry->d_flags |= DCACHE_LRU_LIST;
 	this_cpu_inc(nr_dentry_unused);
+	if (d_is_negative(dentry)) {
+		__neg_dentry_inc(dentry);
+		if (dentry->d_flags & DCACHE_NEW_NEGATIVE) {
+			/*
+			 * Add the negative dentry to the head once, it
+			 * will be added to the tail next time.
+			 */
+			WARN_ON_ONCE(!list_lru_add_head(
+				&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
+			dentry->d_flags &= ~DCACHE_NEW_NEGATIVE;
+			return;
+		}
+	}
 	WARN_ON_ONCE(!list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
-	neg_dentry_inc(dentry);
 }
 
 static void d_lru_del(struct dentry *dentry)
@@ -2620,6 +2626,9 @@ static inline void __d_add(struct dentry *dentry, struct inode *inode)
 		__d_set_inode_and_type(dentry, inode, add_flags);
 		raw_write_seqcount_end(&dentry->d_seq);
 		fsnotify_update_flags(dentry);
+	} else {
+		/* It is a negative dentry, add it to LRU head initially. */
+		dentry->d_flags |= DCACHE_NEW_NEGATIVE;
 	}
 	__d_rehash(dentry);
 	if (dir)
diff --git a/include/linux/dcache.h b/include/linux/dcache.h
index df942e5..03a1918 100644
--- a/include/linux/dcache.h
+++ b/include/linux/dcache.h
@@ -214,6 +214,7 @@ struct dentry_operations {
 #define DCACHE_FALLTHRU			0x01000000 /* Fall through to lower layer */
 #define DCACHE_ENCRYPTED_WITH_KEY	0x02000000 /* dir is encrypted with a valid key */
 #define DCACHE_OP_REAL			0x04000000
+#define DCACHE_NEW_NEGATIVE		0x08000000 /* New negative dentry */
 
 #define DCACHE_PAR_LOOKUP		0x10000000 /* being looked up (with parent locked shared) */
 #define DCACHE_DENTRY_CURSOR		0x20000000
diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index aa5efd9..bfac057 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -90,6 +90,23 @@ int __list_lru_init(struct list_lru *lru, bool memcg_aware,
 bool list_lru_add(struct list_lru *lru, struct list_head *item);
 
 /**
+ * list_lru_add_head: add an element to the lru list's head
+ * @list_lru: the lru pointer
+ * @item: the item to be added.
+ *
+ * This is similar to list_lru_add(). The only difference is the location
+ * where the new item will be added. The list_lru_add() function will add
+ * the new item to the tail as it is the most recently used one. The
+ * list_lru_add_head() will add the new item into the head so that it
+ * will be the first to go if a shrinker is running. So this function should
+ * only be used for less important items that can be the first to go if
+ * the system is under memory pressure.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool list_lru_add_head(struct list_lru *lru, struct list_head *item);
+
+/**
  * list_lru_del: delete an element to the lru list
  * @list_lru: the lru pointer
  * @item: the item to be deleted.
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 5b30625..133f41c 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -124,7 +124,8 @@ static inline bool list_lru_memcg_aware(struct list_lru *lru)
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
-bool list_lru_add(struct list_lru *lru, struct list_head *item)
+static inline bool __list_lru_add(struct list_lru *lru, struct list_head *item,
+				  const bool add_tail)
 {
 	int nid = page_to_nid(virt_to_page(item));
 	struct list_lru_node *nlru = &lru->node[nid];
@@ -134,7 +135,7 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
 	spin_lock(&nlru->lock);
 	if (list_empty(item)) {
 		l = list_lru_from_kmem(nlru, item, &memcg);
-		list_add_tail(item, &l->list);
+		(add_tail ? list_add_tail : list_add)(item, &l->list);
 		/* Set shrinker bit if the first element was added */
 		if (!l->nr_items++)
 			memcg_set_shrinker_bit(memcg, nid,
@@ -146,8 +147,19 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
 	spin_unlock(&nlru->lock);
 	return false;
 }
+
+bool list_lru_add(struct list_lru *lru, struct list_head *item)
+{
+	return __list_lru_add(lru, item, true);
+}
 EXPORT_SYMBOL_GPL(list_lru_add);
 
+bool list_lru_add_head(struct list_lru *lru, struct list_head *item)
+{
+	return __list_lru_add(lru, item, false);
+}
+EXPORT_SYMBOL_GPL(list_lru_add_head);
+
 bool list_lru_del(struct list_lru *lru, struct list_head *item)
 {
 	int nid = page_to_nid(virt_to_page(item));
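
For readers unfamiliar with the list_lru ordering, here is a minimal,
self-contained userspace sketch (not kernel code) of the idea behind
list_lru_add_head(): the shrinker isolates entries starting from the head
of the per-node list, so an item inserted at the head is the first reclaim
candidate, while the normal list_lru_add() path inserts at the tail and
makes it the last. The list helpers and the lru_entry type below are
simplified stand-ins written only for this illustration; they are not the
kernel's implementations.

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's struct list_head helpers. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void __list_insert(struct list_head *n, struct list_head *prev,
			  struct list_head *next)
{
	next->prev = n;
	n->next = next;
	n->prev = prev;
	prev->next = n;
}

/* Insert right after the head, i.e. at the front of the LRU. */
static void list_add(struct list_head *n, struct list_head *h)
{
	__list_insert(n, h, h->next);
}

/* Insert right before the head, i.e. at the tail of the LRU. */
static void list_add_tail(struct list_head *n, struct list_head *h)
{
	__list_insert(n, h->prev, h);
}

/* Illustrative stand-in for a dentry linked on the superblock LRU. */
struct lru_entry {
	struct list_head lru;
	const char *name;
};

#define entry_of(ptr) \
	((struct lru_entry *)((char *)(ptr) - offsetof(struct lru_entry, lru)))

int main(void)
{
	struct list_head lru;
	struct lru_entry a = { .name = "referenced dentry 1" };
	struct lru_entry b = { .name = "referenced dentry 2" };
	struct lru_entry neg = { .name = "new negative dentry" };
	struct list_head *p;

	INIT_LIST_HEAD(&lru);
	list_add_tail(&a.lru, &lru);	/* normal case, like list_lru_add() */
	list_add_tail(&b.lru, &lru);
	list_add(&neg.lru, &lru);	/* new negative, like list_lru_add_head() */

	/* The shrinker scans from the head, so the head entry goes first. */
	for (p = lru.next; p != &lru; p = p->next)
		printf("reclaim order: %s\n", entry_of(p)->name);
	return 0;
}

Built with any C compiler, the walk prints the new negative entry first,
mirroring how a freshly created, never re-referenced negative dentry would
be the first victim of the dentry shrinker under this patch.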