From patchwork Fri Apr 29 19:23:19 2022
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12832648
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH 11/21] vmscan: Move initialisation of mapping down
Date: Fri, 29 Apr 2022 20:23:19 +0100
Message-Id: <20220429192329.3034378-12-willy@infradead.org>
In-Reply-To: <20220429192329.3034378-1-willy@infradead.org>
References: <20220429192329.3034378-1-willy@infradead.org>

Now that we don't interrogate the BDI for congestion, we can delay
looking up the folio's mapping until we've got further through the
function, reducing register pressure and saving a call to folio_mapping
for folios we're adding to the swap cache.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0368ea3e9880..9ac2583ca5e5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1568,12 +1568,11 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			stat->nr_unqueued_dirty += nr_pages;
 
 		/*
-		 * Treat this page as congested if the underlying BDI is or if
+		 * Treat this page as congested if
 		 * pages are cycling through the LRU so quickly that the
 		 * pages marked for immediate reclaim are making it to the
 		 * end of the LRU a second time.
 		 */
-		mapping = page_mapping(page);
 		if (writeback && PageReclaim(page))
 			stat->nr_congested += nr_pages;
 
@@ -1725,9 +1724,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 				}
 
 				may_enter_fs = true;
-
-				/* Adding to swap updated mapping */
-				mapping = page_mapping(page);
 			}
 		} else if (PageSwapBacked(page) && PageTransHuge(page)) {
 			/* Split shmem THP */
@@ -1768,6 +1764,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			}
 		}
 
+		mapping = folio_mapping(folio);
 		if (folio_test_dirty(folio)) {
 			/*
 			 * Only kswapd can writeback filesystem folios
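
For anyone who wants to see the shape of the change outside the kernel
tree, below is a minimal standalone C sketch of the same transformation.
It is not kernel code and carries some assumptions: every name in it
(fake_folio, lookup_mapping, add_to_swap_cache, reclaim_one) is invented
for illustration, and it only models the control flow, not the locking
and reference counting that shrink_page_list() actually does.

#include <stdbool.h>
#include <stdio.h>

struct fake_folio {
	void *mapping;		/* stand-in for folio->mapping */
	bool anon;		/* anonymous folio headed for swap */
};

/* Stand-in for folio_mapping(); the call the patch avoids repeating. */
static void *lookup_mapping(struct fake_folio *folio)
{
	printf("lookup_mapping()\n");
	return folio->mapping;
}

/* Stand-in for add_to_swap(); adding to the swap cache changes the mapping. */
static void add_to_swap_cache(struct fake_folio *folio)
{
	static int swap_space;

	folio->mapping = &swap_space;
}

static void reclaim_one(struct fake_folio *folio)
{
	void *mapping;

	/*
	 * Before the patch the lookup sat up here, and the swap path had
	 * to redo it because the mapping changed underneath it:
	 *
	 *	mapping = lookup_mapping(folio);
	 *	if (folio->anon) {
	 *		add_to_swap_cache(folio);
	 *		mapping = lookup_mapping(folio);
	 *	}
	 */
	if (folio->anon)
		add_to_swap_cache(folio);

	/* After: one lookup, performed only once it is actually needed. */
	mapping = lookup_mapping(folio);
	printf("reclaiming via mapping %p\n", mapping);
}

int main(void)
{
	static int dummy_inode;
	struct fake_folio anon = { .mapping = NULL, .anon = true };
	struct fake_folio file = { .mapping = &dummy_inode, .anon = false };

	reclaim_one(&anon);	/* one lookup instead of two */
	reclaim_one(&file);
	return 0;
}

Built with a stock C compiler, the swap-bound case prints a single
lookup_mapping() line where the old placement would have produced two,
which is the saving the commit message describes.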