From patchwork Fri Feb 4 19:58:41 2022
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12735541
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH 64/75] mm/vmscan: Turn page_check_references() into folio_check_references()
Date: Fri, 4 Feb 2022 19:58:41 +0000
Message-Id: <20220204195852.1751729-65-willy@infradead.org>
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

This function only has one caller, and it already has a folio.  This
removes a number of calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5ceed53cb326..450dd9c3395f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1376,55 +1376,54 @@ enum page_references {
 	PAGEREF_ACTIVATE,
 };
 
-static enum page_references page_check_references(struct page *page,
+static enum page_references folio_check_references(struct folio *folio,
 						  struct scan_control *sc)
 {
-	struct folio *folio = page_folio(page);
-	int referenced_ptes, referenced_page;
+	int referenced_ptes, referenced_folio;
 	unsigned long vm_flags;
 
 	referenced_ptes = folio_referenced(folio, 1, sc->target_mem_cgroup,
 					   &vm_flags);
-	referenced_page = TestClearPageReferenced(page);
+	referenced_folio = folio_test_clear_referenced(folio);
 
 	/*
 	 * Mlock lost the isolation race with us.  Let try_to_unmap()
-	 * move the page to the unevictable list.
+	 * move the folio to the unevictable list.
 	 */
 	if (vm_flags & VM_LOCKED)
 		return PAGEREF_RECLAIM;
 
 	if (referenced_ptes) {
 		/*
-		 * All mapped pages start out with page table
+		 * All mapped folios start out with page table
 		 * references from the instantiating fault, so we need
-		 * to look twice if a mapped file page is used more
+		 * to look twice if a mapped file folio is used more
 		 * than once.
 		 *
 		 * Mark it and spare it for another trip around the
 		 * inactive list.  Another page table reference will
 		 * lead to its activation.
 		 *
-		 * Note: the mark is set for activated pages as well
-		 * so that recently deactivated but used pages are
+		 * Note: the mark is set for activated folios as well
+		 * so that recently deactivated but used folios are
 		 * quickly recovered.
 		 */
-		SetPageReferenced(page);
+		folio_set_referenced(folio);
 
-		if (referenced_page || referenced_ptes > 1)
+		if (referenced_folio || referenced_ptes > 1)
 			return PAGEREF_ACTIVATE;
 
 		/*
-		 * Activate file-backed executable pages after first usage.
+		 * Activate file-backed executable folios after first usage.
 		 */
-		if ((vm_flags & VM_EXEC) && !PageSwapBacked(page))
+		if ((vm_flags & VM_EXEC) && !folio_test_swapbacked(folio))
 			return PAGEREF_ACTIVATE;
 
 		return PAGEREF_KEEP;
 	}
 
-	/* Reclaim if clean, defer dirty pages to writeback */
-	if (referenced_page && !PageSwapBacked(page))
+	/* Reclaim if clean, defer dirty folios to writeback */
+	if (referenced_folio && !folio_test_swapbacked(folio))
 		return PAGEREF_RECLAIM_CLEAN;
 
 	return PAGEREF_RECLAIM;
@@ -1664,7 +1663,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 	}
 
 	if (!ignore_references)
-		references = folio_check_references(folio, sc);
+		references = folio_check_references(folio, sc);
 
 	switch (references) {
 	case PAGEREF_ACTIVATE: