From patchwork Mon Jan 16 19:28:27 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 13103603
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: [PATCH 4/4] mm: Clean up mlock_page / munlock_page references in comments
Date: Mon, 16 Jan 2023 19:28:27 +0000
Message-Id: <20230116192827.2146732-5-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230116192827.2146732-1-willy@infradead.org>
References: <20230116192827.2146732-1-willy@infradead.org>
MIME-Version: 1.0

Change documentation and comments that refer to now-renamed functions.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/mm/unevictable-lru.rst | 30 +++++++++++++++-------------
 mm/memory-failure.c                  |  2 +-
 mm/swap.c                            |  4 ++--
 3 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 9afceabf26f7..0662254d8267 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -298,7 +298,7 @@ treated as a no-op and mlock_fixup() simply returns.
 If the VMA passes some filtering as described in "Filtering Special VMAs"
 below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
 off a subset of the VMA if the range does not cover the entire VMA. Any pages
-already present in the VMA are then marked as mlocked by mlock_page() via
+already present in the VMA are then marked as mlocked by mlock_folio() via
 mlock_pte_range() via walk_page_range() via mlock_vma_pages_range().
 
 Before returning from the system call, do_mlock() or mlockall() will call
@@ -373,20 +373,21 @@ Because of the VMA filtering discussed above, VM_LOCKED will not be set in any
 "special" VMAs. So, those VMAs will be ignored for munlock.
 
 If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off the
-specified range. All pages in the VMA are then munlocked by munlock_page() via
+specified range. All pages in the VMA are then munlocked by munlock_folio() via
 mlock_pte_range() via walk_page_range() via mlock_vma_pages_range() - the same
 function used when mlocking a VMA range, with new flags for the VMA indicating
 that it is munlock() being performed.
 
-munlock_page() uses the mlock pagevec to batch up work to be done under
-lru_lock by __munlock_page(). __munlock_page() decrements the page's
-mlock_count, and when that reaches 0 it clears PG_mlocked and clears
-PG_unevictable, moving the page from unevictable state to inactive LRU.
+munlock_folio() uses the mlock pagevec to batch up work to be done
+under lru_lock by __munlock_folio(). __munlock_folio() decrements the
+folio's mlock_count, and when that reaches 0 it clears the mlocked flag
+and clears the unevictable flag, moving the folio from unevictable state
+to the inactive LRU.
 
-But in practice that may not work ideally: the page may not yet have reached
+But in practice that may not work ideally: the folio may not yet have reached
 "the unevictable LRU", or it may have been temporarily isolated from it. In
 those cases its mlock_count field is unusable and must be assumed to be 0: so
-that the page will be rescued to an evictable LRU, then perhaps be mlocked
+that the folio will be rescued to an evictable LRU, then perhaps be mlocked
 again later if vmscan finds it in a VM_LOCKED VMA.
 
@@ -489,15 +490,16 @@ For each PTE (or PMD) being unmapped from a VMA, page_remove_rmap() calls
 munlock_vma_folio(), which calls munlock_folio() when the VMA is VM_LOCKED
 (unless it was a PTE mapping of a part of a transparent huge page).
 
-munlock_page() uses the mlock pagevec to batch up work to be done under
-lru_lock by __munlock_page(). __munlock_page() decrements the page's
-mlock_count, and when that reaches 0 it clears PG_mlocked and clears
-PG_unevictable, moving the page from unevictable state to inactive LRU.
+munlock_folio() uses the mlock pagevec to batch up work to be done
+under lru_lock by __munlock_folio(). __munlock_folio() decrements the
+folio's mlock_count, and when that reaches 0 it clears the mlocked flag
+and clears the unevictable flag, moving the folio from unevictable state
+to the inactive LRU.
 
-But in practice that may not work ideally: the page may not yet have reached
+But in practice that may not work ideally: the folio may not yet have reached
 "the unevictable LRU", or it may have been temporarily isolated from it. In
 those cases its mlock_count field is unusable and must be assumed to be 0: so
-that the page will be rescued to an evictable LRU, then perhaps be mlocked
+that the folio will be rescued to an evictable LRU, then perhaps be mlocked
 again later if vmscan finds it in a VM_LOCKED VMA.
 
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index ee8548a8b049..2dad72c1b281 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2167,7 +2167,7 @@ int memory_failure(unsigned long pfn, int flags)
 	}
 
 	/*
-	 * __munlock_pagevec may clear a writeback page's LRU flag without
+	 * __munlock_folio() may clear a writeback page's LRU flag without
 	 * page_lock. We need wait writeback completion for this page or it
 	 * may trigger vfs BUG while evict inode.
 	 */
diff --git a/mm/swap.c b/mm/swap.c
index 5e4f92700c16..2a51faa34e64 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -201,7 +201,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 	 * Is an smp_mb__after_atomic() still required here, before
 	 * folio_evictable() tests the mlocked flag, to rule out the possibility
 	 * of stranding an evictable folio on an unevictable LRU? I think
-	 * not, because __munlock_page() only clears the mlocked flag
+	 * not, because __munlock_folio() only clears the mlocked flag
 	 * while the LRU lock is held.
 	 *
 	 * (That is not true of __page_cache_release(), and not necessarily
@@ -216,7 +216,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 		folio_set_unevictable(folio);
 		/*
 		 * folio->mlock_count = !!folio_test_mlocked(folio)?
-		 * But that leaves __mlock_page() in doubt whether another
+		 * But that leaves __mlock_folio() in doubt whether another
 		 * actor has already counted the mlock or not. Err on the
 		 * safe side, underestimate, let page reclaim fix it, rather
 		 * than leaving a page on the unevictable LRU indefinitely.
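
The batching scheme the documentation hunks above describe can be
pictured with a short, standalone C sketch. To be clear, this is an
illustrative model only, not kernel code: the struct layout,
munlock_batch_flush(), on_unevictable_lru and BATCH_SIZE are invented
stand-ins (the real code lives in mm/mlock.c and takes lru_lock once
for the whole batch).

#include <stdbool.h>
#include <stdio.h>

#define BATCH_SIZE 15		/* stand-in for the kernel's pagevec size */

struct folio {
	int mlock_count;	 /* VM_LOCKED mappings counted so far */
	bool mlocked;		 /* stand-in for the mlocked page flag */
	bool unevictable;	 /* stand-in for the unevictable page flag */
	bool on_unevictable_lru; /* has the folio reached that LRU yet? */
};

static struct folio *batch[BATCH_SIZE];
static int batch_count;

/* The deferred work: what __munlock_folio() does under lru_lock. */
static void munlock_batch_flush(void)
{
	/* the real code takes lru_lock once here, for the whole batch */
	for (int i = 0; i < batch_count; i++) {
		struct folio *f = batch[i];

		/*
		 * A folio that never reached the unevictable LRU (or was
		 * temporarily isolated from it) has an unusable
		 * mlock_count: assume 0, rescuing it to an evictable LRU.
		 */
		if (!f->on_unevictable_lru)
			f->mlock_count = 0;
		else if (f->mlock_count > 0)
			f->mlock_count--;

		if (f->mlock_count == 0) {
			f->mlocked = false;
			f->unevictable = false;
			f->on_unevictable_lru = false;	/* to inactive LRU */
		}
	}
	batch_count = 0;
}

/* munlock_folio() itself only queues; the flush does the LRU work. */
static void munlock_folio(struct folio *folio)
{
	batch[batch_count++] = folio;
	if (batch_count == BATCH_SIZE)
		munlock_batch_flush();
}

int main(void)
{
	struct folio f = { .mlock_count = 1, .mlocked = true,
			   .unevictable = true, .on_unevictable_lru = true };

	munlock_folio(&f);
	munlock_batch_flush();	/* normally triggered by a full batch */
	printf("mlocked=%d unevictable=%d\n", f.mlocked, f.unevictable);
	return 0;
}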
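
The mm/swap.c comment about initializing mlock_count can be made
concrete the same way. Again a hedged sketch with invented stand-in
types, not the kernel's lru_add_fn(); only the final assignment mirrors
what the real function does after this comment:

#include <stdbool.h>

struct folio {
	bool mlocked;	 /* stand-in for folio_test_mlocked(folio) */
	int mlock_count; /* in the kernel this overlays the LRU list field */
};

/* Adding a folio to the unevictable LRU, cf. the lru_add_fn() comment. */
static void add_to_unevictable(struct folio *folio)
{
	/*
	 * Tempting: folio->mlock_count = folio->mlocked ? 1 : 0;
	 * But another actor may already have counted this mlock, and
	 * overcounting could leave the folio on the unevictable LRU
	 * indefinitely. Underestimate instead: start at 0 and let page
	 * reclaim re-mlock the folio if a VM_LOCKED VMA still maps it.
	 */
	folio->mlock_count = 0;
}

int main(void)
{
	struct folio f = { .mlocked = true };

	add_to_unevictable(&f);
	return f.mlock_count;	/* 0: the conservative initialization */
}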