From patchwork Wed Jul 6 23:59:18 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12908924
Date: Wed, 6 Jul 2022 16:59:18 -0700
Message-Id: <20220706235936.2197195-1-zokeefe@google.com>
Subject: [mm-unstable v7 00/18] mm: userspace hugepage collapse
From: "Zach O'Keefe"
To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox,
  Michal Hocko, Pasha Tatashin, Peter Xu, Rongwei Wang, SeongJae Park,
  Song Liu, Vlastimil Babka, Yang Shi, Zi Yan, linux-mm@kvack.org
Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
  Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins,
  Ivan Kokshaysky, "James E.J. Bottomley", Jens Axboe,
  "Kirill A. Shutemov", Matt Turner, Max Filippov, Miaohe Lin,
  Minchan Kim, Patrick Xia, Pavel Begunkov, Thomas Bogendoerfer,
  "Zach O'Keefe"

v7 Forward
--------------------------------

The major changes to v7 over v6[1] are:

1. mm_find_pmd() refactoring has been extended, and now returns the raw
   pmd_t* without additional checks (which was its original behavior). For
   MADV_COLLAPSE, we've tightened up our use of it and now check if we've
   raced with khugepaged when collapsing (Yang Shi).
2. errno return values have been changed, and now deviate from madvise
   convention in some places. Most notably, this is to allow ENOMEM to mean
   "memory allocation failed" to the user - the most important being THP
   allocation failure.
3. We no longer do lru_add_drain() + lru_add_drain_all() if we fail
   collapse because pages aren't found on the LRU. This has been
   simplified, and we just do a lru_add_drain_all() upfront (Yang Shi).
4. struct collapse_control has been further simplified, and all flags
   controlling collapse behavior are now squashed into a single
   .is_hugepaged flag. We also now kmalloc() this structure in
   MADV_COLLAPSE context.
5. Rebased on top of Yang Shi's "Cleanup transhuge_xxx helpers" series [2]
   as well as Miaohe Lin's "A few cleanup patches for khugepaged" series
   [3], which caused some refactoring and allowed for some nice
   simplifications - most notably the VMA (re)validation checks.
6. A new /proc/<pid>/smaps field, PMDMappable, has been added to inform
   userspace which VMAs are eligible for MADV_COLLAPSE.
7. A tracepoint was added to assist with MADV_COLLAPSE debugging.
8. selftest coverage is tightened up and now covers collapsing multiple
   hugepage-sized regions.

See the Changelog for more details.

v6 Forward
--------------------------------

v6 improves on v5[4] in 3 major ways:

1. Changed MADV_COLLAPSE eligibility semantics. In v5, MADV_COLLAPSE
   ignored khugepaged max_ptes_* sysfs settings, as well as all sysfs
   defrag settings. v6 takes this further by also decoupling MADV_COLLAPSE
   from the sysfs "enabled" setting. MADV_COLLAPSE can now initiate a
   collapse of memory into THPs in "madvise" and "never" mode, and doesn't
   ever require VM_HUGEPAGE. MADV_COLLAPSE retains its adherence to not
   operating on VM_NOHUGEPAGE-marked VMAs.
2. Thanks to a patch by Yang Shi to remove UMA hugepage preallocation,
   hugepage allocation in khugepaged is independent of CONFIG_NUMA. This
   allows us to reuse all the allocation codepaths between collapse
   contexts, greatly simplifying struct collapse_control. Redundant
   khugepaged heuristic flags have also been merged into a new
   enforce_page_heuristics flag.
3. Using MADV_COLLAPSE's new eligibility semantics, the hacks in the
   selftests to disable khugepaged are no longer necessary, since we can
   test MADV_COLLAPSE in "never" THP mode to prevent khugepaged
   interaction.

Introduction
--------------------------------

This series provides a mechanism for userspace to induce a collapse of
eligible ranges of memory into transparent hugepages in process context,
thus permitting users to more tightly control their own hugepage
utilization policy at their own expense. This idea was introduced by David
Rientjes[5].

Interface
--------------------------------

The proposed interface adds a new madvise(2) mode, MADV_COLLAPSE, and
leverages the new process_madvise(2) call.

process_madvise(2)

Performs a synchronous collapse of the native pages mapped by the list of
iovecs into transparent hugepages. This operation is independent of the
system THP sysfs settings, but attempts to collapse VMAs marked
VM_NOHUGEPAGE will still fail. THP allocation may enter direct reclaim
and/or compaction. When a range spans multiple VMAs, the collapse semantics
over each VMA are independent of the others. The caller must have
CAP_SYS_ADMIN if not acting on self. The return value follows existing
process_madvise(2) conventions. A "success" indicates that all
hugepage-sized/aligned regions covered by the provided range were either
successfully collapsed, or were already pmd-mapped THPs.

madvise(2)

Equivalent to process_madvise(2) on self, with 0 returned on "success".
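To make the proposed calling convention concrete, below is a minimal,
hypothetical userspace sketch (not part of the series): it maps an
anonymous region, faults it in with native pages, and requests a
synchronous collapse of one PMD-sized region. The MADV_COLLAPSE value is
the one proposed in this series' uapi patch, and the 2MiB hugepage / 4KiB
page sizes assume x86-64 defaults; adjust for your headers and
architecture.

/*
 * Hypothetical MADV_COLLAPSE usage sketch (not part of the series).
 * Assumes the series is applied; MADV_COLLAPSE's value is the one
 * proposed in the uapi patch.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25		/* from this series' uapi patch */
#endif

#define HPAGE_SIZE	(2UL << 20)	/* assumes 2MiB PMD-sized THP */

int main(void)
{
	/* Over-map so one fully hugepage-aligned region can be carved out. */
	size_t maplen = 2 * HPAGE_SIZE;
	char *raw = mmap(NULL, maplen, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	char *hpage = (char *)(((uintptr_t)raw + HPAGE_SIZE - 1) &
			       ~(HPAGE_SIZE - 1));

	/* Fault in every native page so there is something to collapse. */
	for (size_t off = 0; off < HPAGE_SIZE; off += 4096)
		hpage[off] = 1;

	/*
	 * Synchronously collapse the region; no sysfs or VM_HUGEPAGE opt-in
	 * is required. On failure, errno says why (e.g. ENOMEM if THP
	 * allocation failed, per the errno changes in v7).
	 */
	if (madvise(hpage, HPAGE_SIZE, MADV_COLLAPSE))
		fprintf(stderr, "MADV_COLLAPSE: %s\n", strerror(errno));

	munmap(raw, maplen);
	return 0;
}

The same ranges could be collapsed in another process via
process_madvise(2), passing a pidfd and an iovec array describing the
hugepage-aligned regions; CAP_SYS_ADMIN is required when not acting on
self.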
Current Use-Cases
--------------------------------

(1) Immediately back executable text by THPs. Current support provided by
    CONFIG_READ_ONLY_THP_FOR_FS may take a long time on a large system,
    which might impair services from serving at their full rated load
    after (re)starting. Tricks like mremap(2)'ing text onto anonymous
    memory to immediately realize iTLB performance prevent page sharing
    and demand paging, both of which increase steady state memory
    footprint. With MADV_COLLAPSE, we get the best of both worlds: peak
    upfront performance and lower RAM footprints. Note that subsequent
    support for file-backed memory is required here.

(2) malloc() implementations that manage memory in hugepage-sized chunks,
    but sometimes subrelease memory back to the system in native-sized
    chunks via MADV_DONTNEED; zapping the pmd. Later, when the memory is
    hot, the implementation could madvise(MADV_COLLAPSE) to re-back the
    memory by THPs to regain hugepage coverage and dTLB performance (see
    the allocator-style sketch after the patch list below). TCMalloc is
    such an implementation that could benefit from this[6]. A prior study
    of Google internal workloads during evaluation of Temeraire, a
    hugepage-aware enhancement to TCMalloc, showed that nearly 20% of all
    cpu cycles were spent in dTLB stalls, and that increasing hugepage
    coverage by even a small amount can help with that[7].

(3) userfaultfd-based live migration of virtual machines satisfies UFFD
    faults by fetching native-sized pages over the network (to avoid the
    latency of transferring an entire hugepage). However, after guest
    memory has been fully copied to the new host, MADV_COLLAPSE can be
    used to immediately increase guest performance. Note that subsequent
    support for file/shmem-backed memory is required here.

(4) HugeTLB high-granularity mapping allows a HugeTLB page to be mapped at
    different levels in the page tables[8]. As it's not "transparent" like
    THP, HugeTLB high-granularity mappings require an explicit user API.
    It is intended that MADV_COLLAPSE be co-opted for this use case[9].
    Note that subsequent support for HugeTLB memory is required here.

Future work
--------------------------------

Only private anonymous memory is supported by this series. File and shmem
memory support will be added later.

One possible user of this functionality is a userspace agent that attempts
to optimize THP utilization system-wide by allocating THPs based on, for
example, task priority, task performance requirements, or heatmaps. For
the latter, one idea that has already surfaced is using DAMON to identify
hot regions, and driving THP collapse through a new DAMOS_COLLAPSE
scheme[10].

Sequence of Patches
--------------------------------

* Patch 1 is a cleanup patch.
* Patch 2 (Yang Shi) removes UMA hugepage preallocation and makes
  khugepaged hugepage allocation independent of CONFIG_NUMA.
* Patches 3-8 perform refactoring of collapse logic within khugepaged.c
  and introduce the notion of a collapse context.
* Patch 9 introduces MADV_COLLAPSE and is the main patch in this series.
* Patches 10-13 add additional support: tracepoints, clean-ups,
  process_madvise(2), and /proc/<pid>/smaps output.
* Patches 14-18 add selftests.
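As referenced from use-case (2) above, here is a rough, hypothetical
allocator-style sketch (not TCMalloc's actual code) of the
MADV_DONTNEED/MADV_COLLAPSE cycle: one native-sized chunk of a
hugepage-backed span is subreleased, and the whole span is later
re-collapsed once it is hot again. The MADV_COLLAPSE value, 2MiB span size,
4KiB chunk size, and chunk offset are all illustrative assumptions.

/*
 * Hypothetical sketch of use-case (2); names, sizes, and offsets are
 * illustrative, not taken from any real allocator.
 */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25		/* from this series' uapi patch */
#endif

#define SPAN_SIZE	(2UL << 20)	/* assumes 2MiB PMD-sized THP */
#define CHUNK_SIZE	4096UL		/* assumes 4KiB native pages */

int main(void)
{
	/* Over-map so a PMD-aligned "span" can be carved out of the mapping. */
	char *raw = mmap(NULL, 2 * SPAN_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	char *span = (char *)(((uintptr_t)raw + SPAN_SIZE - 1) &
			      ~(SPAN_SIZE - 1));

	/* The allocator hands out chunks from the span; fault them in. */
	for (size_t off = 0; off < SPAN_SIZE; off += CHUNK_SIZE)
		span[off] = 1;

	/* "Subrelease": return one cold chunk to the kernel, zapping the pmd. */
	if (madvise(span + 16 * CHUNK_SIZE, CHUNK_SIZE, MADV_DONTNEED))
		perror("MADV_DONTNEED");

	/* Later, the span is hot again: the chunk is reused... */
	span[16 * CHUNK_SIZE] = 1;

	/* ...and the whole span is re-backed by a THP for dTLB performance. */
	if (madvise(span, SPAN_SIZE, MADV_COLLAPSE))
		perror("MADV_COLLAPSE");

	munmap(raw, 2 * SPAN_SIZE);
	return 0;
}

In a real allocator these calls would sit on the subrelease and re-commit
paths rather than in main(), and collapses could be batched or deferred
based on span hotness.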
Applies against mm-unstable.

Changelog
--------------------------------

v6 -> v7:
* Added 'mm/khugepaged: remove redundant transhuge_vma_suitable() check'
* 'mm: khugepaged: don't carry huge page to the next loop for !CONFIG_NUMA'
  -> Open-coded khugepaged_alloc_sleep() logic (Peter Xu)
* 'mm/khugepaged: pipe enum scan_result codes back to callers'
  -> Refactored __collapse_huge_page_swapin() to return enum scan_result
  -> A few small cleanups (Yang Shi)
* 'mm/khugepaged: add flag to predicate khugepaged-only behavior'
  -> Renamed from 'mm/khugepaged: add flag to ignore khugepaged heuristics'
  -> The flag is now ".is_hugepaged" (Peter Xu)
* 'mm/khugepaged: add flag to ignore THP sysfs enabled'
  -> Refactored to pass flag to hugepage_vma_check(), and to reuse
     .is_khugepaged flag (Peter Xu)
* 'mm/khugepaged: make allocation semantics context-specific'
  -> !CONFIG_SHMEM bugfix and minor changes (Yang Shi)
  -> Squashed into 'mm/madvise: introduce MADV_COLLAPSE sync hugepage
     collapse'
  -> Removed .gfp member of struct collapse_control. Instead, use the
     .is_khugepaged member to decide what gfp flags to use.
* 'mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP'
  -> Replaced multiple mm_find_pmd() callsites with
     find_pmd_or_thp_or_none() to make sure khugepaged doesn't collapse
     out from under us (Yang Shi)
  -> Added check_pmd_still_valid() helper
  -> Return SCAN_PMD_NULL if pmd_bad() (Yang Shi)
  -> Renamed mm_find_pmd() -> mm_find_pte_pmd()
  -> Renamed mm_find_pmd_raw() -> mm_find_pmd()
  -> Add mm_find_pmd() to split_huge_pmd_address()
* 'mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse'
  -> Replace SCAN_PAGE_LRU + lru_add_drain_all() retry logic with single
     lru_add_drain_all() upfront.
  -> errno mapping changes. Most notably, use ENOMEM when memory
     allocation (most notably, THP allocation) fails.
  -> When !THP, madvise_collapse() and hugepage_madvise() return -EINVAL
     instead of BUG(). (Yang Shi)
* 'tools headers uapi: add MADV_COLLAPSE madvise mode to tools'
  -> Squashed into 'mm/madvise: introduce MADV_COLLAPSE sync hugepage
     collapse' (Yang Shi)
* 'mm/khugepaged: rename prefix of shared collapse functions'
  -> Revert change to huge_memory:mm_khugepaged_scan_pmd tracepoint to
     retain ABI. (Yang Shi)
* Added 'mm/madvise: add huge_memory:mm_madvise_collapse tracepoint'
* Added 'proc/smaps: add PMDMappable field to smaps'
* Added 'selftests/vm: dedup hugepage allocation logic'
* Added 'selftests/vm: add selftest to verify multi THP collapse'
* Collected review tags
* Rebased on ??

v5 -> v6:
* Added 'mm: khugepaged: don't carry huge page to the next loop for
  !CONFIG_NUMA' (Yang Shi)
* 'mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP'
  -> Add a pmd_bad() check for nonhuge pmds (Peter Xu)
* 'mm/khugepaged: dedup and simplify hugepage alloc and charging'
  -> Remove dependency on 'mm/khugepaged: sched to numa node when collapse
     huge page'
  -> No more !NUMA casing
* 'mm/khugepaged: make allocation semantics context-specific'
  -> Renamed from 'mm/khugepaged: make hugepage allocation
     context-specific'
  -> Removed function pointer hooks. (David Rientjes)
  -> Added gfp_t member to control allocation semantics.
* 'mm/khugepaged: add flag to ignore khugepaged heuristics'
  -> Squashed from 'mm/khugepaged: add flag to ignore khugepaged_max_ptes_*'
     and 'mm/khugepaged: add flag to ignore page young/referenced
     requirement'. (David Rientjes)
* Added 'mm/khugepaged: add flag to ignore THP sysfs enabled'
* 'mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse'
  -> Use hugepage_vma_check() instead of transparent_hugepage_active() to
     determine vma eligibility.
  -> Only retry collapse once per hugepage if pages aren't found on LRU
  -> Save last failed result for more accurate errno
  -> Refactored loop structure
  -> Renamed labels
* 'selftests/vm: modularize collapse selftests'
  -> Refactored into straightline code and removed loop over contexts.
* 'selftests/vm: add MADV_COLLAPSE collapse context to selftests'
  -> Removed ->init() and ->cleanup() hooks from struct collapse_context
     (David Rientjes)
  -> MADV_COLLAPSE operates in "never" THP mode to prevent khugepaged
     interaction. Removed all the previous khugepaged hacks.
* Added 'tools headers uapi: add MADV_COLLAPSE madvise mode to tools'
* Rebased on next-20220603

v4 -> v5:
* Fix kernel test robot errors
* 'mm/khugepaged: make hugepage allocation context-specific'
  -> Fix khugepaged_alloc_page() UMA definition
* 'mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse'
  -> Add "fallthrough" pseudo keyword to fix -Wimplicit-fallthrough

v3 -> v4:
* 'mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP'
  -> Dropped pmd_none() check from find_pmd_or_thp_or_none()
  -> Moved SCAN_PMD_MAPPED after SCAN_PMD_NULL
  -> Dropped from sign-offs
* 'mm/khugepaged: add struct collapse_control'
  -> Updated commit description and some code comments
  -> Removed extra brackets added in khugepaged_find_target_node()
* Added 'mm/khugepaged: dedup hugepage allocation and charging code'
* 'mm/khugepaged: make hugepage allocation context-specific'
  -> Has been majorly reworked to replace the ->gfp() and ->alloc_hpage()
     struct collapse_control hooks with a ->alloc_charge_hpage() hook
     which makes node-allocation, gfp flags, node scheduling, hpage
     allocation, and accounting/charging context-specific.
  -> Dropped from sign-offs
* Added 'mm/khugepaged: pipe enum scan_result codes back to callers'
  -> Replaces 'mm/khugepaged: add struct collapse_result'
* Dropped 'mm/khugepaged: add struct collapse_result'
* 'mm/khugepaged: add flag to ignore khugepaged_max_ptes_*'
  -> Moved before 'mm/madvise: introduce MADV_COLLAPSE sync hugepage
     collapse'
* 'mm/khugepaged: add flag to ignore page young/referenced requirement'
  -> Moved before 'mm/madvise: introduce MADV_COLLAPSE sync hugepage
     collapse'
* 'mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse'
  -> Moved struct collapse_control* argument to end of alloc_hpage()
  -> Some refactoring to rebase on top of the struct collapse_control hook
     changes and other previous commits.
  -> Reworded commit description
  -> Dropped from sign-offs
* 'mm/khugepaged: rename prefix of shared collapse functions'
  -> Renamed from 'mm/khugepaged: remove khugepaged prefix from shared
     collapse functions'
  -> Instead of dropping the "khugepaged_" prefix, replace it with
     "hpage_collapse_"
  -> Dropped from sign-offs
* Rebased onto next-20220502

v2 -> v3:
* Collapse semantics have changed: the gfp flags used for hugepage
  allocation are now independent of khugepaged.
* Cover-letter: add primary use-cases and update description of collapse
  semantics.
* 'mm/khugepaged: make hugepage allocation context-specific'
  -> Added .gfp operation to struct collapse_control
* 'mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse'
  -> Added madvise context .gfp implementation.
  -> Set scan_result appropriately on early exit due to mm exit or vma
     revalidation.
  -> Reword patch description
* Rebased onto next-20220426

v1 -> v2:
* Cover-letter clarification and added RFC -> v1 notes
* Fixes issues reported by kernel test robot
* 'mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP'
  -> Fixed mixed code/declarations
* 'mm/khugepaged: make hugepage allocation context-specific'
  -> Fixed bad function signature in !NUMA && TRANSPARENT_HUGEPAGE configs
  -> Added doc comment to retract_page_tables() for "cc"
* 'mm/khugepaged: add struct collapse_result'
  -> Added doc comment to retract_page_tables() for "cr"
* 'mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse'
  -> Added MADV_COLLAPSE definitions for alpha, mips, parisc, xtensa
  -> Moved an "#ifdef NUMA" so that khugepaged_find_target_node() is
     defined in !NUMA && TRANSPARENT_HUGEPAGE configs.
* 'mm/khugepaged: remove khugepaged prefix from shared collapse functions'
  -> Removed khugepaged prefix from khugepaged_find_target_node on L914
* Rebased onto next-20220414

RFC -> v1:
* The series was significantly reworked from RFC and most patches are
  entirely new or reworked.
* Collapse eligibility criteria have changed: MADV_COLLAPSE now respects
  VM_NOHUGEPAGE.
* Collapse semantics have changed: the gfp flags used for hugepage
  allocation now match those of khugepaged for the same VMA, instead of
  the gfp flags used at-fault for the calling process for the VMA.
* Collapse semantics have changed: the collapse semantics for multiple
  VMAs spanning a single MADV_COLLAPSE call are now independent, whereas
  before the idea was to allow direct reclaim/compaction if any spanned
  VMA permitted it.
* The process_madvise(2) flags, MADV_F_COLLAPSE_LIMITS and
  MADV_F_COLLAPSE_DEFRAG, have been removed.
* Implementation change: the RFC implemented collapse over a range of
  hugepages in a batched fashion with the aim of doing multiple page table
  updates inside a single mmap_lock write. This has been changed, and the
  implementation now collapses each hugepage-aligned/sized region
  iteratively. This was motivated by an experiment which showed that, when
  multiple threads were concurrently faulting during a MADV_COLLAPSE
  operation, mean and tail latency to acquire mmap_lock in read for
  threads in the fault path was improved by using a batch size of 1 (batch
  sizes of 1, 8, 16, 32 were tested)[11].
* Added: If a collapse operation fails because a page isn't found on the
  LRU, do a lru_add_drain_all() and retry.
* Added: selftests

[1] https://lore.kernel.org/linux-mm/20220604004004.954674-1-zokeefe@google.com/
[2] https://lore.kernel.org/linux-mm/YrJJoP5vrZflvwd0@google.com/
[3] https://lore.kernel.org/linux-mm/20220625092816.4856-1-linmiaohe@huawei.com/
[4] https://lore.kernel.org/linux-mm/20220504214437.2850685-1-zokeefe@google.com/
[5] https://lore.kernel.org/all/d098c392-273a-36a4-1a29-59731cdf5d3d@google.com/
[6] https://github.com/google/tcmalloc/tree/master/tcmalloc
[7] https://research.google/pubs/pub50370/
[8] https://lore.kernel.org/linux-mm/CAHS8izPnJd5EQjUi9cOk=03u3X1rk0PexTQZi+bEE4VMtFfksQ@mail.gmail.com/
[9] https://lore.kernel.org/linux-mm/20220624173656.2033256-23-jthoughton@google.com/
[10] https://lore.kernel.org/lkml/bcc8d9a0-81d-5f34-5e4-fcc28eb7ce@google.com/T/
[11] https://lore.kernel.org/linux-mm/CAAa6QmRc76n-dspGT7UK8DkaqZAOz-CkCsME1V7KGtQ6Yt2FqA@mail.gmail.com/

Zach O'Keefe (18):
  mm/khugepaged: remove redundant transhuge_vma_suitable() check
  mm: khugepaged: don't carry huge page to the next loop for !CONFIG_NUMA
  mm/khugepaged: add struct collapse_control
  mm/khugepaged: dedup and simplify hugepage alloc and charging
  mm/khugepaged: pipe enum scan_result codes back to callers
  mm/khugepaged: add flag to predicate khugepaged-only behavior
  mm/thp: add flag to enforce sysfs THP in hugepage_vma_check()
  mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds hugepage
  mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse
  mm/khugepaged: rename prefix of shared collapse functions
  mm/madvise: add huge_memory:mm_madvise_collapse tracepoint
  mm/madvise: add MADV_COLLAPSE to process_madvise()
  proc/smaps: add PMDMappable field to smaps
  selftests/vm: modularize collapse selftests
  selftests/vm: dedup hugepage allocation logic
  selftests/vm: add MADV_COLLAPSE collapse context to selftests
  selftests/vm: add selftest to verify recollapse of THPs
  selftests/vm: add selftest to verify multi THP collapse

 Documentation/filesystems/proc.rst           |  10 +-
 arch/alpha/include/uapi/asm/mman.h           |   2 +
 arch/mips/include/uapi/asm/mman.h            |   2 +
 arch/parisc/include/uapi/asm/mman.h          |   2 +
 arch/xtensa/include/uapi/asm/mman.h          |   2 +
 fs/proc/task_mmu.c                           |   4 +-
 include/linux/huge_mm.h                      |  23 +-
 include/trace/events/huge_memory.h           |  23 +
 include/uapi/asm-generic/mman-common.h       |   2 +
 mm/huge_memory.c                             |  32 +-
 mm/internal.h                                |   2 +-
 mm/khugepaged.c                              | 745 +++++++++++--------
 mm/ksm.c                                     |  10 +
 mm/madvise.c                                 |  11 +-
 mm/memory.c                                  |   4 +-
 mm/rmap.c                                    |  15 +-
 tools/include/uapi/asm-generic/mman-common.h |   2 +
 tools/testing/selftests/vm/khugepaged.c      | 563 ++++++++------
 18 files changed, 845 insertions(+), 609 deletions(-)