From patchwork Thu Apr 14 18:06:01 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813842
From: "Zach O'Keefe"
Date: Thu, 14 Apr 2022 11:06:01 -0700
Message-Id: <20220414180612.3844426-2-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
References: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 01/12] mm/khugepaged: record SCAN_PMD_MAPPED when scan_pmd() finds THP
When scanning an anon pmd to see if it's eligible for collapse, return
SCAN_PMD_MAPPED if the pmd already maps a THP. Note that SCAN_PMD_MAPPED
is different from SCAN_PAGE_COMPOUND used in the file-collapse path,
since the latter might identify pte-mapped compound pages. This is
required by MADV_COLLAPSE, which needs to know which
hugepage-aligned/sized regions are already pmd-mapped.

Signed-off-by: Zach O'Keefe
Reported-by: kernel test robot
---
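A rough sketch of the intended use (hypothetical caller, not part of
this patch): with the new return code, a collapse request can tell
"already collapsed" apart from "nothing here to collapse":

	pmd_t *pmd;

	switch (find_pmd_or_thp_or_none(mm, address, &pmd)) {
	case SCAN_SUCCEED:	/* pmd maps a pte table: collapse candidate */
		break;
	case SCAN_PMD_MAPPED:	/* region already backed by a THP: skip */
		break;
	case SCAN_PMD_NULL:	/* no pmd, or pmd not present */
		break;
	}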
 include/trace/events/huge_memory.h |  3 ++-
 mm/internal.h                      |  1 +
 mm/khugepaged.c                    | 30 ++++++++++++++++++++++++++----
 mm/rmap.c                          | 15 +++++++++++++--
 4 files changed, 42 insertions(+), 7 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index d651f3437367..9faa678e0a5b 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -33,7 +33,8 @@
 	EM( SCAN_ALLOC_HUGE_PAGE_FAIL,	"alloc_huge_page_failed")	\
 	EM( SCAN_CGROUP_CHARGE_FAIL,	"ccgroup_charge_failed")	\
 	EM( SCAN_TRUNCATED,		"truncated")			\
-	EMe(SCAN_PAGE_HAS_PRIVATE,	"page_has_private")		\
+	EM( SCAN_PAGE_HAS_PRIVATE,	"page_has_private")		\
+	EMe(SCAN_PMD_MAPPED,		"page_pmd_mapped")		\

 #undef EM
 #undef EMe
diff --git a/mm/internal.h b/mm/internal.h
index 48eb2d24fcd2..24fca92bd51a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -173,6 +173,7 @@ extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason
 /*
  * in mm/rmap.c:
  */
+pmd_t *mm_find_pmd_raw(struct mm_struct *mm, unsigned long address);
 extern pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);

 /*
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cb43c3aee8b2..5e5404aa6579 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -51,6 +51,7 @@ enum scan_result {
 	SCAN_CGROUP_CHARGE_FAIL,
 	SCAN_TRUNCATED,
 	SCAN_PAGE_HAS_PRIVATE,
+	SCAN_PMD_MAPPED,
 };

 #define CREATE_TRACE_POINTS
@@ -987,6 +988,29 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 	return 0;
 }

+static int find_pmd_or_thp_or_none(struct mm_struct *mm,
+				   unsigned long address,
+				   pmd_t **pmd)
+{
+	pmd_t pmde;
+
+	*pmd = mm_find_pmd_raw(mm, address);
+	if (!*pmd)
+		return SCAN_PMD_NULL;
+
+	pmde = pmd_read_atomic(*pmd);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	/* See comments in pmd_none_or_trans_huge_or_clear_bad() */
+	barrier();
+#endif
+	if (!pmd_present(pmde) || pmd_none(pmde))
+		return SCAN_PMD_NULL;
+	if (pmd_trans_huge(pmde))
+		return SCAN_PMD_MAPPED;
+	return SCAN_SUCCEED;
+}
+
 /*
  * Bring missing pages in from swap, to complete THP collapse.
  * Only done if khugepaged_scan_pmd believes it is worthwhile.
@@ -1238,11 +1262,9 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,

 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);

-	pmd = mm_find_pmd(mm, address);
-	if (!pmd) {
-		result = SCAN_PMD_NULL;
+	result = find_pmd_or_thp_or_none(mm, address, &pmd);
+	if (result != SCAN_SUCCEED)
 		goto out;
-	}

 	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
diff --git a/mm/rmap.c b/mm/rmap.c
index edfe61f95a7f..bf2a3a08d965 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -759,13 +759,12 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 	return vma_address(page, vma);
 }

-pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
+pmd_t *mm_find_pmd_raw(struct mm_struct *mm, unsigned long address)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
 	pmd_t *pmd = NULL;
-	pmd_t pmde;

 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
@@ -780,6 +779,18 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
 		goto out;

 	pmd = pmd_offset(pud, address);
+out:
+	return pmd;
+}
+
+pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
+{
+	pmd_t pmde;
+	pmd_t *pmd;
+
+	pmd = mm_find_pmd_raw(mm, address);
+	if (!pmd)
+		goto out;
 	/*
 	 * Some THP functions use the sequence pmdp_huge_clear_flush(), set_pmd_at()
 	 * without holding anon_vma lock for write.  So when looking for a

From patchwork Thu Apr 14 18:06:02 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813843
From: "Zach O'Keefe"
Date: Thu, 14 Apr 2022 11:06:02 -0700
Message-Id: <20220414180612.3844426-3-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
References: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 02/12] mm/khugepaged: add struct collapse_control

Modularize hugepage collapse by introducing struct collapse_control.
This structure describes the properties of the requested collapse, and
also serves as a local scratchpad used during the collapse itself.
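For illustration, a rough sketch of what this enables (mirroring the
khugepaged() change below): each collapse request can carry its own
scratch state on the stack instead of sharing the old global
khugepaged_node_load[] array:

	struct collapse_control cc = {
		.last_target_node = NUMA_NO_NODE,
	};

	/* cc.node_load[] is re-zeroed at the start of each scan */
	khugepaged_do_scan(&cc);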
Signed-off-by: Zach O'Keefe
---
 mm/khugepaged.c | 79 ++++++++++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 33 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5e5404aa6579..25f45ac7f6bd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -86,6 +86,14 @@ static struct kmem_cache *mm_slot_cache __read_mostly;

 #define MAX_PTE_MAPPED_THP 8

+struct collapse_control {
+	/* Num pages scanned per node */
+	int node_load[MAX_NUMNODES];
+
+	/* Last target selected in khugepaged_find_target_node() for this scan */
+	int last_target_node;
+};
+
 /**
  * struct mm_slot - hash lookup from mm to mm_slot
  * @hash: hash collision list
@@ -796,9 +804,7 @@ static void khugepaged_alloc_sleep(void)
 	remove_wait_queue(&khugepaged_wait, &wait);
 }

-static int khugepaged_node_load[MAX_NUMNODES];
-
-static bool khugepaged_scan_abort(int nid)
+static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
 {
 	int i;

@@ -810,11 +816,11 @@ static bool khugepaged_scan_abort(int nid)
 		return false;

 	/* If there is a count for this node already, it must be acceptable */
-	if (khugepaged_node_load[nid])
+	if (cc->node_load[nid])
 		return false;

 	for (i = 0; i < MAX_NUMNODES; i++) {
-		if (!khugepaged_node_load[i])
+		if (!cc->node_load[i])
 			continue;
 		if (node_distance(nid, i) > node_reclaim_distance)
 			return true;
@@ -829,28 +835,28 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
 }

 #ifdef CONFIG_NUMA
-static int khugepaged_find_target_node(void)
+static int khugepaged_find_target_node(struct collapse_control *cc)
 {
-	static int last_khugepaged_target_node = NUMA_NO_NODE;
 	int nid, target_node = 0, max_value = 0;

 	/* find first node with max normal pages hit */
 	for (nid = 0; nid < MAX_NUMNODES; nid++)
-		if (khugepaged_node_load[nid] > max_value) {
-			max_value = khugepaged_node_load[nid];
+		if (cc->node_load[nid] > max_value) {
+			max_value = cc->node_load[nid];
 			target_node = nid;
 		}

 	/* do some balance if several nodes have the same hit record */
-	if (target_node <= last_khugepaged_target_node)
-		for (nid = last_khugepaged_target_node + 1; nid < MAX_NUMNODES;
-		     nid++)
-			if (max_value == khugepaged_node_load[nid]) {
+	if (target_node <= cc->last_target_node)
+		for (nid = cc->last_target_node + 1; nid < MAX_NUMNODES;
+		     nid++) {
+			if (max_value == cc->node_load[nid]) {
 				target_node = nid;
 				break;
 			}
+		}

-	last_khugepaged_target_node = target_node;
+	cc->last_target_node = target_node;
 	return target_node;
 }

@@ -888,7 +894,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
 	return *hpage;
 }
 #else
-static int khugepaged_find_target_node(void)
+static int khugepaged_find_target_node(struct collapse_control *cc)
 {
 	return 0;
 }
@@ -1248,7 +1254,8 @@ static void collapse_huge_page(struct mm_struct *mm,
 static int khugepaged_scan_pmd(struct mm_struct *mm,
 			       struct vm_area_struct *vma,
 			       unsigned long address,
-			       struct page **hpage)
+			       struct page **hpage,
+			       struct collapse_control *cc)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
@@ -1266,7 +1273,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	if (result != SCAN_SUCCEED)
 		goto out;

-	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
+	memset(cc->node_load, 0, sizeof(cc->node_load));
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
 	for (_address = address, _pte = pte; _pte < pte+HPAGE_PMD_NR;
 	     _pte++, _address += PAGE_SIZE) {
@@ -1332,16 +1339,16 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,

 		/*
 		 * Record which node the original page is from and save this
-		 * information to khugepaged_node_load[].
+		 * information to cc->node_load[].
 		 * Khugepaged will allocate hugepage from the node has the max
 		 * hit record.
 		 */
 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node)) {
+		if (khugepaged_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
-		khugepaged_node_load[node]++;
+		cc->node_load[node]++;
 		if (!PageLRU(page)) {
 			result = SCAN_PAGE_LRU;
 			goto out_unmap;
 		}
@@ -1392,7 +1399,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
-		node = khugepaged_find_target_node();
+		node = khugepaged_find_target_node(cc);
 		/* collapse_huge_page will return with the mmap_lock released */
 		collapse_huge_page(mm, address, hpage, node, referenced, unmapped);
 	}
@@ -2044,7 +2051,8 @@ static void collapse_file(struct mm_struct *mm,
 }

 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage)
+		struct file *file, pgoff_t start, struct page **hpage,
+		struct collapse_control *cc)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2055,7 +2063,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,

 	present = 0;
 	swap = 0;
-	memset(khugepaged_node_load, 0, sizeof(khugepaged_node_load));
+	memset(cc->node_load, 0, sizeof(cc->node_load));
 	rcu_read_lock();
 	xas_for_each(&xas, page, start + HPAGE_PMD_NR - 1) {
 		if (xas_retry(&xas, page))
@@ -2080,11 +2088,11 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 		}

 		node = page_to_nid(page);
-		if (khugepaged_scan_abort(node)) {
+		if (khugepaged_scan_abort(node, cc)) {
 			result = SCAN_SCAN_ABORT;
 			break;
 		}
-		khugepaged_node_load[node]++;
+		cc->node_load[node]++;

 		if (!PageLRU(page)) {
 			result = SCAN_PAGE_LRU;
@@ -2117,7 +2125,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			node = khugepaged_find_target_node();
+			node = khugepaged_find_target_node(cc);
 			collapse_file(mm, file, start, hpage, node);
 		}
 	}
@@ -2126,7 +2134,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 }
 #else
 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage)
+		struct file *file, pgoff_t start, struct page **hpage,
+		struct collapse_control *cc)
 {
 	BUILD_BUG();
 }
@@ -2137,7 +2146,8 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
 #endif

 static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
-					    struct page **hpage)
+					    struct page **hpage,
+					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
 {
@@ -2213,12 +2223,12 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 				mmap_read_unlock(mm);
 				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, hpage);
+				khugepaged_scan_file(mm, file, pgoff, hpage, cc);
 				fput(file);
 			} else {
 				ret = khugepaged_scan_pmd(mm, vma,
 							  khugepaged_scan.address,
-							  hpage);
+							  hpage, cc);
 			}
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
@@ -2274,7 +2284,7 @@ static int khugepaged_wait_event(void)
 	       kthread_should_stop();
 }

-static void khugepaged_do_scan(void)
+static void khugepaged_do_scan(struct collapse_control *cc)
 {
 	struct page *hpage = NULL;
 	unsigned int progress = 0, pass_through_head = 0;
@@ -2298,7 +2308,7 @@ static void khugepaged_do_scan(void)
 		if (khugepaged_has_work() &&
 		    pass_through_head < 2)
 			progress += khugepaged_scan_mm_slot(pages - progress,
-							    &hpage);
+							    &hpage, cc);
 		else
 			progress = pages;
 		spin_unlock(&khugepaged_mm_lock);
@@ -2337,12 +2347,15 @@ static void khugepaged_wait_work(void)
 static int khugepaged(void *none)
 {
 	struct mm_slot *mm_slot;
+	struct collapse_control cc = {
+		.last_target_node = NUMA_NO_NODE,
+	};

 	set_freezable();
 	set_user_nice(current, MAX_NICE);

 	while (!kthread_should_stop()) {
-		khugepaged_do_scan();
+		khugepaged_do_scan(&cc);
 		khugepaged_wait_work();
 	}

From patchwork Thu Apr 14 18:06:03 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813844
From: "Zach O'Keefe"
Date: Thu, 14 Apr 2022 11:06:03 -0700
Message-Id: <20220414180612.3844426-4-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
References: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 03/12] mm/khugepaged: make hugepage allocation context-specific

Add hugepage allocation context to struct collapse_control, allowing
different collapse contexts to allocate hugepages differently. For
example, khugepaged decides to allocate differently in NUMA and UMA
configurations, and other collapse contexts shouldn't be coupled to
this decision.

Additionally, move the [pre]allocated hugepage pointer into struct
collapse_control.
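As a rough sketch of the flexibility this adds (hypothetical context,
not part of this patch; madvise_alloc_page() is an invented name): a
synchronous collapse path could plug in its own allocator through
->alloc_hpage while reusing the rest of the collapse machinery:

	static struct page *madvise_alloc_page(struct collapse_control *cc,
					       gfp_t gfp, int node)
	{
		/* allocate directly; no khugepaged-style preallocation */
		cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
		if (cc->hpage)
			prep_transhuge_page(cc->hpage);
		return cc->hpage;
	}

	struct collapse_control cc = {
		.last_target_node = NUMA_NO_NODE,
		.alloc_hpage = &madvise_alloc_page,
	};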
Signed-off-by: Zach O'Keefe
Reported-by: kernel test robot
---
 mm/khugepaged.c | 96 ++++++++++++++++++++++++-------------------------
 1 file changed, 48 insertions(+), 48 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 25f45ac7f6bd..21c8436fa73c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -92,6 +92,10 @@ struct collapse_control {

 	/* Last target selected in khugepaged_find_target_node() for this scan */
 	int last_target_node;
+
+	struct page *hpage;
+	struct page* (*alloc_hpage)(struct collapse_control *cc, gfp_t gfp,
+				    int node);
 };

 /**
@@ -877,21 +881,21 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 	return true;
 }

-static struct page *
-khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
+static struct page *khugepaged_alloc_page(struct collapse_control *cc,
+					  gfp_t gfp, int node)
 {
-	VM_BUG_ON_PAGE(*hpage, *hpage);
+	VM_BUG_ON_PAGE(cc->hpage, cc->hpage);

-	*hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
-	if (unlikely(!*hpage)) {
+	cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER);
+	if (unlikely(!cc->hpage)) {
 		count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
-		*hpage = ERR_PTR(-ENOMEM);
+		cc->hpage = ERR_PTR(-ENOMEM);
 		return NULL;
 	}

-	prep_transhuge_page(*hpage);
+	prep_transhuge_page(cc->hpage);
 	count_vm_event(THP_COLLAPSE_ALLOC);
-	return *hpage;
+	return cc->hpage;
 }
 #else
 static int khugepaged_find_target_node(struct collapse_control *cc)
@@ -953,12 +957,12 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
 	return true;
 }

-static struct page *
-khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node)
+static struct page *khugepaged_alloc_page(struct collapse_control *cc,
+					  gfp_t gfp, int node)
 {
-	VM_BUG_ON(!*hpage);
+	VM_BUG_ON(!cc->hpage);

-	return *hpage;
+	return cc->hpage;
 }
 #endif

@@ -1080,10 +1084,9 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 	return true;
 }

-static void collapse_huge_page(struct mm_struct *mm,
-			       unsigned long address,
-			       struct page **hpage,
-			       int node, int referenced, int unmapped)
+static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
+			       struct collapse_control *cc, int referenced,
+			       int unmapped)
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
@@ -1096,6 +1099,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	struct mmu_notifier_range range;
 	gfp_t gfp;
 	const struct cpumask *cpumask;
+	int node;

 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);

@@ -1110,13 +1114,14 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 */
 	mmap_read_unlock(mm);

+	node = khugepaged_find_target_node(cc);
 	/* sched to specified node before huage page memory copy */
 	if (task_node(current) != node) {
 		cpumask = cpumask_of_node(node);
 		if (!cpumask_empty(cpumask))
 			set_cpus_allowed_ptr(current, cpumask);
 	}
-	new_page = khugepaged_alloc_page(hpage, gfp, node);
+	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
 		result = SCAN_ALLOC_HUGE_PAGE_FAIL;
 		goto out_nolock;
@@ -1238,15 +1243,15 @@ static void collapse_huge_page(struct mm_struct *mm,
 	update_mmu_cache_pmd(vma, address, pmd);
 	spin_unlock(pmd_ptl);

-	*hpage = NULL;
+	cc->hpage = NULL;

 	khugepaged_pages_collapsed++;
 	result = SCAN_SUCCEED;
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
-	if (!IS_ERR_OR_NULL(*hpage))
-		mem_cgroup_uncharge(page_folio(*hpage));
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		mem_cgroup_uncharge(page_folio(cc->hpage));
 	trace_mm_collapse_huge_page(mm, isolated, result);
 	return;
 }
@@ -1254,7 +1259,6 @@ static void collapse_huge_page(struct mm_struct *mm,
 static int khugepaged_scan_pmd(struct mm_struct *mm,
 			       struct vm_area_struct *vma,
 			       unsigned long address,
-			       struct page **hpage,
 			       struct collapse_control *cc)
 {
 	pmd_t *pmd;
@@ -1399,10 +1403,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
-		node = khugepaged_find_target_node(cc);
 		/* collapse_huge_page will return with the mmap_lock released */
-		collapse_huge_page(mm, address, hpage, node, referenced, unmapped);
+		collapse_huge_page(mm, address, cc, referenced, unmapped);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
@@ -1667,8 +1669,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  * @mm: process address space where collapse happens
  * @file: file that collapse on
  * @start: collapse start address
- * @hpage: new allocated huge page for collapse
- * @node: appointed node the new huge page allocate from
+ * @cc: collapse context and scratchpad
  *
  * Basic scheme is simple, details are more complex:
  * - allocate and lock a new huge page;
@@ -1686,8 +1687,8 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  *    + unlock and free huge page;
  */
 static void collapse_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start,
-		struct page **hpage, int node)
+			  struct file *file, pgoff_t start,
+			  struct collapse_control *cc)
 {
 	struct address_space *mapping = file->f_mapping;
 	gfp_t gfp;
@@ -1697,15 +1698,16 @@ static void collapse_file(struct mm_struct *mm,
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
 	int nr_none = 0, result = SCAN_SUCCEED;
 	bool is_shmem = shmem_file(file);
-	int nr;
+	int nr, node;

 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));

 	/* Only allocate from the target node */
 	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
+	node = khugepaged_find_target_node(cc);

-	new_page = khugepaged_alloc_page(hpage, gfp, node);
+	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
 		result = SCAN_ALLOC_HUGE_PAGE_FAIL;
 		goto out;
@@ -1998,7 +2000,7 @@ static void collapse_file(struct mm_struct *mm,
 	 * Remove pte page tables, so we can re-fault the page as huge.
 	 */
 	retract_page_tables(mapping, start);
-	*hpage = NULL;
+	cc->hpage = NULL;

 	khugepaged_pages_collapsed++;
 } else {
@@ -2045,14 +2047,14 @@ static void collapse_file(struct mm_struct *mm,
 	unlock_page(new_page);
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
-	if (!IS_ERR_OR_NULL(*hpage))
-		mem_cgroup_uncharge(page_folio(*hpage));
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		mem_cgroup_uncharge(page_folio(cc->hpage));
 	/* TODO: tracepoints */
 }

 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage,
-		struct collapse_control *cc)
+				 struct file *file, pgoff_t start,
+				 struct collapse_control *cc)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
@@ -2125,8 +2127,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			node = khugepaged_find_target_node(cc);
-			collapse_file(mm, file, start, hpage, node);
+			collapse_file(mm, file, start, cc);
 		}
 	}

@@ -2134,8 +2135,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 }
 #else
 static void khugepaged_scan_file(struct mm_struct *mm,
-		struct file *file, pgoff_t start, struct page **hpage,
-		struct collapse_control *cc)
+				 struct file *file, pgoff_t start,
+				 struct collapse_control *cc)
 {
 	BUILD_BUG();
 }
@@ -2146,7 +2147,6 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot)
 #endif

 static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
-					    struct page **hpage,
 					    struct collapse_control *cc)
 	__releases(&khugepaged_mm_lock)
 	__acquires(&khugepaged_mm_lock)
@@ -2223,12 +2223,11 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 				mmap_read_unlock(mm);
 				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, hpage, cc);
+				khugepaged_scan_file(mm, file, pgoff, cc);
 				fput(file);
 			} else {
 				ret = khugepaged_scan_pmd(mm, vma,
-							  khugepaged_scan.address,
-							  hpage, cc);
+							  khugepaged_scan.address, cc);
 			}
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
@@ -2286,15 +2285,15 @@ static int khugepaged_wait_event(void)

 static void khugepaged_do_scan(struct collapse_control *cc)
 {
-	struct page *hpage = NULL;
 	unsigned int progress = 0, pass_through_head = 0;
 	unsigned int pages = READ_ONCE(khugepaged_pages_to_scan);
 	bool wait = true;

+	cc->hpage = NULL;
 	lru_add_drain_all();

 	while (progress < pages) {
-		if (!khugepaged_prealloc_page(&hpage, &wait))
+		if (!khugepaged_prealloc_page(&cc->hpage, &wait))
 			break;

 		cond_resched();
@@ -2308,14 +2307,14 @@ static void khugepaged_do_scan(struct collapse_control *cc)

 		if (khugepaged_has_work() &&
 		    pass_through_head < 2)
 			progress += khugepaged_scan_mm_slot(pages - progress,
-							    &hpage, cc);
+							    cc);
 		else
 			progress = pages;
 		spin_unlock(&khugepaged_mm_lock);
 	}

-	if (!IS_ERR_OR_NULL(hpage))
-		put_page(hpage);
+	if (!IS_ERR_OR_NULL(cc->hpage))
+		put_page(cc->hpage);
 }

 static bool khugepaged_should_wakeup(void)
@@ -2349,6 +2348,7 @@ static int khugepaged(void *none)
 	struct mm_slot *mm_slot;
 	struct collapse_control cc = {
 		.last_target_node = NUMA_NO_NODE,
+		.alloc_hpage = &khugepaged_alloc_page,
 	};

 	set_freezable();

From patchwork Thu Apr 14 18:06:04 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813845
From: "Zach O'Keefe"
Date: Thu, 14 Apr 2022 11:06:04 -0700
Message-Id: <20220414180612.3844426-5-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
References: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 04/12] mm/khugepaged: add struct collapse_result
Add struct collapse_result, which aggregates data from a single
khugepaged_scan_pmd() or khugepaged_scan_file() request. Change
khugepaged to take action based on this returned data instead of deep
within the collapsing functions themselves.

Signed-off-by: Zach O'Keefe
---
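The consumer side then becomes a simple pattern (this mirrors the
khugepaged_scan_mm_slot() hunk below):

	struct collapse_result cr = {0};

	khugepaged_scan_pmd(mm, vma, khugepaged_scan.address, cc, &cr);
	if (cr.result == SCAN_SUCCEED)
		++khugepaged_pages_collapsed;
	if (cr.dropped_mmap_lock)
		goto breakouterloop_mmap_lock;	/* mmap_lock was released */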
 mm/khugepaged.c | 187 ++++++++++++++++++++++++++----------------------
 1 file changed, 101 insertions(+), 86 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 21c8436fa73c..e330a95a0479 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -98,6 +98,14 @@ struct collapse_control {
 			 int node);
 };

+/* Gather information from one khugepaged_scan_[pmd|file]() request */
+struct collapse_result {
+	enum scan_result result;
+
+	/* Was mmap_lock dropped during request? */
+	bool dropped_mmap_lock;
+};
+
 /**
  * struct mm_slot - hash lookup from mm to mm_slot
  * @hash: hash collision list
@@ -742,13 +750,13 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		result = SCAN_SUCCEED;
 		trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 						    referenced, writable, result);
-		return 1;
+		return SCAN_SUCCEED;
 	}
 out:
 	release_pte_pages(pte, _pte, compound_pagelist);
 	trace_mm_collapse_huge_page_isolate(page, none_or_zero,
 					    referenced, writable, result);
-	return 0;
+	return result;
 }

 static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
@@ -1086,7 +1094,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,

 static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 			       struct collapse_control *cc, int referenced,
-			       int unmapped)
+			       int unmapped, struct collapse_result *cr)
 {
 	LIST_HEAD(compound_pagelist);
 	pmd_t *pmd, _pmd;
@@ -1094,7 +1102,6 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	pgtable_t pgtable;
 	struct page *new_page;
 	spinlock_t *pmd_ptl, *pte_ptl;
-	int isolated = 0, result = 0;
 	struct vm_area_struct *vma;
 	struct mmu_notifier_range range;
 	gfp_t gfp;
@@ -1102,6 +1109,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	int node;

 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	cr->result = SCAN_FAIL;

 	/* Only allocate from the target node */
 	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
@@ -1113,6 +1121,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * that. We will recheck the vma after taking it again in write mode.
 	 */
 	mmap_read_unlock(mm);
+	cr->dropped_mmap_lock = true;

 	node = khugepaged_find_target_node(cc);
 	/* sched to specified node before huage page memory copy */
@@ -1123,26 +1132,26 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	}
 	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
-		result = SCAN_ALLOC_HUGE_PAGE_FAIL;
+		cr->result = SCAN_ALLOC_HUGE_PAGE_FAIL;
 		goto out_nolock;
 	}

 	if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
-		result = SCAN_CGROUP_CHARGE_FAIL;
+		cr->result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out_nolock;
 	}
 	count_memcg_page_event(new_page, THP_COLLAPSE_ALLOC);

 	mmap_read_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, &vma);
-	if (result) {
+	cr->result = hugepage_vma_revalidate(mm, address, &vma);
+	if (cr->result) {
 		mmap_read_unlock(mm);
 		goto out_nolock;
 	}

 	pmd = mm_find_pmd(mm, address);
 	if (!pmd) {
-		result = SCAN_PMD_NULL;
+		cr->result = SCAN_PMD_NULL;
 		mmap_read_unlock(mm);
 		goto out_nolock;
 	}
@@ -1165,8 +1174,8 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	 * handled by the anon_vma lock + PG_lock.
 	 */
 	mmap_write_lock(mm);
-	result = hugepage_vma_revalidate(mm, address, &vma);
-	if (result)
+	cr->result = hugepage_vma_revalidate(mm, address, &vma);
+	if (cr->result)
 		goto out_up_write;
 	/* check if the pmd is still valid */
 	if (mm_find_pmd(mm, address) != pmd)
@@ -1193,11 +1202,11 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	mmu_notifier_invalidate_range_end(&range);

 	spin_lock(pte_ptl);
-	isolated = __collapse_huge_page_isolate(vma, address, pte,
-						&compound_pagelist);
+	cr->result = __collapse_huge_page_isolate(vma, address, pte,
+						  &compound_pagelist);
 	spin_unlock(pte_ptl);

-	if (unlikely(!isolated)) {
+	if (unlikely(cr->result != SCAN_SUCCEED)) {
 		pte_unmap(pte);
 		spin_lock(pmd_ptl);
 		BUG_ON(!pmd_none(*pmd));
@@ -1209,7 +1218,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,
 		pmd_populate(mm, pmd, pmd_pgtable(_pmd));
 		spin_unlock(pmd_ptl);
 		anon_vma_unlock_write(vma->anon_vma);
-		result = SCAN_FAIL;
+		cr->result = SCAN_FAIL;
 		goto out_up_write;
 	}

@@ -1245,25 +1254,25 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address,

 	cc->hpage = NULL;

-	khugepaged_pages_collapsed++;
-	result = SCAN_SUCCEED;
+	cr->result = SCAN_SUCCEED;
 out_up_write:
 	mmap_write_unlock(mm);
 out_nolock:
 	if (!IS_ERR_OR_NULL(cc->hpage))
 		mem_cgroup_uncharge(page_folio(cc->hpage));
-	trace_mm_collapse_huge_page(mm, isolated, result);
+	trace_mm_collapse_huge_page(mm, cr->result == SCAN_SUCCEED, cr->result);
 	return;
 }

-static int khugepaged_scan_pmd(struct mm_struct *mm,
-			       struct vm_area_struct *vma,
-			       unsigned long address,
-			       struct collapse_control *cc)
+static void khugepaged_scan_pmd(struct mm_struct *mm,
+				struct vm_area_struct *vma,
+				unsigned long address,
+				struct collapse_control *cc,
+				struct collapse_result *cr)
 {
 	pmd_t *pmd;
 	pte_t *pte, *_pte;
-	int ret = 0, result = 0, referenced = 0;
+	int referenced = 0;
 	int none_or_zero = 0, shared = 0;
 	struct page *page = NULL;
 	unsigned long _address;
@@ -1272,9 +1281,10 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	bool writable = false;

 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	cr->result = SCAN_FAIL;

-	result = find_pmd_or_thp_or_none(mm, address, &pmd);
-	if (result != SCAN_SUCCEED)
+	cr->result = find_pmd_or_thp_or_none(mm, address, &pmd);
+	if (cr->result != SCAN_SUCCEED)
 		goto out;

 	memset(cc->node_load, 0, sizeof(cc->node_load));
@@ -1290,12 +1300,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			 * comment below for pte_uffd_wp().
 			 */
 			if (pte_swp_uffd_wp(pteval)) {
-				result = SCAN_PTE_UFFD_WP;
+				cr->result = SCAN_PTE_UFFD_WP;
 				goto out_unmap;
 			}
 			continue;
 		} else {
-			result = SCAN_EXCEED_SWAP_PTE;
+			cr->result = SCAN_EXCEED_SWAP_PTE;
 			count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
 			goto out_unmap;
 		}
@@ -1305,7 +1315,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			    ++none_or_zero <= khugepaged_max_ptes_none) {
 				continue;
 			} else {
-				result = SCAN_EXCEED_NONE_PTE;
+				cr->result = SCAN_EXCEED_NONE_PTE;
 				count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 				goto out_unmap;
 			}
@@ -1320,7 +1330,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			 * userfault messages that falls outside of
 			 * the registered range.  So, just be simple.
 			 */
-			result = SCAN_PTE_UFFD_WP;
+			cr->result = SCAN_PTE_UFFD_WP;
 			goto out_unmap;
 		}
 		if (pte_write(pteval))
@@ -1328,13 +1338,13 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,

 		page = vm_normal_page(vma, _address, pteval);
 		if (unlikely(!page)) {
-			result = SCAN_PAGE_NULL;
+			cr->result = SCAN_PAGE_NULL;
 			goto out_unmap;
 		}

 		if (page_mapcount(page) > 1 &&
 				++shared > khugepaged_max_ptes_shared) {
-			result = SCAN_EXCEED_SHARED_PTE;
+			cr->result = SCAN_EXCEED_SHARED_PTE;
 			count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
 			goto out_unmap;
 		}
@@ -1349,20 +1359,20 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		 */
 		node = page_to_nid(page);
 		if (khugepaged_scan_abort(node, cc)) {
-			result = SCAN_SCAN_ABORT;
+			cr->result = SCAN_SCAN_ABORT;
 			goto out_unmap;
 		}
 		cc->node_load[node]++;
 		if (!PageLRU(page)) {
-			result = SCAN_PAGE_LRU;
+			cr->result = SCAN_PAGE_LRU;
 			goto out_unmap;
 		}
 		if (PageLocked(page)) {
-			result = SCAN_PAGE_LOCK;
+			cr->result = SCAN_PAGE_LOCK;
 			goto out_unmap;
 		}
 		if (!PageAnon(page)) {
-			result = SCAN_PAGE_ANON;
+			cr->result = SCAN_PAGE_ANON;
 			goto out_unmap;
 		}

@@ -1384,7 +1394,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		 * will be done again later the risk seems low.
 		 */
 		if (!is_refcount_suitable(page)) {
-			result = SCAN_PAGE_COUNT;
+			cr->result = SCAN_PAGE_COUNT;
 			goto out_unmap;
 		}
 		if (pte_young(pteval) ||
@@ -1393,23 +1403,20 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 			referenced++;
 	}
 	if (!writable) {
-		result = SCAN_PAGE_RO;
+		cr->result = SCAN_PAGE_RO;
 	} else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
-		result = SCAN_LACK_REFERENCED_PAGE;
+		cr->result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
-		result = SCAN_SUCCEED;
-		ret = 1;
+		cr->result = SCAN_SUCCEED;
 	}
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
-	if (ret) {
+	if (cr->result == SCAN_SUCCEED)
 		/* collapse_huge_page will return with the mmap_lock released */
-		collapse_huge_page(mm, address, cc, referenced, unmapped);
-	}
+		collapse_huge_page(mm, address, cc, referenced, unmapped, cr);
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
-				     none_or_zero, result, unmapped);
-	return ret;
+				     none_or_zero, cr->result, unmapped);
 }

 static void collect_mm_slot(struct mm_slot *mm_slot)
@@ -1670,6 +1677,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  * @file: file that collapse on
  * @start: collapse start address
  * @cc: collapse context and scratchpad
+ * @cr: aggregate result information of collapse
  *
  * Basic scheme is simple, details are more complex:
  * - allocate and lock a new huge page;
@@ -1688,7 +1696,9 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  */
 static void collapse_file(struct mm_struct *mm,
 			  struct file *file, pgoff_t start,
-			  struct collapse_control *cc)
+			  struct collapse_control *cc,
+			  struct collapse_result *cr)
+
 {
 	struct address_space *mapping = file->f_mapping;
 	gfp_t gfp;
@@ -1696,25 +1706,27 @@ static void collapse_file(struct mm_struct *mm,
 	pgoff_t index, end = start + HPAGE_PMD_NR;
 	LIST_HEAD(pagelist);
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
-	int nr_none = 0, result = SCAN_SUCCEED;
+	int nr_none = 0;
 	bool is_shmem = shmem_file(file);
 	int nr, node;

 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));

+	cr->result = SCAN_SUCCEED;
+
 	/* Only allocate from the target node */
 	gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE;
 	node = khugepaged_find_target_node(cc);

 	new_page = cc->alloc_hpage(cc, gfp, node);
 	if (!new_page) {
-		result = SCAN_ALLOC_HUGE_PAGE_FAIL;
+		cr->result = SCAN_ALLOC_HUGE_PAGE_FAIL;
 		goto out;
 	}

 	if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
-		result = SCAN_CGROUP_CHARGE_FAIL;
+		cr->result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out;
 	}
 	count_memcg_page_event(new_page, THP_COLLAPSE_ALLOC);
@@ -1730,7 +1742,7 @@ static void collapse_file(struct mm_struct *mm,
 			break;
 		xas_unlock_irq(&xas);
 		if (!xas_nomem(&xas, GFP_KERNEL)) {
-			result = SCAN_FAIL;
+			cr->result = SCAN_FAIL;
 			goto out;
 		}
 	} while (1);
@@ -1761,13 +1773,13 @@ static void collapse_file(struct mm_struct *mm,
 			 */
 			if (index == start) {
 				if (!xas_next_entry(&xas, end - 1)) {
-					result = SCAN_TRUNCATED;
+					cr->result = SCAN_TRUNCATED;
 					goto xa_locked;
 				}
 				xas_set(&xas, index);
 			}
 			if (!shmem_charge(mapping->host, 1)) {
-				result = SCAN_FAIL;
+				cr->result = SCAN_FAIL;
 				goto xa_locked;
 			}
 			xas_store(&xas, new_page);
@@ -1780,14 +1792,14 @@ static void collapse_file(struct mm_struct *mm,
 				/* swap in or instantiate fallocated page */
 				if (shmem_getpage(mapping->host, index, &page,
 						  SGP_NOALLOC)) {
-					result = SCAN_FAIL;
+					cr->result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
 			} else if (trylock_page(page)) {
 				get_page(page);
 				xas_unlock_irq(&xas);
 			} else {
-				result = SCAN_PAGE_LOCK;
+				cr->result = SCAN_PAGE_LOCK;
 				goto xa_locked;
 			}
 		} else {	/* !is_shmem */
@@ -1800,7 +1812,7 @@ static void collapse_file(struct mm_struct *mm,
 				lru_add_drain();
 				page = find_lock_page(mapping, index);
 				if (unlikely(page == NULL)) {
-					result = SCAN_FAIL;
+					cr->result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
 			} else if (PageDirty(page)) {
@@ -1819,17 +1831,17 @@ static void collapse_file(struct mm_struct *mm,
 				 */
 				xas_unlock_irq(&xas);
 				filemap_flush(mapping);
-				result = SCAN_FAIL;
+				cr->result = SCAN_FAIL;
 				goto xa_unlocked;
 			} else if (PageWriteback(page)) {
 				xas_unlock_irq(&xas);
-				result = SCAN_FAIL;
+				cr->result = SCAN_FAIL;
 				goto xa_unlocked;
 			} else if (trylock_page(page)) {
 				get_page(page);
 				xas_unlock_irq(&xas);
 			} else {
-				result = SCAN_PAGE_LOCK;
+				cr->result = SCAN_PAGE_LOCK;
 				goto xa_locked;
 			}
 		}
@@ -1842,7 +1854,7 @@ static void collapse_file(struct mm_struct *mm,

 		/* make sure the page is up to date */
 		if (unlikely(!PageUptodate(page))) {
-			result = SCAN_FAIL;
+			cr->result = SCAN_FAIL;
 			goto out_unlock;
 		}

@@ -1851,12 +1863,12 @@ static void collapse_file(struct mm_struct *mm,
 		 * we locked the first page, then a THP might be there already.
 		 */
 		if (PageTransCompound(page)) {
-			result = SCAN_PAGE_COMPOUND;
+			cr->result = SCAN_PAGE_COMPOUND;
 			goto out_unlock;
 		}

 		if (page_mapping(page) != mapping) {
-			result = SCAN_TRUNCATED;
+			cr->result = SCAN_TRUNCATED;
 			goto out_unlock;
 		}

@@ -1867,18 +1879,18 @@ static void collapse_file(struct mm_struct *mm,
 			 * page is dirty because it hasn't been flushed
 			 * since first write.
 			 */
-			result = SCAN_FAIL;
+			cr->result = SCAN_FAIL;
 			goto out_unlock;
 		}

 		if (isolate_lru_page(page)) {
-			result = SCAN_DEL_PAGE_LRU;
+			cr->result = SCAN_DEL_PAGE_LRU;
 			goto out_unlock;
 		}

 		if (page_has_private(page) &&
 		    !try_to_release_page(page, GFP_KERNEL)) {
-			result = SCAN_PAGE_HAS_PRIVATE;
+			cr->result = SCAN_PAGE_HAS_PRIVATE;
 			putback_lru_page(page);
 			goto out_unlock;
 		}
@@ -1899,7 +1911,7 @@ static void collapse_file(struct mm_struct *mm,
 		 * - one from isolate_lru_page;
 		 */
 		if (!page_ref_freeze(page, 3)) {
-			result = SCAN_PAGE_COUNT;
+			cr->result = SCAN_PAGE_COUNT;
 			xas_unlock_irq(&xas);
 			putback_lru_page(page);
 			goto out_unlock;
@@ -1934,7 +1946,7 @@ static void collapse_file(struct mm_struct *mm,
 		 */
 		smp_mb();
 		if (inode_is_open_for_write(mapping->host)) {
-			result = SCAN_FAIL;
+			cr->result = SCAN_FAIL;
 			__mod_lruvec_page_state(new_page, NR_FILE_THPS, -nr);
 			filemap_nr_thps_dec(mapping);
 			goto xa_locked;
@@ -1961,7 +1973,7 @@ static void collapse_file(struct mm_struct *mm,
 	 */
 	try_to_unmap_flush();

-	if (result == SCAN_SUCCEED) {
+	if (cr->result == SCAN_SUCCEED) {
 		struct page *page, *tmp;

 		/*
@@ -2001,8 +2013,6 @@ static void collapse_file(struct mm_struct *mm,
 		 */
 		retract_page_tables(mapping, start);
 		cc->hpage = NULL;
-
-		khugepaged_pages_collapsed++;
 	} else {
 		struct page *page;

@@ -2054,15 +2064,16 @@ static void collapse_file(struct mm_struct *mm,

 static void khugepaged_scan_file(struct mm_struct *mm,
 				 struct file *file, pgoff_t start,
-				 struct collapse_control *cc)
+				 struct collapse_control *cc,
+				 struct collapse_result *cr)
 {
 	struct page *page = NULL;
 	struct address_space *mapping = file->f_mapping;
 	XA_STATE(xas, &mapping->i_pages, start);
 	int present, swap;
 	int node = NUMA_NO_NODE;
-	int result = SCAN_SUCCEED;

+	cr->result = SCAN_SUCCEED;
 	present = 0;
 	swap = 0;
 	memset(cc->node_load, 0, sizeof(cc->node_load));
@@ -2073,7 +2084,7 @@ static void khugepaged_scan_file(struct mm_struct *mm,

 		if (xa_is_value(page)) {
 			if (++swap > khugepaged_max_ptes_swap) {
-				result = SCAN_EXCEED_SWAP_PTE;
+				cr->result = SCAN_EXCEED_SWAP_PTE;
 				count_vm_event(THP_SCAN_EXCEED_SWAP_PTE);
 				break;
 			}
@@ -2085,25 +2096,25 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 		 * into a PMD sized page
 		 */
 		if (PageTransCompound(page)) {
-			result = SCAN_PAGE_COMPOUND;
+			cr->result = SCAN_PAGE_COMPOUND;
 			break;
 		}

 		node = page_to_nid(page);
 		if (khugepaged_scan_abort(node, cc)) {
-			result = SCAN_SCAN_ABORT;
+			cr->result = SCAN_SCAN_ABORT;
 			break;
 		}
 		cc->node_load[node]++;

 		if (!PageLRU(page)) {
-			result = SCAN_PAGE_LRU;
+			cr->result = SCAN_PAGE_LRU;
 			break;
 		}

 		if (page_count(page) !=
 		    1 + page_mapcount(page) + page_has_private(page)) {
-			result = SCAN_PAGE_COUNT;
+			cr->result = SCAN_PAGE_COUNT;
 			break;
 		}
@@ -2122,12 +2133,12 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 	}
 	rcu_read_unlock();

-	if (result == SCAN_SUCCEED) {
+	if (cr->result == SCAN_SUCCEED) {
 		if (present < HPAGE_PMD_NR - khugepaged_max_ptes_none) {
-			result = SCAN_EXCEED_NONE_PTE;
+			cr->result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			collapse_file(mm, file, start, cc);
+			collapse_file(mm, file, start, cc, cr);
 		}
 	}

@@ -2136,7 +2147,8 @@ static void khugepaged_scan_file(struct mm_struct *mm,
 }
 #else
 static void khugepaged_scan_file(struct mm_struct *mm,
 				 struct file *file, pgoff_t start,
-				 struct collapse_control *cc)
+				 struct collapse_control *cc,
+				 struct collapse_result *cr)
 {
 	BUILD_BUG();
 }
@@ -2208,7 +2220,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 			goto skip;

 		while (khugepaged_scan.address < hend) {
-			int ret;
+			struct collapse_result cr = {0};
 			cond_resched();
 			if (unlikely(khugepaged_test_exit(mm)))
 				goto breakouterloop;
@@ -2222,17 +2234,20 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 						khugepaged_scan.address);

 				mmap_read_unlock(mm);
-				ret = 1;
-				khugepaged_scan_file(mm, file, pgoff, cc);
+				cr.dropped_mmap_lock = true;
+				khugepaged_scan_file(mm, file, pgoff, cc, &cr);
 				fput(file);
 			} else {
-				ret = khugepaged_scan_pmd(mm, vma,
-							  khugepaged_scan.address, cc);
+				khugepaged_scan_pmd(mm, vma,
						    khugepaged_scan.address,
+						    cc, &cr);
 			}
+			if (cr.result == SCAN_SUCCEED)
+				++khugepaged_pages_collapsed;
 			/* move to next address */
 			khugepaged_scan.address += HPAGE_PMD_SIZE;
 			progress += HPAGE_PMD_NR;
-			if (ret)
+			if (cr.dropped_mmap_lock)
 				/* we released mmap_lock so break loop */
 				goto breakouterloop_mmap_lock;
 			if (progress >= pages)

From patchwork Thu Apr 14 18:06:05 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813846
From patchwork Thu Apr 14 18:06:05 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813846
Date: Thu, 14 Apr 2022 11:06:05 -0700
Message-Id: <20220414180612.3844426-6-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 05/12] mm/madvise: introduce MADV_COLLAPSE sync hugepage collapse
From: "Zach O'Keefe"
Shutemov" , Matt Turner , Max Filippov , Miaohe Lin , Minchan Kim , Patrick Xia , Pavel Begunkov , Peter Xu , Thomas Bogendoerfer , "Zach O'Keefe" , kernel test robot X-Stat-Signature: 6fgmd9xn73ux8pm888f3ww4ibtuccy3g X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: A215A180003 Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=lxNYbO4P; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf24.hostedemail.com: domain of 3smJYYgcKCPEshdXXYXZhhZeX.Vhfebgnq-ffdoTVd.hkZ@flex--zokeefe.bounces.google.com designates 209.85.214.202 as permitted sender) smtp.mailfrom=3smJYYgcKCPEshdXXYXZhhZeX.Vhfebgnq-ffdoTVd.hkZ@flex--zokeefe.bounces.google.com X-Rspam-User: X-HE-Tag: 1649959603-964368 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This idea was introduced by David Rientjes[1], and the semantics and implementation were introduced and discussed in a previous PATCH RFC[2]. Introduce a new madvise mode, MADV_COLLAPSE, that allows users to request a synchronous collapse of memory at their own expense. The benefits of this approach are: * CPU is charged to the process that wants to spend the cycles for the THP * avoid unpredictable timing of khugepaged collapse Immediate users of this new functionality include: * immediately back executable text by hugepages. Current support provided by CONFIG_READ_ONLY_THP_FOR_FS may take too long on a large system. * malloc implementations that manage memory in hugepage-sized chunks, but sometimes subrelease memory back to the system in native-sized chunks via MADV_DONTNEED; zapping the pmd. Later, when the memory is hot, the implementation could madvise(MADV_COLLAPSE) to re-back the memory by THP to regain TLB performance. Allocation semantics are the same as khugepaged, and depend on (1) the active sysfs settings /sys/kernel/mm/transparent_hugepage/enabled and /sys/kernel/mm/transparent_hugepage/khugepaged/defrag, and (2) the VMA flags of the memory range being collapsed. Only privately-mapped anon memory is supported for now. 
[1] https://lore.kernel.org/linux-mm/d098c392-273a-36a4-1a29-59731cdf5d3d@google.com/ [2] https://lore.kernel.org/linux-mm/20220308213417.1407042-1-zokeefe@google.com/ Suggested-by: David Rientjes Signed-off-by: Zach O'Keefe Reported-by: kernel test robot --- arch/alpha/include/uapi/asm/mman.h | 2 + arch/mips/include/uapi/asm/mman.h | 2 + arch/parisc/include/uapi/asm/mman.h | 2 + arch/xtensa/include/uapi/asm/mman.h | 2 + include/linux/huge_mm.h | 12 ++ include/uapi/asm-generic/mman-common.h | 2 + mm/khugepaged.c | 149 +++++++++++++++++++++++-- mm/madvise.c | 5 + 8 files changed, 164 insertions(+), 12 deletions(-) diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h index 4aa996423b0d..763929e814e9 100644 --- a/arch/alpha/include/uapi/asm/mman.h +++ b/arch/alpha/include/uapi/asm/mman.h @@ -76,6 +76,8 @@ #define MADV_DONTNEED_LOCKED 24 /* like DONTNEED, but drop locked pages too */ +#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */ + /* compatibility flags */ #define MAP_FILE 0 diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h index 1be428663c10..c6e1fc77c996 100644 --- a/arch/mips/include/uapi/asm/mman.h +++ b/arch/mips/include/uapi/asm/mman.h @@ -103,6 +103,8 @@ #define MADV_DONTNEED_LOCKED 24 /* like DONTNEED, but drop locked pages too */ +#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */ + /* compatibility flags */ #define MAP_FILE 0 diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h index a7ea3204a5fa..22133a6a506e 100644 --- a/arch/parisc/include/uapi/asm/mman.h +++ b/arch/parisc/include/uapi/asm/mman.h @@ -70,6 +70,8 @@ #define MADV_WIPEONFORK 71 /* Zero memory on fork, child only */ #define MADV_KEEPONFORK 72 /* Undo MADV_WIPEONFORK */ +#define MADV_COLLAPSE 73 /* Synchronous hugepage collapse */ + #define MADV_HWPOISON 100 /* poison a page for testing */ #define MADV_SOFT_OFFLINE 101 /* soft offline page for testing */ diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h index 7966a58af472..1ff0c858544f 100644 --- a/arch/xtensa/include/uapi/asm/mman.h +++ b/arch/xtensa/include/uapi/asm/mman.h @@ -111,6 +111,8 @@ #define MADV_DONTNEED_LOCKED 24 /* like DONTNEED, but drop locked pages too */ +#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */ + /* compatibility flags */ #define MAP_FILE 0 diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 816a9937f30e..ddad7c7af44e 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -236,6 +236,9 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud, int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags, int advice); +int madvise_collapse(struct vm_area_struct *vma, + struct vm_area_struct **prev, + unsigned long start, unsigned long end); void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start, unsigned long end, long adjust_next); spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma); @@ -392,6 +395,15 @@ static inline int hugepage_madvise(struct vm_area_struct *vma, BUG(); return 0; } + +static inline int madvise_collapse(struct vm_area_struct *vma, + struct vm_area_struct **prev, + unsigned long start, unsigned long end) +{ + BUG(); + return 0; +} + static inline void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start, unsigned long end, diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h index 6c1aa92a92e4..6ce1f1ceb432 
100644 --- a/include/uapi/asm-generic/mman-common.h +++ b/include/uapi/asm-generic/mman-common.h @@ -77,6 +77,8 @@ #define MADV_DONTNEED_LOCKED 24 /* like DONTNEED, but drop locked pages too */ +#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */ + /* compatibility flags */ #define MAP_FILE 0 diff --git a/mm/khugepaged.c b/mm/khugepaged.c index e330a95a0479..c745829c3965 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -846,6 +846,23 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void) return khugepaged_defrag() ? GFP_TRANSHUGE : GFP_TRANSHUGE_LIGHT; } +static struct page *alloc_hpage(struct collapse_control *cc, gfp_t gfp, + int node) +{ + VM_BUG_ON_PAGE(cc->hpage, cc->hpage); + + cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER); + if (unlikely(!cc->hpage)) { + count_vm_event(THP_COLLAPSE_ALLOC_FAILED); + cc->hpage = ERR_PTR(-ENOMEM); + return NULL; + } + + prep_transhuge_page(cc->hpage); + count_vm_event(THP_COLLAPSE_ALLOC); + return cc->hpage; +} + #ifdef CONFIG_NUMA static int khugepaged_find_target_node(struct collapse_control *cc) { @@ -892,18 +909,7 @@ static bool khugepaged_prealloc_page(struct page **hpage, bool *wait) static struct page *khugepaged_alloc_page(struct collapse_control *cc, gfp_t gfp, int node) { - VM_BUG_ON_PAGE(cc->hpage, cc->hpage); - - cc->hpage = __alloc_pages_node(node, gfp, HPAGE_PMD_ORDER); - if (unlikely(!cc->hpage)) { - count_vm_event(THP_COLLAPSE_ALLOC_FAILED); - cc->hpage = ERR_PTR(-ENOMEM); - return NULL; - } - - prep_transhuge_page(cc->hpage); - count_vm_event(THP_COLLAPSE_ALLOC); - return cc->hpage; + return alloc_hpage(cc, gfp, node); } #else static int khugepaged_find_target_node(struct collapse_control *cc) @@ -2469,3 +2475,122 @@ void khugepaged_min_free_kbytes_update(void) set_recommended_min_free_kbytes(); mutex_unlock(&khugepaged_mutex); } + +static void madvise_collapse_cleanup_page(struct page **hpage) +{ + if (!IS_ERR(*hpage) && *hpage) + put_page(*hpage); + *hpage = NULL; +} + +static int madvise_collapse_errno(enum scan_result r) +{ + switch (r) { + case SCAN_PMD_NULL: + case SCAN_ADDRESS_RANGE: + case SCAN_VMA_NULL: + case SCAN_PTE_NON_PRESENT: + case SCAN_PAGE_NULL: + /* + * Addresses in the specified range are not currently mapped, + * or are outside the AS of the process. + */ + return -ENOMEM; + case SCAN_ALLOC_HUGE_PAGE_FAIL: + case SCAN_CGROUP_CHARGE_FAIL: + /* A kernel resource was temporarily unavailable. 
*/ + return -EAGAIN; + default: + return -EINVAL; + } +} + +int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev, + unsigned long start, unsigned long end) +{ + struct collapse_control cc = { + .last_target_node = NUMA_NO_NODE, + .hpage = NULL, + .alloc_hpage = &alloc_hpage, + }; + struct mm_struct *mm = vma->vm_mm; + struct collapse_result cr; + unsigned long hstart, hend, addr; + int thps = 0, nr_hpages = 0; + + BUG_ON(vma->vm_start > start); + BUG_ON(vma->vm_end < end); + + *prev = vma; + + if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) + return -EINVAL; + + hstart = (start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK; + hend = end & HPAGE_PMD_MASK; + nr_hpages = (hend - hstart) >> HPAGE_PMD_SHIFT; + + if (hstart >= hend || !transparent_hugepage_active(vma)) + return -EINVAL; + + mmgrab(mm); + lru_add_drain(); + + for (addr = hstart; ; ) { + mmap_assert_locked(mm); + cond_resched(); + memset(&cr, 0, sizeof(cr)); + + if (unlikely(khugepaged_test_exit(mm))) + break; + + memset(cc.node_load, 0, sizeof(cc.node_load)); + khugepaged_scan_pmd(mm, vma, addr, &cc, &cr); + if (cr.dropped_mmap_lock) + *prev = NULL; /* tell madvise we dropped mmap_lock */ + + switch (cr.result) { + /* Whitelisted set of results where continuing OK */ + case SCAN_SUCCEED: + case SCAN_PMD_MAPPED: + ++thps; + case SCAN_PMD_NULL: + case SCAN_PTE_NON_PRESENT: + case SCAN_PTE_UFFD_WP: + case SCAN_PAGE_RO: + case SCAN_LACK_REFERENCED_PAGE: + case SCAN_PAGE_NULL: + case SCAN_PAGE_COUNT: + case SCAN_PAGE_LOCK: + case SCAN_PAGE_COMPOUND: + break; + case SCAN_PAGE_LRU: + lru_add_drain_all(); + goto retry; + default: + /* Other error, exit */ + goto break_loop; + } + addr += HPAGE_PMD_SIZE; + if (addr >= hend) + break; +retry: + if (cr.dropped_mmap_lock) { + mmap_read_lock(mm); + if (hugepage_vma_revalidate(mm, addr, &vma)) + goto out; + } + madvise_collapse_cleanup_page(&cc.hpage); + } + +break_loop: + /* madvise_walk_vmas() expects us to hold mmap_lock on return */ + if (cr.dropped_mmap_lock) + mmap_read_lock(mm); +out: + mmap_assert_locked(mm); + madvise_collapse_cleanup_page(&cc.hpage); + mmdrop(mm); + + return thps == nr_hpages ? 0 : madvise_collapse_errno(cr.result); +} diff --git a/mm/madvise.c b/mm/madvise.c index ec03a76244b7..7ad53e5311cf 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -59,6 +59,7 @@ static int madvise_need_mmap_write(int behavior) case MADV_FREE: case MADV_POPULATE_READ: case MADV_POPULATE_WRITE: + case MADV_COLLAPSE: return 0; default: /* be safe, default to 1. list exceptions explicitly */ @@ -1051,6 +1052,8 @@ static int madvise_vma_behavior(struct vm_area_struct *vma, if (error) goto out; break; + case MADV_COLLAPSE: + return madvise_collapse(vma, prev, start, end); } anon_name = anon_vma_name(vma); @@ -1144,6 +1147,7 @@ madvise_behavior_valid(int behavior) #ifdef CONFIG_TRANSPARENT_HUGEPAGE case MADV_HUGEPAGE: case MADV_NOHUGEPAGE: + case MADV_COLLAPSE: #endif case MADV_DONTDUMP: case MADV_DODUMP: @@ -1333,6 +1337,7 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start, * MADV_NOHUGEPAGE - mark the given range as not worth being backed by * transparent huge pages so the existing pages will not be * coalesced into THP and new pages will not be allocated as THP. + * MADV_COLLAPSE - synchronously coalesce pages into new THP. * MADV_DONTDUMP - the application wants to prevent pages in the given range * from being included in its core dump. * MADV_DODUMP - cancel MADV_DONTDUMP: no longer exclude from core dump. 
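A usage note on the madvise_collapse_errno() mapping above: -EAGAIN denotes a transient resource failure and is a reasonable candidate for retrying, while -ENOMEM and -EINVAL are not. A sketch of such a retry wrapper follows; the helper name and backoff interval are illustrative assumptions, not part of this patch:

#include <errno.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25	/* asm-generic value proposed by this series */
#endif

/* Hypothetical wrapper: retry MADV_COLLAPSE a few times on EAGAIN. */
static int madvise_collapse_retry(void *addr, size_t len, int tries)
{
	while (tries-- > 0) {
		if (!madvise(addr, len, MADV_COLLAPSE))
			return 0;
		if (errno != EAGAIN)
			return -1;	/* EINVAL/ENOMEM etc.: not retryable */
		usleep(10000);		/* brief, arbitrary backoff */
	}
	return -1;
}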
From patchwork Thu Apr 14 18:06:06 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813847
Date: Thu, 14 Apr 2022 11:06:06 -0700
Message-Id: <20220414180612.3844426-7-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 06/12] mm/khugepaged: remove khugepaged prefix from shared collapse functions
From: "Zach O'Keefe"
The following functions/tracepoints are shared between khugepaged and madvise collapse contexts. Remove the khugepaged prefixes:

tracepoint:mm_khugepaged_scan_pmd -> tracepoint:mm_scan_pmd
khugepaged_test_exit() -> test_exit()
khugepaged_scan_abort() -> scan_abort()
khugepaged_scan_pmd() -> scan_pmd()
khugepaged_find_target_node() -> find_target_node()

Signed-off-by: Zach O'Keefe
Reported-by: kernel test robot
--- include/trace/events/huge_memory.h | 2 +- mm/khugepaged.c | 70 ++++++++++++++---------------- 2 files changed, 34 insertions(+), 38 deletions(-) diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h index 9faa678e0a5b..09be0e2f76b1 100644 --- a/include/trace/events/huge_memory.h +++ b/include/trace/events/huge_memory.h @@ -48,7 +48,7 @@ SCAN_STATUS #define EM(a, b) {a, b}, #define EMe(a, b) {a, b} -TRACE_EVENT(mm_khugepaged_scan_pmd, +TRACE_EVENT(mm_scan_pmd, TP_PROTO(struct mm_struct *mm, struct page *page, bool writable, int referenced, int none_or_zero, int status, int unmapped), diff --git a/mm/khugepaged.c b/mm/khugepaged.c index c745829c3965..716ba465b356 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -90,7 +90,7 @@ struct collapse_control { /* Num pages scanned per node */ int node_load[MAX_NUMNODES]; - /* Last target selected in khugepaged_find_target_node() for this scan */ + /* Last target selected in find_target_node() for this scan */ int last_target_node; struct page *hpage; @@ -453,7 +453,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm, hash_add(mm_slots_hash, &mm_slot->hash, (long)mm); } -static inline int khugepaged_test_exit(struct mm_struct *mm) +static inline int test_exit(struct mm_struct *mm) { return atomic_read(&mm->mm_users) == 0; } @@ -505,7 +505,7 @@ void __khugepaged_enter(struct mm_struct *mm) return; /* __khugepaged_exit() must not run from under us */ - VM_BUG_ON_MM(khugepaged_test_exit(mm), mm); + VM_BUG_ON_MM(test_exit(mm), mm); if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) { free_mm_slot(mm_slot); return; @@ -557,12 +557,11 @@ void __khugepaged_exit(struct mm_struct *mm) mmdrop(mm); } else if (mm_slot) { /* * This is required to serialize
against - * khugepaged_test_exit() (which is guaranteed to run - * under mmap sem read mode). Stop here (after we - * return all pagetables will be destroyed) until - * khugepaged has finished working on the pagetables - * under the mmap_lock. + * This is required to serialize against test_exit() (which is + * guaranteed to run under mmap sem read mode). Stop here + * (after we return all pagetables will be destroyed) until + * khugepaged has finished working on the pagetables under + * the mmap_lock. */ mmap_write_lock(mm); mmap_write_unlock(mm); @@ -816,7 +815,7 @@ static void khugepaged_alloc_sleep(void) remove_wait_queue(&khugepaged_wait, &wait); } -static bool khugepaged_scan_abort(int nid, struct collapse_control *cc) +static bool scan_abort(int nid, struct collapse_control *cc) { int i; @@ -864,7 +863,7 @@ static struct page *alloc_hpage(struct collapse_control *cc, gfp_t gfp, } #ifdef CONFIG_NUMA -static int khugepaged_find_target_node(struct collapse_control *cc) +static int find_target_node(struct collapse_control *cc) { int nid, target_node = 0, max_value = 0; @@ -912,7 +911,7 @@ static struct page *khugepaged_alloc_page(struct collapse_control *cc, return alloc_hpage(cc, gfp, node); } #else -static int khugepaged_find_target_node(struct collapse_control *cc) +static int find_target_node(struct collapse_control *cc) { return 0; } @@ -993,7 +992,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address, struct vm_area_struct *vma; unsigned long hstart, hend; - if (unlikely(khugepaged_test_exit(mm))) + if (unlikely(test_exit(mm))) return SCAN_ANY_PROCESS; *vmap = vma = find_vma(mm, address); @@ -1037,7 +1036,7 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm, /* * Bring missing pages in from swap, to complete THP collapse. - * Only done if khugepaged_scan_pmd believes it is worthwhile. + * Only done if scan_pmd believes it is worthwhile. * * Called and returns without pte mapped or spinlocks held, * but with mmap_lock held to protect against vma changes. @@ -1129,7 +1128,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address, mmap_read_unlock(mm); cr->dropped_mmap_lock = true; - node = khugepaged_find_target_node(cc); + node = find_target_node(cc); /* sched to specified node before huage page memory copy */ if (task_node(current) != node) { cpumask = cpumask_of_node(node); @@ -1270,11 +1269,9 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address, return; } -static void khugepaged_scan_pmd(struct mm_struct *mm, - struct vm_area_struct *vma, - unsigned long address, - struct collapse_control *cc, - struct collapse_result *cr) +static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma, + unsigned long address, struct collapse_control *cc, + struct collapse_result *cr) { pmd_t *pmd; pte_t *pte, *_pte; @@ -1364,7 +1361,7 @@ static void khugepaged_scan_pmd(struct mm_struct *mm, * hit record. 
*/ node = page_to_nid(page); - if (khugepaged_scan_abort(node, cc)) { + if (scan_abort(node, cc)) { cr->result = SCAN_SCAN_ABORT; goto out_unmap; } @@ -1421,8 +1418,8 @@ static void khugepaged_scan_pmd(struct mm_struct *mm, /* collapse_huge_page will return with the mmap_lock released */ collapse_huge_page(mm, address, cc, referenced, unmapped, cr); out: - trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced, - none_or_zero, cr->result, unmapped); + trace_mm_scan_pmd(mm, page, writable, referenced, none_or_zero, + cr->result, unmapped); } static void collect_mm_slot(struct mm_slot *mm_slot) @@ -1431,7 +1428,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot) lockdep_assert_held(&khugepaged_mm_lock); - if (khugepaged_test_exit(mm)) { + if (test_exit(mm)) { /* free mm_slot */ hash_del(&mm_slot->hash); list_del(&mm_slot->mm_node); @@ -1602,7 +1599,7 @@ static void khugepaged_collapse_pte_mapped_thps(struct mm_slot *mm_slot) if (!mmap_write_trylock(mm)) return; - if (unlikely(khugepaged_test_exit(mm))) + if (unlikely(test_exit(mm))) goto out; for (i = 0; i < mm_slot->nr_pte_mapped_thp; i++) @@ -1665,7 +1662,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff) * it'll always mapped in small page size for uffd-wp * registered ranges. */ - if (!khugepaged_test_exit(mm) && !userfaultfd_wp(vma)) + if (!test_exit(mm) && !userfaultfd_wp(vma)) collapse_and_free_pmd(mm, vma, addr, pmd); mmap_write_unlock(mm); } else { @@ -1723,7 +1720,7 @@ static void collapse_file(struct mm_struct *mm, /* Only allocate from the target node */ gfp = alloc_hugepage_khugepaged_gfpmask() | __GFP_THISNODE; - node = khugepaged_find_target_node(cc); + node = find_target_node(cc); new_page = cc->alloc_hpage(cc, gfp, node); if (!new_page) { @@ -2107,7 +2104,7 @@ static void khugepaged_scan_file(struct mm_struct *mm, } node = page_to_nid(page); - if (khugepaged_scan_abort(node, cc)) { + if (scan_abort(node, cc)) { cr->result = SCAN_SCAN_ABORT; break; } @@ -2196,7 +2193,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, vma = NULL; if (unlikely(!mmap_read_trylock(mm))) goto breakouterloop_mmap_lock; - if (likely(!khugepaged_test_exit(mm))) + if (likely(!test_exit(mm))) vma = find_vma(mm, khugepaged_scan.address); progress++; @@ -2204,7 +2201,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, unsigned long hstart, hend; cond_resched(); - if (unlikely(khugepaged_test_exit(mm))) { + if (unlikely(test_exit(mm))) { progress++; break; } @@ -2228,7 +2225,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, while (khugepaged_scan.address < hend) { struct collapse_result cr = {0}; cond_resched(); - if (unlikely(khugepaged_test_exit(mm))) + if (unlikely(test_exit(mm))) goto breakouterloop; VM_BUG_ON(khugepaged_scan.address < hstart || @@ -2244,9 +2241,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, khugepaged_scan_file(mm, file, pgoff, cc, &cr); fput(file); } else { - khugepaged_scan_pmd(mm, vma, - khugepaged_scan.address, - cc, &cr); + scan_pmd(mm, vma, khugepaged_scan.address, cc, + &cr); } if (cr.result == SCAN_SUCCEED) ++khugepaged_pages_collapsed; @@ -2270,7 +2266,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, * Release the current mm_slot if this mm is about to die, or * if we scanned all vmas of this mm. 
*/ - if (khugepaged_test_exit(mm) || !vma) { + if (test_exit(mm) || !vma) { /* * Make sure that if mm_users is reaching zero while * khugepaged runs here, khugepaged_exit will find @@ -2541,11 +2537,11 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev, cond_resched(); memset(&cr, 0, sizeof(cr)); - if (unlikely(khugepaged_test_exit(mm))) + if (unlikely(test_exit(mm))) break; memset(cc.node_load, 0, sizeof(cc.node_load)); - khugepaged_scan_pmd(mm, vma, addr, &cc, &cr); + scan_pmd(mm, vma, addr, &cc, &cr); if (cr.dropped_mmap_lock) *prev = NULL; /* tell madvise we dropped mmap_lock */
From patchwork Thu Apr 14 18:06:07 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813848
Date: Thu, 14 Apr 2022 11:06:07 -0700
Message-Id: <20220414180612.3844426-8-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 07/12] mm/khugepaged: add flag to ignore khugepaged_max_ptes_*
From: "Zach O'Keefe"

Add an enforce_pte_scan_limits flag to struct collapse_control that allows a context to ignore the sysfs-controlled knobs khugepaged_max_ptes_[none|swap|shared]. Set the flag in the khugepaged collapse context to preserve existing khugepaged behavior, and leave it unset in the madvise collapse context, since the user presumably has reason to believe the collapse will be beneficial.
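For reference, the bypassed tunables live under /sys/kernel/mm/transparent_hugepage/khugepaged/. A small sketch of how a test or tool might read one (illustrative only, not part of this patch):

#include <stdio.h>

/* Read a khugepaged tunable, e.g. "max_ptes_none"; returns -1 on error. */
static long read_khugepaged_knob(const char *name)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/mm/transparent_hugepage/khugepaged/%s", name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}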
Signed-off-by: Zach O'Keefe --- mm/khugepaged.c | 32 ++++++++++++++++++++++---------- 1 file changed, 22 insertions(+), 10 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 716ba465b356..2f95f60431aa 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -87,6 +87,9 @@ static struct kmem_cache *mm_slot_cache __read_mostly; #define MAX_PTE_MAPPED_THP 8 struct collapse_control { + /* Respect khugepaged_max_ptes_[none|swap|shared] */ + bool enforce_pte_scan_limits; + /* Num pages scanned per node */ int node_load[MAX_NUMNODES]; @@ -631,6 +634,7 @@ static bool is_refcount_suitable(struct page *page) static int __collapse_huge_page_isolate(struct vm_area_struct *vma, unsigned long address, pte_t *pte, + struct collapse_control *cc, struct list_head *compound_pagelist) { struct page *page = NULL; @@ -644,7 +648,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma, if (pte_none(pteval) || (pte_present(pteval) && is_zero_pfn(pte_pfn(pteval)))) { if (!userfaultfd_armed(vma) && - ++none_or_zero <= khugepaged_max_ptes_none) { + (++none_or_zero <= khugepaged_max_ptes_none || + !cc->enforce_pte_scan_limits)) { continue; } else { result = SCAN_EXCEED_NONE_PTE; @@ -664,8 +669,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma, VM_BUG_ON_PAGE(!PageAnon(page), page); - if (page_mapcount(page) > 1 && - ++shared > khugepaged_max_ptes_shared) { + if (cc->enforce_pte_scan_limits && page_mapcount(page) > 1 && + ++shared > khugepaged_max_ptes_shared) { result = SCAN_EXCEED_SHARED_PTE; count_vm_event(THP_SCAN_EXCEED_SHARED_PTE); goto out; @@ -1207,7 +1212,7 @@ static void collapse_huge_page(struct mm_struct *mm, unsigned long address, mmu_notifier_invalidate_range_end(&range); spin_lock(pte_ptl); - cr->result = __collapse_huge_page_isolate(vma, address, pte, + cr->result = __collapse_huge_page_isolate(vma, address, pte, cc, &compound_pagelist); spin_unlock(pte_ptl); @@ -1296,7 +1301,8 @@ static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma, _pte++, _address += PAGE_SIZE) { pte_t pteval = *_pte; if (is_swap_pte(pteval)) { - if (++unmapped <= khugepaged_max_ptes_swap) { + if (++unmapped <= khugepaged_max_ptes_swap || + !cc->enforce_pte_scan_limits) { /* * Always be strict with uffd-wp * enabled swap entries. 
Please see @@ -1315,7 +1321,8 @@ static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma, } if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) { if (!userfaultfd_armed(vma) && - ++none_or_zero <= khugepaged_max_ptes_none) { + (++none_or_zero <= khugepaged_max_ptes_none || + !cc->enforce_pte_scan_limits)) { continue; } else { cr->result = SCAN_EXCEED_NONE_PTE; @@ -1345,8 +1352,9 @@ static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma, goto out_unmap; } - if (page_mapcount(page) > 1 && - ++shared > khugepaged_max_ptes_shared) { + if (cc->enforce_pte_scan_limits && + page_mapcount(page) > 1 && + ++shared > khugepaged_max_ptes_shared) { cr->result = SCAN_EXCEED_SHARED_PTE; count_vm_event(THP_SCAN_EXCEED_SHARED_PTE); goto out_unmap; @@ -2086,7 +2094,8 @@ static void khugepaged_scan_file(struct mm_struct *mm, continue; if (xa_is_value(page)) { - if (++swap > khugepaged_max_ptes_swap) { + if (cc->enforce_pte_scan_limits && + ++swap > khugepaged_max_ptes_swap) { cr->result = SCAN_EXCEED_SWAP_PTE; count_vm_event(THP_SCAN_EXCEED_SWAP_PTE); break; } @@ -2137,7 +2146,8 @@ static void khugepaged_scan_file(struct mm_struct *mm, rcu_read_unlock(); if (cr->result == SCAN_SUCCEED) { - if (present < HPAGE_PMD_NR - khugepaged_max_ptes_none) { + if (present < HPAGE_PMD_NR - khugepaged_max_ptes_none && + cc->enforce_pte_scan_limits) { cr->result = SCAN_EXCEED_NONE_PTE; count_vm_event(THP_SCAN_EXCEED_NONE_PTE); } else { @@ -2364,6 +2374,7 @@ static int khugepaged(void *none) { struct mm_slot *mm_slot; struct collapse_control cc = { + .enforce_pte_scan_limits = true, .last_target_node = NUMA_NO_NODE, .alloc_hpage = &khugepaged_alloc_page, }; @@ -2505,6 +2516,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev, unsigned long start, unsigned long end) { struct collapse_control cc = { + .enforce_pte_scan_limits = false, .last_target_node = NUMA_NO_NODE, .hpage = NULL, .alloc_hpage = &alloc_hpage,
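One consequence worth spelling out: with the flag unset, MADV_COLLAPSE can collapse a range that khugepaged would reject with SCAN_EXCEED_NONE_PTE. An illustrative sketch (assumes the series applied and a PMD-aligned, PMD-sized anon mapping obtained as in the earlier example):

#include <stddef.h>
#include <sys/mman.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25	/* asm-generic value proposed by this series */
#endif

/* Illustrative only: p is a PMD-aligned, PMD-sized anon mapping. */
static int collapse_sparse(char *p, size_t pmd_size)
{
	/* Touch a single base page; most PTEs stay none. */
	p[0] = 1;
	/*
	 * khugepaged would bail with SCAN_EXCEED_NONE_PTE here, but
	 * MADV_COLLAPSE ignores khugepaged_max_ptes_none and collapses.
	 */
	return madvise(p, pmd_size, MADV_COLLAPSE);
}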
From patchwork Thu Apr 14 18:06:08 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813849
Date: Thu, 14 Apr 2022 11:06:08 -0700
Message-Id: <20220414180612.3844426-9-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 08/12] mm/khugepaged: add flag to ignore page young/referenced requirement
From: "Zach O'Keefe"

Add an enforce_young flag to struct collapse_control that allows a context to ignore the requirement that some pages in the region being collapsed be young or referenced.
Set the flag in the khugepaged collapse context to preserve existing khugepaged behavior, and leave it unset in the madvise collapse context, since the user presumably has reason to believe the collapse will be beneficial.

Signed-off-by: Zach O'Keefe
--- mm/khugepaged.c | 24 ++++++++++++++++-------- 1 file changed, 16 insertions(+), 8 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 2f95f60431aa..b9bf15faba26 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -90,6 +90,9 @@ struct collapse_control { /* Respect khugepaged_max_ptes_[none|swap|shared] */ bool enforce_pte_scan_limits; + /* Require memory to be young */ + bool enforce_young; + /* Num pages scanned per node */ int node_load[MAX_NUMNODES]; @@ -737,9 +740,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma, list_add_tail(&page->lru, compound_pagelist); next: /* There should be enough young pte to collapse the page */ - if (pte_young(pteval) || - page_is_young(page) || PageReferenced(page) || - mmu_notifier_test_young(vma->vm_mm, address)) + if (cc->enforce_young && + (pte_young(pteval) || page_is_young(page) || + PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm, + address))) referenced++; if (pte_write(pteval)) @@ -748,7 +752,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma, if (unlikely(!writable)) { result = SCAN_PAGE_RO; - } else if (unlikely(!referenced)) { + } else if (unlikely(cc->enforce_young && !referenced)) { result = SCAN_LACK_REFERENCED_PAGE; } else { result = SCAN_SUCCEED; @@ -1408,14 +1412,16 @@ static void scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma, cr->result = SCAN_PAGE_COUNT; goto out_unmap; } - if (pte_young(pteval) || - page_is_young(page) || PageReferenced(page) || - mmu_notifier_test_young(vma->vm_mm, address)) + if (cc->enforce_young && + (pte_young(pteval) || page_is_young(page) || + PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm, + address))) referenced++; } if (!writable) { cr->result = SCAN_PAGE_RO; - } else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) { + } else if (cc->enforce_young && (!referenced || (unmapped && referenced + < HPAGE_PMD_NR / 2))) { cr->result = SCAN_LACK_REFERENCED_PAGE; } else { cr->result = SCAN_SUCCEED; @@ -2375,6 +2381,7 @@ static int khugepaged(void *none) struct mm_slot *mm_slot; struct collapse_control cc = { .enforce_pte_scan_limits = true, + .enforce_young = true, .last_target_node = NUMA_NO_NODE, .alloc_hpage = &khugepaged_alloc_page, }; @@ -2517,6 +2524,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev, { struct collapse_control cc = { .enforce_pte_scan_limits = false, + .enforce_young = false, .last_target_node = NUMA_NO_NODE, .hpage = NULL, .alloc_hpage = &alloc_hpage,
From patchwork Thu Apr 14 18:06:09 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813850
Date: Thu, 14 Apr 2022 11:06:09 -0700
Message-Id: <20220414180612.3844426-10-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 09/12] mm/madvise: add MADV_COLLAPSE to process_madvise()
From: "Zach O'Keefe"
Shutemov" , Matt Turner , Max Filippov , Miaohe Lin , Minchan Kim , Patrick Xia , Pavel Begunkov , Peter Xu , Thomas Bogendoerfer , "Zach O'Keefe" Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=K1pmTkZ7; spf=pass (imf24.hostedemail.com: domain of 3uWJYYgcKCPg4tpjjkjlttlqj.htrqnsz2-rrp0fhp.twl@flex--zokeefe.bounces.google.com designates 209.85.216.73 as permitted sender) smtp.mailfrom=3uWJYYgcKCPg4tpjjkjlttlqj.htrqnsz2-rrp0fhp.twl@flex--zokeefe.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: oadoz1p5ttrgfyuzkurskzp545nae8pw X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 49522180002 X-HE-Tag: 1649959610-246864 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Allow MADV_COLLAPSE behavior for process_madvise(2) if caller has CAP_SYS_ADMIN or is requesting collapse of it's own memory. Signed-off-by: Zach O'Keefe --- mm/madvise.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/mm/madvise.c b/mm/madvise.c index 7ad53e5311cf..a5c82fa7972b 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -1165,13 +1165,15 @@ madvise_behavior_valid(int behavior) } static bool -process_madvise_behavior_valid(int behavior) +process_madvise_behavior_valid(int behavior, struct task_struct *task) { switch (behavior) { case MADV_COLD: case MADV_PAGEOUT: case MADV_WILLNEED: return true; + case MADV_COLLAPSE: + return task == current || capable(CAP_SYS_ADMIN); default: return false; } @@ -1449,7 +1451,7 @@ SYSCALL_DEFINE5(process_madvise, int, pidfd, const struct iovec __user *, vec, goto free_iov; } - if (!process_madvise_behavior_valid(behavior)) { + if (!process_madvise_behavior_valid(behavior, task)) { ret = -EINVAL; goto release_task; } From patchwork Thu Apr 14 18:06:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zach O'Keefe X-Patchwork-Id: 12813851 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E7336C433EF for ; Thu, 14 Apr 2022 18:06:54 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5E3606B0081; Thu, 14 Apr 2022 14:06:53 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 591A56B0083; Thu, 14 Apr 2022 14:06:53 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 40AA36B0085; Thu, 14 Apr 2022 14:06:53 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id 2F59B6B0081 for ; Thu, 14 Apr 2022 14:06:53 -0400 (EDT) Received: from smtpin16.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 040D823581 for ; Thu, 14 Apr 2022 18:06:52 +0000 (UTC) X-FDA: 79356265506.16.F1A0E40 Received: from mail-pg1-f202.google.com (mail-pg1-f202.google.com [209.85.215.202]) by imf30.hostedemail.com (Postfix) with ESMTP id 7015980006 for ; Thu, 14 Apr 2022 18:06:52 +0000 (UTC) Received: by mail-pg1-f202.google.com with SMTP id c32-20020a631c60000000b0039cec64e9f1so3092275pgm.3 for ; Thu, 14 Apr 2022 11:06:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; 
From patchwork Thu Apr 14 18:06:10 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813851
Date: Thu, 14 Apr 2022 11:06:10 -0700
Message-Id: <20220414180612.3844426-11-zokeefe@google.com>
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 10/12] selftests/vm: modularize collapse selftests
From: "Zach O'Keefe"

Modularize the collapse action of the khugepaged collapse selftests by introducing a struct collapse_context, which specifies how to collapse a given memory range and the expected semantics of the collapse. This can be reused later to test other collapse contexts, as sketched below.
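To show the intended reuse, a second entry could plug into contexts[] in tools/testing/selftests/vm/khugepaged.c roughly as below. This is a hypothetical sketch, not part of this patch (a later patch in the series wires up the real madvise context); it assumes MADV_COLLAPSE is defined and reuses the file's existing check_huge(), success()/fail() and hpage_pmd_size helpers:

static void madvise_collapse(const char *msg, char *p, bool expect)
{
	printf("%s...", msg);
	/* MADV_COLLAPSE is synchronous: no wait_for_scan() needed. */
	madvise(p, hpage_pmd_size, MADV_COLLAPSE);
	if (check_huge(p) == expect)
		success("OK");
	else
		fail("Fail");
}

/* and in main(): */
struct collapse_context contexts[] = {
	{ .name = "khugepaged", .collapse = &khugepaged_collapse,
	  .enforce_pte_scan_limits = true, },
	{ .name = "madvise", .collapse = &madvise_collapse,
	  .enforce_pte_scan_limits = false, },
};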
Signed-off-by: Zach O'Keefe --- tools/testing/selftests/vm/khugepaged.c | 257 +++++++++++------------- 1 file changed, 116 insertions(+), 141 deletions(-) diff --git a/tools/testing/selftests/vm/khugepaged.c b/tools/testing/selftests/vm/khugepaged.c index 155120b67a16..c59d832fee96 100644 --- a/tools/testing/selftests/vm/khugepaged.c +++ b/tools/testing/selftests/vm/khugepaged.c @@ -23,6 +23,12 @@ static int hpage_pmd_nr; #define THP_SYSFS "/sys/kernel/mm/transparent_hugepage/" #define PID_SMAPS "/proc/self/smaps" +struct collapse_context { + const char *name; + void (*collapse)(const char *msg, char *p, bool expect); + bool enforce_pte_scan_limits; +}; + enum thp_enabled { THP_ALWAYS, THP_MADVISE, @@ -528,53 +534,39 @@ static void alloc_at_fault(void) munmap(p, hpage_pmd_size); } -static void collapse_full(void) +static void collapse_full(struct collapse_context *context) { void *p; p = alloc_mapping(); fill_memory(p, 0, hpage_pmd_size); - if (wait_for_scan("Collapse fully populated PTE table", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Collapse fully populated PTE table", p, true); validate_memory(p, 0, hpage_pmd_size); munmap(p, hpage_pmd_size); } -static void collapse_empty(void) +static void collapse_empty(struct collapse_context *context) { void *p; p = alloc_mapping(); - if (wait_for_scan("Do not collapse empty PTE table", p)) - fail("Timeout"); - else if (check_huge(p)) - fail("Fail"); - else - success("OK"); + context->collapse("Do not collapse empty PTE table", p, false); munmap(p, hpage_pmd_size); } -static void collapse_single_pte_entry(void) +static void collapse_single_pte_entry(struct collapse_context *context) { void *p; p = alloc_mapping(); fill_memory(p, 0, page_size); - if (wait_for_scan("Collapse PTE table with single PTE entry present", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Collapse PTE table with single PTE entry present", p, + true); validate_memory(p, 0, page_size); munmap(p, hpage_pmd_size); } -static void collapse_max_ptes_none(void) +static void collapse_max_ptes_none(struct collapse_context *context) { int max_ptes_none = hpage_pmd_nr / 2; struct settings settings = default_settings; @@ -586,28 +578,23 @@ static void collapse_max_ptes_none(void) p = alloc_mapping(); fill_memory(p, 0, (hpage_pmd_nr - max_ptes_none - 1) * page_size); - if (wait_for_scan("Do not collapse with max_ptes_none exceeded", p)) - fail("Timeout"); - else if (check_huge(p)) - fail("Fail"); - else - success("OK"); + context->collapse("Maybe collapse with max_ptes_none exceeded", p, + !context->enforce_pte_scan_limits); validate_memory(p, 0, (hpage_pmd_nr - max_ptes_none - 1) * page_size); - fill_memory(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size); - if (wait_for_scan("Collapse with max_ptes_none PTEs empty", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); - validate_memory(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size); + if (context->enforce_pte_scan_limits) { + fill_memory(p, 0, (hpage_pmd_nr - max_ptes_none) * page_size); + context->collapse("Collapse with max_ptes_none PTEs empty", p, + true); + validate_memory(p, 0, + (hpage_pmd_nr - max_ptes_none) * page_size); + } munmap(p, hpage_pmd_size); write_settings(&default_settings); } -static void collapse_swapin_single_pte(void) +static void collapse_swapin_single_pte(struct collapse_context *context) { void *p; p = alloc_mapping(); @@ -625,18 +612,14 @@ 
static void collapse_swapin_single_pte(void) goto out; } - if (wait_for_scan("Collapse with swapping in single PTE entry", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Collapse with swapping in single PTE entry", + p, true); validate_memory(p, 0, hpage_pmd_size); out: munmap(p, hpage_pmd_size); } -static void collapse_max_ptes_swap(void) +static void collapse_max_ptes_swap(struct collapse_context *context) { int max_ptes_swap = read_num("khugepaged/max_ptes_swap"); void *p; @@ -656,39 +639,34 @@ static void collapse_max_ptes_swap(void) goto out; } - if (wait_for_scan("Do not collapse with max_ptes_swap exceeded", p)) - fail("Timeout"); - else if (check_huge(p)) - fail("Fail"); - else - success("OK"); + context->collapse("Maybe collapse with max_ptes_swap exceeded", + p, !context->enforce_pte_scan_limits); validate_memory(p, 0, hpage_pmd_size); - fill_memory(p, 0, hpage_pmd_size); - printf("Swapout %d of %d pages...", max_ptes_swap, hpage_pmd_nr); - if (madvise(p, max_ptes_swap * page_size, MADV_PAGEOUT)) { - perror("madvise(MADV_PAGEOUT)"); - exit(EXIT_FAILURE); - } - if (check_swap(p, max_ptes_swap * page_size)) { - success("OK"); - } else { - fail("Fail"); - goto out; - } + if (context->enforce_pte_scan_limits) { + fill_memory(p, 0, hpage_pmd_size); + printf("Swapout %d of %d pages...", max_ptes_swap, + hpage_pmd_nr); + if (madvise(p, max_ptes_swap * page_size, MADV_PAGEOUT)) { + perror("madvise(MADV_PAGEOUT)"); + exit(EXIT_FAILURE); + } + if (check_swap(p, max_ptes_swap * page_size)) { + success("OK"); + } else { + fail("Fail"); + goto out; + } - if (wait_for_scan("Collapse with max_ptes_swap pages swapped out", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); - validate_memory(p, 0, hpage_pmd_size); + context->collapse("Collapse with max_ptes_swap pages swapped out", + p, true); + validate_memory(p, 0, hpage_pmd_size); + } out: munmap(p, hpage_pmd_size); } -static void collapse_single_pte_entry_compound(void) +static void collapse_single_pte_entry_compound(struct collapse_context *context) { void *p; @@ -710,17 +688,13 @@ static void collapse_single_pte_entry_compound(void) else fail("Fail"); - if (wait_for_scan("Collapse PTE table with single PTE mapping compound page", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Collapse PTE table with single PTE mapping compound page", + p, true); validate_memory(p, 0, page_size); munmap(p, hpage_pmd_size); } -static void collapse_full_of_compound(void) +static void collapse_full_of_compound(struct collapse_context *context) { void *p; @@ -742,17 +716,12 @@ static void collapse_full_of_compound(void) else fail("Fail"); - if (wait_for_scan("Collapse PTE table full of compound pages", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Collapse PTE table full of compound pages", p, true); validate_memory(p, 0, hpage_pmd_size); munmap(p, hpage_pmd_size); } -static void collapse_compound_extreme(void) +static void collapse_compound_extreme(struct collapse_context *context) { void *p; int i; @@ -798,18 +767,14 @@ static void collapse_compound_extreme(void) else fail("Fail"); - if (wait_for_scan("Collapse PTE table full of different compound pages", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Collapse PTE table full of different compound pages", + p, true); 
validate_memory(p, 0, hpage_pmd_size); munmap(p, hpage_pmd_size); } -static void collapse_fork(void) +static void collapse_fork(struct collapse_context *context) { int wstatus; void *p; @@ -835,13 +800,8 @@ static void collapse_fork(void) fail("Fail"); fill_memory(p, page_size, 2 * page_size); - - if (wait_for_scan("Collapse PTE table with single page shared with parent process", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Collapse PTE table with single page shared with parent process", + p, true); validate_memory(p, 0, page_size); munmap(p, hpage_pmd_size); @@ -860,7 +820,7 @@ static void collapse_fork(void) munmap(p, hpage_pmd_size); } -static void collapse_fork_compound(void) +static void collapse_fork_compound(struct collapse_context *context) { int wstatus; void *p; @@ -896,14 +856,10 @@ static void collapse_fork_compound(void) fill_memory(p, 0, page_size); write_num("khugepaged/max_ptes_shared", hpage_pmd_nr - 1); - if (wait_for_scan("Collapse PTE table full of compound pages in child", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Collapse PTE table full of compound pages in child", + p, true); write_num("khugepaged/max_ptes_shared", - default_settings.khugepaged.max_ptes_shared); + default_settings.khugepaged.max_ptes_shared); validate_memory(p, 0, hpage_pmd_size); munmap(p, hpage_pmd_size); @@ -922,7 +878,7 @@ static void collapse_fork_compound(void) munmap(p, hpage_pmd_size); } -static void collapse_max_ptes_shared() +static void collapse_max_ptes_shared(struct collapse_context *context) { int max_ptes_shared = read_num("khugepaged/max_ptes_shared"); int wstatus; @@ -957,28 +913,22 @@ static void collapse_max_ptes_shared() else fail("Fail"); - if (wait_for_scan("Do not collapse with max_ptes_shared exceeded", p)) - fail("Timeout"); - else if (!check_huge(p)) - success("OK"); - else - fail("Fail"); - - printf("Trigger CoW on page %d of %d...", - hpage_pmd_nr - max_ptes_shared, hpage_pmd_nr); - fill_memory(p, 0, (hpage_pmd_nr - max_ptes_shared) * page_size); - if (!check_huge(p)) - success("OK"); - else - fail("Fail"); - - - if (wait_for_scan("Collapse with max_ptes_shared PTEs shared", p)) - fail("Timeout"); - else if (check_huge(p)) - success("OK"); - else - fail("Fail"); + context->collapse("Maybe collapse with max_ptes_shared exceeded", + p, !context->enforce_pte_scan_limits); + + if (context->enforce_pte_scan_limits) { + printf("Trigger CoW on page %d of %d...", + hpage_pmd_nr - max_ptes_shared, hpage_pmd_nr); + fill_memory(p, 0, (hpage_pmd_nr - max_ptes_shared) * + page_size); + if (!check_huge(p)) + success("OK"); + else + fail("Fail"); + + context->collapse("Collapse with max_ptes_shared PTEs shared", + p, true); + } validate_memory(p, 0, hpage_pmd_size); munmap(p, hpage_pmd_size); @@ -997,8 +947,27 @@ static void collapse_max_ptes_shared() munmap(p, hpage_pmd_size); } +static void khugepaged_collapse(const char *msg, char *p, bool expect) +{ + if (wait_for_scan(msg, p)) + fail("Timeout"); + else if (check_huge(p) == expect) + success("OK"); + else + fail("Fail"); +} + int main(void) { + struct collapse_context contexts[] = { + { + .name = "khugepaged", + .collapse = &khugepaged_collapse, + .enforce_pte_scan_limits = true, + }, + }; + int i; + setbuf(stdout, NULL); page_size = getpagesize(); @@ -1014,18 +983,24 @@ int main(void) adjust_settings(); alloc_at_fault(); - collapse_full(); - collapse_empty(); - collapse_single_pte_entry(); - 
collapse_max_ptes_none(); - collapse_swapin_single_pte(); - collapse_max_ptes_swap(); - collapse_single_pte_entry_compound(); - collapse_full_of_compound(); - collapse_compound_extreme(); - collapse_fork(); - collapse_fork_compound(); - collapse_max_ptes_shared(); + + for (i = 0; i < sizeof(contexts) / sizeof(contexts[0]); ++i) { + struct collapse_context *c = &contexts[i]; + + printf("\n*** Testing context: %s ***\n", c->name); + collapse_full(c); + collapse_empty(c); + collapse_single_pte_entry(c); + collapse_max_ptes_none(c); + collapse_swapin_single_pte(c); + collapse_max_ptes_swap(c); + collapse_single_pte_entry_compound(c); + collapse_full_of_compound(c); + collapse_compound_extreme(c); + collapse_fork(c); + collapse_fork_compound(c); + collapse_max_ptes_shared(c); + } restore_settings(0); }

From patchwork Thu Apr 14 18:06:11 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12813852
Date: Thu, 14 Apr 2022 11:06:11 -0700
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
Message-Id: <20220414180612.3844426-12-zokeefe@google.com>
References: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 11/12] selftests/vm: add MADV_COLLAPSE collapse context to selftests
From: "Zach O'Keefe"

Add MADV_COLLAPSE selftests. Extend struct collapse_context to support context initialization/cleanup. This is used by the madvise collapse context to "disable" and "enable" khugepaged, since it would otherwise interfere with the tests.

The mechanism used to "disable" khugepaged is a hack: it sets /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs to a large value and feeds khugepaged enough suitable VMAs/pages to keep it sleeping for the duration of the madvise collapse tests. Since khugepaged is woken whenever this file is written, enough VMAs must be queued to put it back to sleep each time the tests write to the file in write_settings().
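For concreteness, here is a minimal standalone sketch of the same trick (not from the patch; the sysfs paths are the standard THP locations, while the 2M hugepage size and the wakeup count are assumptions for illustration):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL << 20)	/* assumption: 2M PMD-sized hugepages */

static void write_sysfs(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd >= 0) {
		if (write(fd, val, strlen(val)) < 0)
			; /* best effort for this sketch */
		close(fd);
	}
}

int main(void)
{
	int wakeups = 4;	/* assumption: one per later settings write */
	size_t sz = (wakeups + 1) * HPAGE_SIZE;
	char *p;

	/* Park khugepaged: one page per scan, ~10 minutes per sleep. */
	write_sysfs("/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan", "1");
	write_sysfs("/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs", "600000");

	/* Queue one populated, hugepage-sized VMA per expected wakeup
	 * (plus one), so each wakeup scans a little and sleeps again. */
	p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
		 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	memset(p, 1, sz);

	/* ... run MADV_COLLAPSE tests here without khugepaged racing ... */

	munmap(p, sz);
	return 0;
}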
Signed-off-by: Zach O'Keefe --- tools/testing/selftests/vm/khugepaged.c | 133 ++++++++++++++++++++++-- 1 file changed, 125 insertions(+), 8 deletions(-) diff --git a/tools/testing/selftests/vm/khugepaged.c b/tools/testing/selftests/vm/khugepaged.c index c59d832fee96..e0ccc9443f78 100644 --- a/tools/testing/selftests/vm/khugepaged.c +++ b/tools/testing/selftests/vm/khugepaged.c @@ -14,17 +14,23 @@ #ifndef MADV_PAGEOUT #define MADV_PAGEOUT 21 #endif +#ifndef MADV_COLLAPSE +#define MADV_COLLAPSE 25 +#endif #define BASE_ADDR ((void *)(1UL << 30)) static unsigned long hpage_pmd_size; static unsigned long page_size; static int hpage_pmd_nr; +static int num_khugepaged_wakeups; #define THP_SYSFS "/sys/kernel/mm/transparent_hugepage/" #define PID_SMAPS "/proc/self/smaps" struct collapse_context { const char *name; + bool (*init_context)(void); + bool (*cleanup_context)(void); void (*collapse)(const char *msg, char *p, bool expect); bool enforce_pte_scan_limits; }; @@ -264,6 +270,17 @@ static void write_num(const char *name, unsigned long num) } } +/* + * Use this macro instead of write_settings() inside tests; it must be + * used at most once per call site. + * + * It is a hack that statically counts, via __COUNTER__, the number of + * places that can wake khugepaged by writing + * /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs; + * main() later reads the running count back out of __COUNTER__ into + * num_khugepaged_wakeups. + */ +#define WRITE_SETTINGS(s) do { __COUNTER__; write_settings(s); } while (0) + static void write_settings(struct settings *settings) { struct khugepaged_settings *khugepaged = &settings->khugepaged; @@ -332,7 +349,7 @@ static void adjust_settings(void) { printf("Adjust settings..."); - write_settings(&default_settings); + WRITE_SETTINGS(&default_settings); success("OK"); } @@ -440,20 +457,25 @@ static bool check_swap(void *addr, unsigned long size) return swap; } -static void *alloc_mapping(void) +static void *alloc_mapping_at(void *at, size_t size) { void *p; - p = mmap(BASE_ADDR, hpage_pmd_size, PROT_READ | PROT_WRITE, - MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); - if (p != BASE_ADDR) { - printf("Failed to allocate VMA at %p\n", BASE_ADDR); + p = mmap(at, size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, + -1, 0); + if (p != at) { + printf("Failed to allocate VMA at %p\n", at); exit(EXIT_FAILURE); } return p; } +static void *alloc_mapping(void) +{ + return alloc_mapping_at(BASE_ADDR, hpage_pmd_size); +} + static void fill_memory(int *p, unsigned long start, unsigned long end) { int i; @@ -573,7 +595,7 @@ static void collapse_max_ptes_none(struct collapse_context *context) void *p; settings.khugepaged.max_ptes_none = max_ptes_none; - write_settings(&settings); + WRITE_SETTINGS(&settings); p = alloc_mapping(); @@ -591,7 +613,7 @@ static void collapse_max_ptes_none(struct collapse_context *context) } munmap(p, hpage_pmd_size); - write_settings(&default_settings); + WRITE_SETTINGS(&default_settings); } static void collapse_swapin_single_pte(struct collapse_context *context) @@ -947,6 +969,87 @@ static void collapse_max_ptes_shared(struct collapse_context *context) munmap(p, hpage_pmd_size); } +static void madvise_collapse(const char *msg, char *p, bool expect) +{ + int ret; + + printf("%s...", msg); + /* Sanity check */ + if (check_huge(p)) { + printf("Unexpected huge page\n"); + exit(EXIT_FAILURE); + } + + madvise(p, hpage_pmd_size, MADV_HUGEPAGE); + ret = madvise(p, hpage_pmd_size, MADV_COLLAPSE); + if (((bool)ret) == expect) + fail("Fail: Bad return value"); + else if (check_huge(p) != expect) + fail("Fail: check_huge()"); + else
success("OK"); +} + +static struct khugepaged_disable_state { + void *p; + size_t map_size; +} khugepaged_disable_state; + +static bool disable_khugepaged(void) +{ + /* + * Hack to "disable" khugepaged by setting + * /transparent_hugepage/khugepaged/scan_sleep_millisecs to some large + * value, then feeding it enough suitable VMAs to scan and subsequently + * sleep. + * + * khugepaged is woken up on writes to + * /transparent_hugepage/khugepaged/scan_sleep_millisecs, so care must + * be taken to not inadvertently wake khugepaged in these tests. + * + * Feed khugepaged 1 hugepage-sized VMA to scan and sleep on, then + * N more for each time khugepaged would be woken up. + */ + size_t map_size = (num_khugepaged_wakeups + 1) * hpage_pmd_size; + void *p; + bool ret = true; + int full_scans; + int timeout = 6; /* 3 seconds */ + + default_settings.khugepaged.scan_sleep_millisecs = 1000 * 60 * 10; + default_settings.khugepaged.pages_to_scan = 1; + write_settings(&default_settings); + + p = alloc_mapping_at(((char *)BASE_ADDR) + (1UL << 30), map_size); + fill_memory(p, 0, map_size); + + full_scans = read_num("khugepaged/full_scans") + 2; + + printf("disabling khugepaged..."); + while (timeout--) { + if (read_num("khugepaged/full_scans") >= full_scans) { + fail("Fail"); + ret = false; + break; + } + printf("."); + usleep(TICK); + } + success("OK"); + khugepaged_disable_state.p = p; + khugepaged_disable_state.map_size = map_size; + return ret; +} + +static bool enable_khugepaged(void) +{ + printf("enabling khugepaged..."); + munmap(khugepaged_disable_state.p, khugepaged_disable_state.map_size); + write_settings(&saved_settings); + success("OK"); + return true; +} + static void khugepaged_collapse(const char *msg, char *p, bool expect) { if (wait_for_scan(msg, p)) @@ -962,9 +1065,18 @@ int main(void) struct collapse_context contexts[] = { { .name = "khugepaged", + .init_context = NULL, + .cleanup_context = NULL, .collapse = &khugepaged_collapse, .enforce_pte_scan_limits = true, }, + { + .name = "madvise", + .init_context = &disable_khugepaged, + .cleanup_context = &enable_khugepaged, + .collapse = &madvise_collapse, + .enforce_pte_scan_limits = false, + }, }; int i; @@ -973,6 +1085,7 @@ int main(void) page_size = getpagesize(); hpage_pmd_size = read_num("hpage_pmd_size"); hpage_pmd_nr = hpage_pmd_size / page_size; + num_khugepaged_wakeups = __COUNTER__; default_settings.khugepaged.max_ptes_none = hpage_pmd_nr - 1; default_settings.khugepaged.max_ptes_swap = hpage_pmd_nr / 8; @@ -988,6 +1101,8 @@ int main(void) struct collapse_context *c = &contexts[i]; printf("\n*** Testing context: %s ***\n", c->name); + if (c->init_context && !c->init_context()) + continue; collapse_full(c); collapse_empty(c); collapse_single_pte_entry(c); @@ -1000,6 +1115,8 @@ int main(void) collapse_fork(c); collapse_fork_compound(c); collapse_max_ptes_shared(c); + if (c->cleanup_context && !c->cleanup_context()) + break; } restore_settings(0); From patchwork Thu Apr 14 18:06:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zach O'Keefe X-Patchwork-Id: 12813853 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D5FE4C433F5 for ; Thu, 14 Apr 2022 18:06:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C24206B0085; Thu, 14 Apr 2022 14:06:56 -0400 (EDT) Received: by 
Date: Thu, 14 Apr 2022 11:06:12 -0700
In-Reply-To: <20220414180612.3844426-1-zokeefe@google.com>
Message-Id: <20220414180612.3844426-13-zokeefe@google.com>
References: <20220414180612.3844426-1-zokeefe@google.com>
Subject: [PATCH v2 12/12] selftests/vm: add test to verify recollapse of THPs
From: "Zach O'Keefe"
Shutemov" , Matt Turner , Max Filippov , Miaohe Lin , Minchan Kim , Patrick Xia , Pavel Begunkov , Peter Xu , Thomas Bogendoerfer , "Zach O'Keefe" X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: B72B6140008 X-Stat-Signature: pqjbq9xfb9n3skesimt51z4msekti1r4 Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=juduUCaC; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf23.hostedemail.com: domain of 3vmJYYgcKCP04tpjjkjlttlqj.htrqnsz2-rrp0fhp.twl@flex--zokeefe.bounces.google.com designates 209.85.216.73 as permitted sender) smtp.mailfrom=3vmJYYgcKCP04tpjjkjlttlqj.htrqnsz2-rrp0fhp.twl@flex--zokeefe.bounces.google.com X-Rspam-User: X-HE-Tag: 1649959615-38871 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add selftest specific to madvise collapse context that tests MADV_COLLAPSE is "successful" if a hugepage-algined/sized region is already pmd-mapped. Signed-off-by: Zach O'Keefe --- tools/testing/selftests/vm/khugepaged.c | 32 +++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/tools/testing/selftests/vm/khugepaged.c b/tools/testing/selftests/vm/khugepaged.c index e0ccc9443f78..c36d04218083 100644 --- a/tools/testing/selftests/vm/khugepaged.c +++ b/tools/testing/selftests/vm/khugepaged.c @@ -969,6 +969,32 @@ static void collapse_max_ptes_shared(struct collapse_context *context) munmap(p, hpage_pmd_size); } +static void madvise_collapse_existing_thps(void) +{ + void *p; + int err; + + p = alloc_mapping(); + fill_memory(p, 0, hpage_pmd_size); + + printf("Collapse fully populated PTE table..."); + madvise(p, hpage_pmd_size, MADV_HUGEPAGE); + err = madvise(p, hpage_pmd_size, MADV_COLLAPSE); + if (err == 0 && check_huge(p)) { + success("OK"); + printf("Re-collapse PMD-mapped hugepage"); + err = madvise(p, hpage_pmd_size, MADV_COLLAPSE); + if (err == 0 && check_huge(p)) + success("OK"); + else + fail("Fail"); + } else { + fail("Fail"); + } + validate_memory(p, 0, hpage_pmd_size); + munmap(p, hpage_pmd_size); +} + static void madvise_collapse(const char *msg, char *p, bool expect) { int ret; @@ -1097,6 +1123,7 @@ int main(void) alloc_at_fault(); + /* Shared tests */ for (i = 0; i < sizeof(contexts) / sizeof(contexts[0]); ++i) { struct collapse_context *c = &contexts[i]; @@ -1119,5 +1146,10 @@ int main(void) break; } + /* madvise-specific tests */ + disable_khugepaged(); + madvise_collapse_existing_thps(); + enable_khugepaged(); + restore_settings(0); }