From patchwork Mon May 2 18:17:08 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12834611
Date: Mon, 2 May 2022 11:17:08 -0700
In-Reply-To: <20220502181714.3483177-1-zokeefe@google.com>
Message-Id: <20220502181714.3483177-8-zokeefe@google.com>
References: <20220502181714.3483177-1-zokeefe@google.com>
Subject: [PATCH v4 07/13] mm/khugepaged: add flag to ignore
 page young/referenced requirement
From: "Zach O'Keefe"
To: Alex Shi, David Hildenbrand, David Rientjes, Matthew Wilcox, Michal Hocko,
 Pasha Tatashin, Peter Xu, SeongJae Park, Song Liu, Vlastimil Babka, Yang Shi,
 Zi Yan, linux-mm@kvack.org
Cc: Andrea Arcangeli, Andrew Morton, Arnd Bergmann, Axel Rasmussen,
 Chris Kennelly, Chris Zankel, Helge Deller, Hugh Dickins, Ivan Kokshaysky,
 "James E.J. Bottomley", Jens Axboe, "Kirill A. Shutemov", Matt Turner,
 Max Filippov, Miaohe Lin, Minchan Kim, Patrick Xia, Pavel Begunkov,
 Thomas Bogendoerfer, "Zach O'Keefe"

Add an enforce_young flag to struct collapse_control that allows a context to
ignore the requirement that some pages in the region being collapsed be young
or referenced. Set this flag in the khugepaged collapse context to preserve
existing khugepaged behavior.

This flag will be used (unset) when introducing the madvise collapse context,
since there the user presumably has reason to believe the collapse will be
beneficial, and khugepaged heuristics shouldn't tell the user they are wrong.
Signed-off-by: Zach O'Keefe
---
 mm/khugepaged.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 94f18be83835..b57a4a643053 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -90,6 +90,9 @@ struct collapse_control {
 	/* Respect khugepaged_max_ptes_[none|swap|shared] */
 	bool enforce_pte_scan_limits;
 
+	/* Require memory to be young */
+	bool enforce_young;
+
 	/* Num pages scanned per node */
 	int node_load[MAX_NUMNODES];
 
@@ -720,9 +723,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		list_add_tail(&page->lru, compound_pagelist);
 next:
 		/* There should be enough young pte to collapse the page */
-		if (pte_young(pteval) ||
-		    page_is_young(page) || PageReferenced(page) ||
-		    mmu_notifier_test_young(vma->vm_mm, address))
+		if (cc->enforce_young &&
+		    (pte_young(pteval) || page_is_young(page) ||
+		     PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm,
+								     address)))
 			referenced++;
 
 		if (pte_write(pteval))
@@ -731,7 +735,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 
 	if (unlikely(!writable)) {
 		result = SCAN_PAGE_RO;
-	} else if (unlikely(!referenced)) {
+	} else if (unlikely(cc->enforce_young && !referenced)) {
 		result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
 		result = SCAN_SUCCEED;
@@ -1388,14 +1392,16 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
 			result = SCAN_PAGE_COUNT;
 			goto out_unmap;
 		}
-		if (pte_young(pteval) ||
-		    page_is_young(page) || PageReferenced(page) ||
-		    mmu_notifier_test_young(vma->vm_mm, address))
+		if (cc->enforce_young &&
+		    (pte_young(pteval) || page_is_young(page) ||
+		     PageReferenced(page) || mmu_notifier_test_young(vma->vm_mm,
+								     address)))
 			referenced++;
 	}
 	if (!writable) {
 		result = SCAN_PAGE_RO;
-	} else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
+	} else if (cc->enforce_young && (!referenced || (unmapped && referenced
+							< HPAGE_PMD_NR / 2))) {
 		result = SCAN_LACK_REFERENCED_PAGE;
 	} else {
 		result = SCAN_SUCCEED;
@@ -2348,6 +2354,7 @@ static int khugepaged(void *none)
 	struct mm_slot *mm_slot;
 	struct collapse_control cc = {
 		.enforce_pte_scan_limits = true,
+		.enforce_young = true,
 		.last_target_node = NUMA_NO_NODE,
 		.alloc_charge_hpage = &alloc_charge_hpage,
 	};
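
Not part of the patch: a minimal, standalone C sketch (userspace, not kernel
code) of how enforce_young gates the "referenced" heuristic in the scan-result
decision. scan_result() is a simplified stand-in for the logic in
khugepaged_scan_pmd(), and the madvise_cc initializer is an assumption about
the madvise collapse context introduced later in this series.

/* Build with: cc -o enforce_young_sketch enforce_young_sketch.c */
#include <stdbool.h>
#include <stdio.h>

/* Mirrors the two heuristic-control fields from the patched struct. */
struct collapse_control {
	bool enforce_pte_scan_limits;
	bool enforce_young;
};

/* Simplified stand-in for the final decision in khugepaged_scan_pmd(). */
static const char *scan_result(const struct collapse_control *cc,
			       bool writable, int referenced)
{
	if (!writable)
		return "SCAN_PAGE_RO";
	if (cc->enforce_young && !referenced)
		return "SCAN_LACK_REFERENCED_PAGE";
	return "SCAN_SUCCEED";
}

int main(void)
{
	/* khugepaged context: keeps the existing young/referenced heuristic. */
	struct collapse_control khugepaged_cc = {
		.enforce_pte_scan_limits = true,
		.enforce_young = true,
	};
	/* Hypothetical madvise collapse context: user requested the collapse,
	 * so the young/referenced requirement is not enforced (assumption). */
	struct collapse_control madvise_cc = {
		.enforce_pte_scan_limits = false,
		.enforce_young = false,
	};

	/* Writable region, but no young/referenced PTEs were found. */
	printf("khugepaged: %s\n", scan_result(&khugepaged_cc, true, 0));
	printf("madvise:    %s\n", scan_result(&madvise_cc, true, 0));
	return 0;
}

With referenced == 0, the khugepaged context still reports
SCAN_LACK_REFERENCED_PAGE, while the hypothetical madvise context proceeds to
SCAN_SUCCEED, which is the behavioral difference this patch makes possible.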