From patchwork Tue Aug 17 20:21:46 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12441941
From: Yang Shi <shy828301@gmail.com>
To: hughd@google.com, kirill.shutemov@linux.intel.com, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH] mm: khugepaged: don't carry huge page to the next loop for !CONFIG_NUMA
Date: Tue, 17 Aug 2021 13:21:46 -0700
Message-Id: <20210817202146.3218-1-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.2

khugepaged has an optimization for !CONFIG_NUMA that reduces huge page
allocation calls by carrying an allocated huge page which failed to
collapse over to the next loop iteration.  The CONFIG_NUMA build doesn't
do this, since the next iteration may try to collapse a huge page from a
different node, so carrying the page there doesn't make much sense.

But when NUMA=n the huge page is allocated by khugepaged_prealloc_page()
before the address space is scanned, which means a huge page may be
allocated even though there is no suitable range to collapse; the page is
then just freed once khugepaged has made enough progress.  This can leave
a NUMA=n run with about 5 times as many thp_collapse_alloc events as a
NUMA=y run.  So the optimization actually makes things worse: the far
more numerous pointless THP allocations defeat its purpose.

This could be fixed by carrying the huge page across scans, but that
would complicate the code further and the huge page might be carried
indefinitely.  Taking a step back, the optimization itself doesn't seem
worth keeping nowadays since:

* Not many users build NUMA=n kernels nowadays, even when the kernel
  actually runs on a non-NUMA machine.  Some small devices may run
  NUMA=n kernels, but I don't think they actually use THP.

* Since commit 44042b449872 ("mm/page_alloc: allow high-order pages to
  be stored on the per-cpu lists"), THPs can be cached on the per-cpu
  lists, which already does much of what this optimization was meant
  to do.
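For context, here is a simplified, illustration-only sketch of the
khugepaged_do_scan() flow before this patch.  It is not part of the diff;
khugepaged_scan_mm_slot() and khugepaged_pages_to_scan are assumed from
upstream mm/khugepaged.c, and locking, rescheduling, and freezing details
are elided.

/*
 * Illustration only: simplified shape of khugepaged_do_scan() before
 * this patch.  khugepaged_scan_mm_slot() and khugepaged_pages_to_scan
 * are assumed from mm/khugepaged.c; many details are omitted.
 */
static void khugepaged_do_scan(void)
{
	struct page *hpage = NULL;	/* the carried preallocation */
	unsigned int progress = 0;
	unsigned int pages = READ_ONCE(khugepaged_pages_to_scan);
	bool wait = true;

	while (progress < pages) {
		/*
		 * NUMA=n only: allocate (or reuse the carried) huge page
		 * up front, before knowing whether any range will turn
		 * out to be suitable for collapsing.
		 */
		if (!khugepaged_prealloc_page(&hpage, &wait))
			break;

		/*
		 * Scan some address space; on collapse failure the hpage
		 * is left in place and carried to the next iteration.
		 */
		progress += khugepaged_scan_mm_slot(pages - progress, &hpage);
	}

	/* A preallocation that was never consumed is simply freed. */
	if (!IS_ERR_OR_NULL(hpage))
		put_page(hpage);
}

With this patch the NUMA=n build takes the same path as NUMA=y: the huge
page is allocated by khugepaged_alloc_page() only after a candidate range
has actually been found, so no preallocation is carried between iterations.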
Shutemov" Signed-off-by: Yang Shi --- mm/khugepaged.c | 74 ++++--------------------------------------------- 1 file changed, 6 insertions(+), 68 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 6b9c98ddcd09..d6beb10e29e2 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -855,6 +855,12 @@ static int khugepaged_find_target_node(void) last_khugepaged_target_node = target_node; return target_node; } +#else +static inline int khugepaged_find_target_node(void) +{ + return 0; +} +#endif static bool khugepaged_prealloc_page(struct page **hpage, bool *wait) { @@ -889,74 +895,6 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node) count_vm_event(THP_COLLAPSE_ALLOC); return *hpage; } -#else -static int khugepaged_find_target_node(void) -{ - return 0; -} - -static inline struct page *alloc_khugepaged_hugepage(void) -{ - struct page *page; - - page = alloc_pages(alloc_hugepage_khugepaged_gfpmask(), - HPAGE_PMD_ORDER); - if (page) - prep_transhuge_page(page); - return page; -} - -static struct page *khugepaged_alloc_hugepage(bool *wait) -{ - struct page *hpage; - - do { - hpage = alloc_khugepaged_hugepage(); - if (!hpage) { - count_vm_event(THP_COLLAPSE_ALLOC_FAILED); - if (!*wait) - return NULL; - - *wait = false; - khugepaged_alloc_sleep(); - } else - count_vm_event(THP_COLLAPSE_ALLOC); - } while (unlikely(!hpage) && likely(khugepaged_enabled())); - - return hpage; -} - -static bool khugepaged_prealloc_page(struct page **hpage, bool *wait) -{ - /* - * If the hpage allocated earlier was briefly exposed in page cache - * before collapse_file() failed, it is possible that racing lookups - * have not yet completed, and would then be unpleasantly surprised by - * finding the hpage reused for the same mapping at a different offset. - * Just release the previous allocation if there is any danger of that. - */ - if (*hpage && page_count(*hpage) > 1) { - put_page(*hpage); - *hpage = NULL; - } - - if (!*hpage) - *hpage = khugepaged_alloc_hugepage(wait); - - if (unlikely(!*hpage)) - return false; - - return true; -} - -static struct page * -khugepaged_alloc_page(struct page **hpage, gfp_t gfp, int node) -{ - VM_BUG_ON(!*hpage); - - return *hpage; -} -#endif /* * If mmap_lock temporarily dropped, revalidate vma