From patchwork Tue Jul 2 14:46:14 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13719872
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Jonathan Corbet, David Hildenbrand, Barry Song, Baolin Wang, Lance Yang, Yang Shi
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: [PATCH v1] mm: Fix khugepaged activation policy
Date: Tue, 2 Jul 2024 15:46:14 +0100
Message-ID: <20240702144617.2291480-1-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
Since the introduction of mTHP, the documentation has stated that
khugepaged would be enabled when any mTHP size is enabled, and disabled
when all mTHP sizes are disabled. There are two problems with this:
(1) this is not what was implemented by the code, and (2) this is not
the desirable behavior.

Desirable behavior is for khugepaged to be enabled when any PMD-sized
THP is enabled, anon or file. (Note that file THP is still controlled
by the top-level control so we must always consider that, as well as
the PMD-size mTHP control for anon).
khugepaged only supports collapsing to PMD-sized THP, so there is no
value in enabling it when PMD-sized THP is disabled. So let's change
the code and documentation to reflect this policy.

Further, per-size enabled control modification events were not
previously forwarded to khugepaged to give it an opportunity to start
or stop. Consequently, the following sequence resulted in khugepaged
erroneously not being activated:

  echo never > /sys/kernel/mm/transparent_hugepage/enabled
  echo always > /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Fixes: 3485b88390b0 ("mm: thp: introduce multi-size THP sysfs interface")
Closes: https://lore.kernel.org/linux-mm/7a0bbe69-1e3d-4263-b206-da007791a5c4@redhat.com/
Cc: stable@vger.kernel.org
---

Hi All,

Applies on top of today's mm-unstable (9bb8753acdd8). No regressions
observed in mm selftests.

While fixing this I also noticed that khugepaged is not (and never has
been) activated/deactivated by `shmem_enabled=`. I'm not sure if
khugepaged knows how to collapse shmem - perhaps it should be activated
in this case?
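For reviewers, the intended policy can be sketched as a tiny userspace model (illustrative Python only; `khugepaged_should_run` is a hypothetical name, this is not kernel code):

```python
def khugepaged_should_run(top_level: str, pmd_size_anon: str) -> bool:
    """Model of the intended hugepage_pmd_enabled() policy: khugepaged
    should be active iff PMD-sized THP can be allocated, anon or file."""
    # File THP is governed by the top-level control alone.
    global_enabled = top_level in ("always", "madvise")
    # PMD-sized anon THP: enabled directly, or via "inherit" from an
    # enabled top-level control.
    if pmd_size_anon in ("always", "madvise"):
        return True
    if pmd_size_anon == "inherit" and global_enabled:
        return True
    return global_enabled

# The sequence from the commit message must leave khugepaged running:
# top-level "never", but the PMD-size anon control set to "always".
assert khugepaged_should_run("never", "always") is True

# With every PMD-sized control off, khugepaged should be stopped.
assert khugepaged_should_run("never", "never") is False
```

In particular, non-PMD mTHP sizes do not appear in the model at all: enabling only, say, 64K mTHP should no longer keep khugepaged alive.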
Thanks,
Ryan

 Documentation/admin-guide/mm/transhuge.rst | 11 +++++------
 include/linux/huge_mm.h                    | 13 +++++++------
 mm/huge_memory.c                           |  7 +++++++
 mm/khugepaged.c                            | 13 ++++++-------
 4 files changed, 25 insertions(+), 19 deletions(-)

--
2.43.0

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 709fe10b60f4..fc321d40b8ac 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -202,12 +202,11 @@ PMD-mappable transparent hugepage::

	cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size

-khugepaged will be automatically started when one or more hugepage
-sizes are enabled (either by directly setting "always" or "madvise",
-or by setting "inherit" while the top-level enabled is set to "always"
-or "madvise"), and it'll be automatically shutdown when the last
-hugepage size is disabled (either by directly setting "never", or by
-setting "inherit" while the top-level enabled is set to "never").
+khugepaged will be automatically started when PMD-sized THP is enabled
+(either of the per-size anon control or the top-level control are set
+to "always" or "madvise"), and it'll be automatically shutdown when
+PMD-sized THP is disabled (when both the per-size anon control and the
+top-level control are "never")

 Khugepaged controls
 -------------------

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4d155c7a4792..ce1b47b49cc3 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -128,16 +128,17 @@ static inline bool hugepage_global_always(void)
 			(1<<TRANSPARENT_HUGEPAGE_FLAG);
 }

-static inline bool hugepage_flags_enabled(void)
+static inline bool hugepage_pmd_enabled(void)
 {
 	/*
 	 * We cover both the anon and the file-backed case here; file-backed
 	 * hugepages, when configured in, are determined by the global control.
 	 * Anon pmd-sized hugepages are determined by the pmd-size control.
 	 */
 	return hugepage_global_enabled() ||
-	       READ_ONCE(huge_anon_orders_always) ||
-	       READ_ONCE(huge_anon_orders_madvise) ||
-	       (READ_ONCE(huge_anon_orders_inherit) &&
+	       test_bit(PMD_ORDER, &huge_anon_orders_always) ||
+	       test_bit(PMD_ORDER, &huge_anon_orders_madvise) ||
+	       (test_bit(PMD_ORDER, &huge_anon_orders_inherit) &&
 			hugepage_global_enabled());
 }

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -502,6 +502,13 @@ static ssize_t thpsize_enabled_store(struct kobject *kobj,
 	} else
 		ret = -EINVAL;

+	if (ret > 0) {
+		int err;
+
+		err = start_stop_khugepaged();
+		if (err)
+			ret = err;
+	}
 	return ret;
 }

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 409f67a817f1..708d0e74b61f 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -449,7 +449,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 			  unsigned long vm_flags)
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
-	    hugepage_flags_enabled()) {
+	    hugepage_pmd_enabled()) {
 		if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
 					    PMD_ORDER))
 			__khugepaged_enter(vma->vm_mm);
@@ -2462,8 +2462,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,

 static int khugepaged_has_work(void)
 {
-	return !list_empty(&khugepaged_scan.mm_head) &&
-	       hugepage_flags_enabled();
+	return !list_empty(&khugepaged_scan.mm_head) && hugepage_pmd_enabled();
 }

 static int khugepaged_wait_event(void)
@@ -2536,7 +2535,7 @@ static void khugepaged_wait_work(void)
 		return;
 	}

-	if (hugepage_flags_enabled())
+	if (hugepage_pmd_enabled())
 		wait_event_freezable(khugepaged_wait, khugepaged_wait_event());
 }

@@ -2567,7 +2566,7 @@ static void set_recommended_min_free_kbytes(void)
 	int nr_zones = 0;
 	unsigned long recommended_min;

-	if (!hugepage_flags_enabled()) {
+	if (!hugepage_pmd_enabled()) {
 		calculate_min_free_kbytes();
 		goto update_wmarks;
 	}
@@ -2617,7 +2616,7 @@ int start_stop_khugepaged(void)
 	int err = 0;
 	mutex_lock(&khugepaged_mutex);
-	if (hugepage_flags_enabled()) {
+	if (hugepage_pmd_enabled()) {
 		if (!khugepaged_thread)
 			khugepaged_thread = kthread_run(khugepaged, NULL,
 							"khugepaged");
@@ -2643,7 +2642,7 @@ int start_stop_khugepaged(void)
 void khugepaged_min_free_kbytes_update(void)
 {
 	mutex_lock(&khugepaged_mutex);
-	if (hugepage_flags_enabled() && khugepaged_thread)
+	if (hugepage_pmd_enabled() && khugepaged_thread)
 		set_recommended_min_free_kbytes();
 	mutex_unlock(&khugepaged_mutex);
 }