From patchwork Mon Feb 28 23:57:34 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12763909
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, songliubraving@fb.com,
    linmiaohe@huawei.com, riel@surriel.com, willy@infradead.org,
    ziy@nvidia.com, akpm@linux-foundation.org, tytso@mit.edu,
    adilger.kernel@dilger.ca, darrick.wong@oracle.com
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 1/8] sched: coredump.h: clarify the use of MMF_VM_HUGEPAGE
Date: Mon, 28 Feb 2022 15:57:34 -0800
Message-Id: <20220228235741.102941-2-shy828301@gmail.com>
In-Reply-To: <20220228235741.102941-1-shy828301@gmail.com>
References: <20220228235741.102941-1-shy828301@gmail.com>

MMF_VM_HUGEPAGE is set as long as the mm is available to khugepaged via
khugepaged_enter(), not only when VM_HUGEPAGE is set on a vma.
Correct the comment to avoid confusion.

Signed-off-by: Yang Shi
Reviewed-by: Miaohe Lin
---
 include/linux/sched/coredump.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 4d9e3a656875..4d0a5be28b70 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -57,7 +57,8 @@ static inline int get_dumpable(struct mm_struct *mm)
 #endif
 					/* leave room for more dump flags */
 #define MMF_VM_MERGEABLE	16	/* KSM may merge identical pages */
-#define MMF_VM_HUGEPAGE		17	/* set when VM_HUGEPAGE is set on vma */
+#define MMF_VM_HUGEPAGE		17	/* set when mm is available for
+					   khugepaged */
 /*
  * This one-shot flag is dropped due to necessity of changing exe once again
  * on NFS restore

From patchwork Mon Feb 28 23:57:35 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12763910
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, songliubraving@fb.com,
    linmiaohe@huawei.com, riel@surriel.com, willy@infradead.org,
    ziy@nvidia.com, akpm@linux-foundation.org, tytso@mit.edu,
    adilger.kernel@dilger.ca, darrick.wong@oracle.com
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 2/8] mm: khugepaged: remove redundant check for VM_NO_KHUGEPAGED
Date: Mon, 28 Feb 2022 15:57:35 -0800
Message-Id: <20220228235741.102941-3-shy828301@gmail.com>
In-Reply-To: <20220228235741.102941-1-shy828301@gmail.com>
References: <20220228235741.102941-1-shy828301@gmail.com>

The hugepage_vma_check() called by khugepaged_enter_vma_merge() already
checks VM_NO_KHUGEPAGED. Remove the check from the caller and move the
check in hugepage_vma_check() up. A few more checks may now run for
VM_NO_KHUGEPAGED vmas, but MADV_HUGEPAGE is definitely not a hot path,
so the cleaner code outweighs the extra work.

Signed-off-by: Yang Shi
Reviewed-by: Miaohe Lin
---
 mm/khugepaged.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 131492fd1148..82c71c6da9ce 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -366,8 +366,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
 		 * register it here without waiting a page fault that
 		 * may not happen any time soon.
 		 */
-		if (!(*vm_flags & VM_NO_KHUGEPAGED) &&
-				khugepaged_enter_vma_merge(vma, *vm_flags))
+		if (khugepaged_enter_vma_merge(vma, *vm_flags))
 			return -ENOMEM;
 		break;
 	case MADV_NOHUGEPAGE:
@@ -446,6 +445,9 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
 	if (!transhuge_vma_enabled(vma, vm_flags))
 		return false;
 
+	if (vm_flags & VM_NO_KHUGEPAGED)
+		return false;
+
 	if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
 				vma->vm_pgoff, HPAGE_PMD_NR))
 		return false;
@@ -471,7 +473,8 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
 		return false;
 	if (vma_is_temporary_stack(vma))
 		return false;
-	return !(vm_flags & VM_NO_KHUGEPAGED);
+
+	return true;
 }
 
 int __khugepaged_enter(struct mm_struct *mm)

From patchwork Mon Feb 28 23:57:36 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12763911
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, songliubraving@fb.com,
    linmiaohe@huawei.com, riel@surriel.com, willy@infradead.org,
    ziy@nvidia.com, akpm@linux-foundation.org, tytso@mit.edu,
    adilger.kernel@dilger.ca, darrick.wong@oracle.com
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 3/8] mm: khugepaged: skip DAX vma
Date: Mon, 28 Feb 2022 15:57:36 -0800
Message-Id: <20220228235741.102941-4-shy828301@gmail.com>
In-Reply-To: <20220228235741.102941-1-shy828301@gmail.com>
References: <20220228235741.102941-1-shy828301@gmail.com>

A DAX vma may be seen by khugepaged when the mm has other vmas suitable
for khugepaged. Khugepaged then tries to collapse THP for the DAX vma,
but the attempt fails the page sanity checks (for example, the page is
not on the LRU). So it is not harmful, but it is definitely pointless to
run khugepaged against a DAX vma; skip it in the early check.
Signed-off-by: Yang Shi
Reviewed-by: Miaohe Lin
---
 mm/khugepaged.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 82c71c6da9ce..a0e4fa33660e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -448,6 +448,10 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
 	if (vm_flags & VM_NO_KHUGEPAGED)
 		return false;
 
+	/* Don't run khugepaged against DAX vma */
+	if (vma_is_dax(vma))
+		return false;
+
 	if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
 				vma->vm_pgoff, HPAGE_PMD_NR))
 		return false;

From patchwork Mon Feb 28 23:57:37 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12763912
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, songliubraving@fb.com,
    linmiaohe@huawei.com, riel@surriel.com, willy@infradead.org,
    ziy@nvidia.com, akpm@linux-foundation.org, tytso@mit.edu,
    adilger.kernel@dilger.ca, darrick.wong@oracle.com
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 4/8] mm: thp: only regular file could be THP eligible
Date: Mon, 28 Feb 2022 15:57:37 -0800
Message-Id: <20220228235741.102941-5-shy828301@gmail.com>
In-Reply-To: <20220228235741.102941-1-shy828301@gmail.com>
References: <20220228235741.102941-1-shy828301@gmail.com>

Since commit a4aeaa06d45e ("mm: khugepaged: skip huge page collapse for
special files"), khugepaged only collapses THP for regular files, which
is the intended use case for read-only fs THP. Only show regular files
as THP eligible accordingly. Also make file_thp_enabled() available to
khugepaged in order to remove duplicated code.
Signed-off-by: Yang Shi
Reported-by: kernel test robot
Reported-by: Dan Carpenter
---
 include/linux/huge_mm.h |  9 +++++++++
 mm/huge_memory.c        | 11 ++---------
 mm/khugepaged.c         |  9 ++-------
 3 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e4c18ba8d3bf..e6d867f72458 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -172,6 +172,15 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 	return false;
 }
 
+static inline bool file_thp_enabled(struct vm_area_struct *vma)
+{
+	struct inode *inode = vma->vm_file->f_inode;
+
+	return (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS)) && vma->vm_file &&
+	       (vma->vm_flags & VM_EXEC) &&
+	       !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
+}
+
 bool transparent_hugepage_active(struct vm_area_struct *vma);
 
 #define transparent_hugepage_use_zero_page()				\
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 406a3c28c026..a87b3df63209 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -64,13 +64,6 @@ static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
 unsigned long huge_zero_pfn __read_mostly = ~0UL;
 
-static inline bool file_thp_enabled(struct vm_area_struct *vma)
-{
-	return transhuge_vma_enabled(vma, vma->vm_flags) && vma->vm_file &&
-	       !inode_is_open_for_write(vma->vm_file->f_inode) &&
-	       (vma->vm_flags & VM_EXEC);
-}
-
 bool transparent_hugepage_active(struct vm_area_struct *vma)
 {
 	/* The addr is used to check if the vma size fits */
@@ -82,8 +75,8 @@ bool transparent_hugepage_active(struct vm_area_struct *vma)
 		return __transparent_hugepage_enabled(vma);
 	if (vma_is_shmem(vma))
 		return shmem_huge_enabled(vma);
-	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
-		return file_thp_enabled(vma);
+	if (transhuge_vma_enabled(vma, vma->vm_flags) && file_thp_enabled(vma))
+		return true;
 
 	return false;
 }
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a0e4fa33660e..3dbac3e23f43 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -465,13 +465,8 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
 		return false;
 
 	/* Only regular file is valid */
-	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file &&
-	    (vm_flags & VM_EXEC)) {
-		struct inode *inode = vma->vm_file->f_inode;
-
-		return !inode_is_open_for_write(inode) &&
-			S_ISREG(inode->i_mode);
-	}
+	if (file_thp_enabled(vma))
+		return true;
 
 	if (!vma->anon_vma || vma->vm_ops)
 		return false;

From patchwork Mon Feb 28 23:57:38 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12763913
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, songliubraving@fb.com,
    linmiaohe@huawei.com, riel@surriel.com, willy@infradead.org,
    ziy@nvidia.com, akpm@linux-foundation.org, tytso@mit.edu,
    adilger.kernel@dilger.ca, darrick.wong@oracle.com
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 5/8] mm: khugepaged: make khugepaged_enter() void function
Date: Mon, 28 Feb 2022 15:57:38 -0800
Message-Id: <20220228235741.102941-6-shy828301@gmail.com>
In-Reply-To: <20220228235741.102941-1-shy828301@gmail.com>
References: <20220228235741.102941-1-shy828301@gmail.com>

Most callers of khugepaged_enter() don't care about the return value.
Only dup_mmap(), the anonymous THP page fault path, and MADV_HUGEPAGE
handle the error by returning -ENOMEM. Actually, it is not harmful for
them to ignore the error either. It also seems overkill to fail fork()
or a page fault early due to a khugepaged_enter() error, and
MADV_HUGEPAGE sets the VM_HUGEPAGE flag regardless of the error.
Signed-off-by: Yang Shi
---
 include/linux/khugepaged.h | 30 ++++++++++++------------------
 kernel/fork.c              |  4 +---
 mm/huge_memory.c           |  4 ++--
 mm/khugepaged.c            | 18 +++++++-----------
 4 files changed, 22 insertions(+), 34 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 2fcc01891b47..0423d3619f26 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -12,10 +12,10 @@ extern struct attribute_group khugepaged_attr_group;
 extern int khugepaged_init(void);
 extern void khugepaged_destroy(void);
 extern int start_stop_khugepaged(void);
-extern int __khugepaged_enter(struct mm_struct *mm);
+extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
-extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-				      unsigned long vm_flags);
+extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+				       unsigned long vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
 #ifdef CONFIG_SHMEM
 extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
@@ -40,11 +40,10 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
 	(transparent_hugepage_flags &				\
 	 (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG))
 
-static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
 	if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags))
-		return __khugepaged_enter(mm);
-	return 0;
+		__khugepaged_enter(mm);
 }
 
 static inline void khugepaged_exit(struct mm_struct *mm)
@@ -53,7 +52,7 @@ static inline void khugepaged_exit(struct mm_struct *mm)
 		__khugepaged_exit(mm);
 }
 
-static inline int khugepaged_enter(struct vm_area_struct *vma,
+static inline void khugepaged_enter(struct vm_area_struct *vma,
 				   unsigned long vm_flags)
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
@@ -62,27 +61,22 @@ static inline int khugepaged_enter(struct vm_area_struct *vma,
 		    (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) &&
 		    !(vm_flags & VM_NOHUGEPAGE) &&
 		    !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
-			if (__khugepaged_enter(vma->vm_mm))
-				return -ENOMEM;
-	return 0;
+			__khugepaged_enter(vma->vm_mm);
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
-static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
-	return 0;
 }
 static inline void khugepaged_exit(struct mm_struct *mm)
 {
 }
-static inline int khugepaged_enter(struct vm_area_struct *vma,
-				   unsigned long vm_flags)
+static inline void khugepaged_enter(struct vm_area_struct *vma,
+				    unsigned long vm_flags)
 {
-	return 0;
 }
-static inline int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-					     unsigned long vm_flags)
+static inline void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+					      unsigned long vm_flags)
 {
-	return 0;
 }
 static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
 					   unsigned long addr)
diff --git a/kernel/fork.c b/kernel/fork.c
index a024bf6254df..dc85418c426a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -523,9 +523,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	retval = ksm_fork(mm, oldmm);
 	if (retval)
 		goto out;
-	retval = khugepaged_fork(mm, oldmm);
-	if (retval)
-		goto out;
+	khugepaged_fork(mm, oldmm);
 
 	prev = NULL;
 	for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a87b3df63209..ec2490d6af09 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -725,8 +725,8 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		return VM_FAULT_FALLBACK;
 	if (unlikely(anon_vma_prepare(vma)))
 		return VM_FAULT_OOM;
-	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
-		return VM_FAULT_OOM;
+	khugepaged_enter(vma, vma->vm_flags);
+
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
 			!mm_forbids_zeropage(vma->vm_mm) &&
 			transparent_hugepage_use_zero_page()) {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 3dbac3e23f43..b87af297e652 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -366,8 +366,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
 		 * register it here without waiting a page fault that
 		 * may not happen any time soon.
 		 */
-		if (khugepaged_enter_vma_merge(vma, *vm_flags))
-			return -ENOMEM;
+		khugepaged_enter_vma_merge(vma, *vm_flags);
 		break;
 	case MADV_NOHUGEPAGE:
 		*vm_flags &= ~VM_HUGEPAGE;
@@ -476,20 +475,20 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
 	return true;
 }
 
-int __khugepaged_enter(struct mm_struct *mm)
+void __khugepaged_enter(struct mm_struct *mm)
 {
 	struct mm_slot *mm_slot;
 	int wakeup;
 
 	mm_slot = alloc_mm_slot();
 	if (!mm_slot)
-		return -ENOMEM;
+		return;
 
 	/* __khugepaged_exit() must not run from under us */
 	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
 	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
 		free_mm_slot(mm_slot);
-		return 0;
+		return;
 	}
 
 	spin_lock(&khugepaged_mm_lock);
@@ -505,11 +504,9 @@ int __khugepaged_enter(struct mm_struct *mm)
 	mmgrab(mm);
 	if (wakeup)
 		wake_up_interruptible(&khugepaged_wait);
-
-	return 0;
 }
 
-int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
 			       unsigned long vm_flags)
 {
 	unsigned long hstart, hend;
@@ -520,13 +517,12 @@ int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
 	 * file-private shmem THP is not supported.
 	 */
 	if (!hugepage_vma_check(vma, vm_flags))
-		return 0;
+		return;
 
 	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
 	hend = vma->vm_end & HPAGE_PMD_MASK;
 	if (hstart < hend)
-		return khugepaged_enter(vma, vm_flags);
-	return 0;
+		khugepaged_enter(vma, vm_flags);
 }
 
 void __khugepaged_exit(struct mm_struct *mm)

From patchwork Mon Feb 28 23:57:39 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12763914
From: Yang Shi <shy828301@gmail.com>
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, songliubraving@fb.com,
 linmiaohe@huawei.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com,
 akpm@linux-foundation.org, tytso@mit.edu, adilger.kernel@dilger.ca,
 darrick.wong@oracle.com
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH 6/8] mm: khugepaged: move some khugepaged_* functions to khugepaged.c
Date: Mon, 28 Feb 2022 15:57:39 -0800
Message-Id: <20220228235741.102941-7-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.3
In-Reply-To: <20220228235741.102941-1-shy828301@gmail.com>
References: <20220228235741.102941-1-shy828301@gmail.com>

This move makes the following patches easier. They will call
khugepaged_enter() for regular filesystems to make readonly FS THP
collapse more consistent, and those call sites need macros defined in
huge_mm.h (for example, HPAGE_PMD_*), but it is not preferred to pollute
filesystems code by including unnecessary header files. With this move
the filesystems code just needs to include khugepaged.h, which is quite
small and specific, to call khugepaged_enter() to hook an mm up with
khugepaged.

The khugepaged_* functions are merely wrappers around non-inline
functions anyway, so there is little benefit to keeping them inline.
This also helps reuse hugepage_vma_check() in khugepaged_enter() so
that some duplicate checks can be removed.
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/khugepaged.h | 33 ++++++---------------------------
 mm/khugepaged.c            | 20 ++++++++++++++++++++
 2 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 0423d3619f26..54e169116d49 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -16,6 +16,12 @@ extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
 extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
 				       unsigned long vm_flags);
+extern void khugepaged_fork(struct mm_struct *mm,
+			    struct mm_struct *oldmm);
+extern void khugepaged_exit(struct mm_struct *mm);
+extern void khugepaged_enter(struct vm_area_struct *vma,
+			     unsigned long vm_flags);
+
 extern void khugepaged_min_free_kbytes_update(void);
 #ifdef CONFIG_SHMEM
 extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
@@ -33,36 +39,9 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
 #define khugepaged_always()				\
 	(transparent_hugepage_flags &			\
 	 (1<<TRANSPARENT_HUGEPAGE_FLAG))
-#define khugepaged_req_madv()					\
-	(transparent_hugepage_flags &				\
-	 (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG))
 #define khugepaged_defrag()					\
 	(transparent_hugepage_flags &				\
 	 (1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG))
 
-static inline void khugepaged_fork(struct mm_struct *mm,
-				   struct mm_struct *oldmm)
-{
-	if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags))
-		__khugepaged_enter(mm);
-}
-
-static inline void khugepaged_exit(struct mm_struct *mm)
-{
-	if (test_bit(MMF_VM_HUGEPAGE, &mm->flags))
-		__khugepaged_exit(mm);
-}
-
-static inline void khugepaged_enter(struct vm_area_struct *vma,
-				    unsigned long vm_flags)
-{
-	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
-		if ((khugepaged_always() ||
-		     (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) ||
-		     (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) &&
-		    !(vm_flags & VM_NOHUGEPAGE) &&
-		    !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
-			__khugepaged_enter(vma->vm_mm);
-}
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b87af297e652..4cb4379ecf25 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -557,6 +557,26 @@ void __khugepaged_exit(struct mm_struct *mm)
 	}
 }
 
+void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+{
+	if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags))
+		__khugepaged_enter(mm);
+}
+
+void khugepaged_exit(struct mm_struct *mm)
+{
+	if (test_bit(MMF_VM_HUGEPAGE, &mm->flags))
+		__khugepaged_exit(mm);
+}
+
+void khugepaged_enter(struct vm_area_struct *vma, unsigned long vm_flags)
+{
+	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
+	    khugepaged_enabled())
+		if (hugepage_vma_check(vma, vm_flags))
+			__khugepaged_enter(vma->vm_mm);
+}
+
 static void release_pte_page(struct page *page)
 {
 	mod_node_page_state(page_pgdat(page),

From patchwork Mon Feb 28 23:57:40 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12763915
From: Yang Shi <shy828301@gmail.com>
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, songliubraving@fb.com,
 linmiaohe@huawei.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com,
 akpm@linux-foundation.org, tytso@mit.edu, adilger.kernel@dilger.ca,
 darrick.wong@oracle.com
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH 7/8] mm: khugepaged: introduce khugepaged_enter_file() helper
Date: Mon, 28 Feb 2022 15:57:40 -0800
Message-Id: <20220228235741.102941-8-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.3
In-Reply-To: <20220228235741.102941-1-shy828301@gmail.com>
References: <20220228235741.102941-1-shy828301@gmail.com>

The following patch will have filesystems code call
khugepaged_enter_file() to make readonly FS THP collapse more
consistent. Extract the current implementation used by shmem into a
khugepaged_enter_file() helper so that it can be reused by other
filesystems, and export the symbol for modules.
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 include/linux/khugepaged.h |  6 ++++++
 mm/khugepaged.c            | 11 +++++++++++
 mm/shmem.c                 | 14 ++++----------
 3 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 54e169116d49..06464e9a1f91 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -21,6 +21,8 @@ extern void khugepaged_fork(struct mm_struct *mm,
 extern void khugepaged_exit(struct mm_struct *mm);
 extern void khugepaged_enter(struct vm_area_struct *vma,
 			     unsigned long vm_flags);
+extern void khugepaged_enter_file(struct vm_area_struct *vma,
+				  unsigned long vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
 #ifdef CONFIG_SHMEM
@@ -53,6 +55,10 @@ static inline void khugepaged_enter(struct vm_area_struct *vma,
 				    unsigned long vm_flags)
 {
 }
+static inline void khugepaged_enter_file(struct vm_area_struct *vma,
+					 unsigned long vm_flags)
+{
+}
 static inline void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
 					      unsigned long vm_flags)
 {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4cb4379ecf25..93c9072983e2 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -577,6 +577,17 @@ void khugepaged_enter(struct vm_area_struct *vma, unsigned long vm_flags)
 		__khugepaged_enter(vma->vm_mm);
 }
 
+void khugepaged_enter_file(struct vm_area_struct *vma, unsigned long vm_flags)
+{
+	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
+	    khugepaged_enabled() &&
+	    (((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
+	     (vma->vm_end & HPAGE_PMD_MASK)))
+		if (hugepage_vma_check(vma, vm_flags))
+			__khugepaged_enter(vma->vm_mm);
+}
+EXPORT_SYMBOL_GPL(khugepaged_enter_file);
+
 static void release_pte_page(struct page *page)
 {
 	mod_node_page_state(page_pgdat(page),
diff --git a/mm/shmem.c b/mm/shmem.c
index a09b29ec2b45..c2346e5d2b24 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2233,11 +2233,9 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 
 	file_accessed(file);
 	vma->vm_ops = &shmem_vm_ops;
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-	    ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
-	    (vma->vm_end & HPAGE_PMD_MASK)) {
-		khugepaged_enter(vma, vma->vm_flags);
-	}
+
+	khugepaged_enter_file(vma, vma->vm_flags);
+
 	return 0;
 }
 
@@ -4132,11 +4130,7 @@ int shmem_zero_setup(struct vm_area_struct *vma)
 	vma->vm_file = file;
 	vma->vm_ops = &shmem_vm_ops;
 
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-	    ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
-	    (vma->vm_end & HPAGE_PMD_MASK)) {
-		khugepaged_enter(vma, vma->vm_flags);
-	}
+	khugepaged_enter_file(vma, vma->vm_flags);
 
 	return 0;
 }

From patchwork Mon Feb 28 23:57:41 2022
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12763916
From: Yang Shi <shy828301@gmail.com>
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, songliubraving@fb.com,
 linmiaohe@huawei.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com,
 akpm@linux-foundation.org, tytso@mit.edu, adilger.kernel@dilger.ca,
 darrick.wong@oracle.com
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH 8/8] fs: register suitable readonly vmas for khugepaged
Date: Mon, 28 Feb 2022 15:57:41 -0800
Message-Id: <20220228235741.102941-9-shy828301@gmail.com>
X-Mailer: git-send-email 2.26.3
In-Reply-To: <20220228235741.102941-1-shy828301@gmail.com>
References: <20220228235741.102941-1-shy828301@gmail.com>

The readonly FS THP relies on khugepaged to collapse THPs for suitable
vmas. But it is kind of "random luck" for khugepaged to see the
readonly FS vmas
(https://lore.kernel.org/linux-mm/00f195d4-d039-3cf2-d3a1-a2c88de397a0@suse.cz/)
since currently the vmas are only registered with khugepaged on:

  - anonymous huge pmd page fault
  - VMA merge
  - MADV_HUGEPAGE
  - shmem mmap

If none of the above happens, khugepaged will not see readonly FS vmas
at all, even though it is enabled. MADV_HUGEPAGE can be specified
explicitly to tell khugepaged to collapse an area, but when the
khugepaged mode is "always" it should scan any suitable vma as long as
VM_NOHUGEPAGE is not set. So register readonly FS vmas with khugepaged
to make the behavior more consistent. Registering the vmas in the mmap
path is preferable from a performance point of view, since the page
fault path is definitely a hot path.
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 fs/ext4/file.c    | 4 ++++
 fs/xfs/xfs_file.c | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 8cc11715518a..b894cd5aff44 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -30,6 +30,7 @@
 #include <linux/uio.h>
 #include <linux/mman.h>
 #include <linux/backing-dev.h>
+#include <linux/khugepaged.h>
 #include "ext4.h"
 #include "ext4_jbd2.h"
 #include "xattr.h"
@@ -782,6 +783,9 @@ static int ext4_file_mmap(struct file *file, struct vm_area_struct *vma)
 	} else {
 		vma->vm_ops = &ext4_file_vm_ops;
 	}
+
+	khugepaged_enter_file(vma, vma->vm_flags);
+
 	return 0;
 }
 
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 5bddb1e9e0b3..d94144b1fb0f 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -30,6 +30,7 @@
 #include <linux/mman.h>
 #include <linux/fadvise.h>
 #include <linux/mount.h>
+#include <linux/khugepaged.h>
 
 static const struct vm_operations_struct xfs_file_vm_ops;
 
@@ -1407,6 +1408,9 @@ xfs_file_mmap(
 	vma->vm_ops = &xfs_file_vm_ops;
 	if (IS_DAX(inode))
 		vma->vm_flags |= VM_HUGEPAGE;
+
+	khugepaged_enter_file(vma, vma->vm_flags);
+
 	return 0;
 }