From patchwork Tue May 10 20:32:15 2022
X-Patchwork-Id: 12845487
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com,
    songliubraving@fb.com, riel@surriel.com, willy@infradead.org,
    ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [v4 PATCH 1/8] sched: coredump.h: clarify the use of MMF_VM_HUGEPAGE
Date: Tue, 10 May 2022 13:32:15 -0700
Message-Id: <20220510203222.24246-2-shy828301@gmail.com>
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

MMF_VM_HUGEPAGE is set by khugepaged_enter() as long as the mm is
available for khugepaged, not only when VM_HUGEPAGE is set on a vma.
Correct the comment to avoid confusion.

Reviewed-by: Miaohe Lin
Acked-by: Song Liu
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 include/linux/sched/coredump.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 4d9e3a656875..4d0a5be28b70 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -57,7 +57,8 @@ static inline int get_dumpable(struct mm_struct *mm)
 #endif
                                         /* leave room for more dump flags */
 #define MMF_VM_MERGEABLE        16      /* KSM may merge identical pages */
-#define MMF_VM_HUGEPAGE         17      /* set when VM_HUGEPAGE is set on vma */
+#define MMF_VM_HUGEPAGE         17      /* set when mm is available for
+                                           khugepaged */
 /*
  * This one-shot flag is dropped due to necessity of changing exe once again
  * on NFS restore
From patchwork Tue May 10 20:32:16 2022
X-Patchwork-Id: 12845488
From: Yang Shi
Subject: [v4 PATCH 2/8] mm: khugepaged: remove redundant check for VM_NO_KHUGEPAGED
Date: Tue, 10 May 2022 13:32:16 -0700
Message-Id: <20220510203222.24246-3-shy828301@gmail.com>
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

The hugepage_vma_check() called by khugepaged_enter_vma_merge() already
checks VM_NO_KHUGEPAGED.  Remove the check from the caller and move the
check in hugepage_vma_check() up.  A few more checks may now be run for
VM_NO_KHUGEPAGED vmas, but MADV_HUGEPAGE is definitely not a hot path,
so the cleaner code outweighs the extra work.

Reviewed-by: Miaohe Lin
Acked-by: Song Liu
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 mm/khugepaged.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 76c4ad60b9a9..dc8849d9dde4 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -365,8 +365,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
                 * register it here without waiting a page fault that
                 * may not happen any time soon.
                 */
-               if (!(*vm_flags & VM_NO_KHUGEPAGED) &&
-                               khugepaged_enter_vma_merge(vma, *vm_flags))
+               if (khugepaged_enter_vma_merge(vma, *vm_flags))
                        return -ENOMEM;
                break;
        case MADV_NOHUGEPAGE:
@@ -445,6 +444,9 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
        if (!transhuge_vma_enabled(vma, vm_flags))
                return false;
 
+       if (vm_flags & VM_NO_KHUGEPAGED)
+               return false;
+
        if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
                                vma->vm_pgoff, HPAGE_PMD_NR))
                return false;
@@ -470,7 +472,8 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
                return false;
        if (vma_is_temporary_stack(vma))
                return false;
-       return !(vm_flags & VM_NO_KHUGEPAGED);
+
+       return true;
 }
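For context, here is a minimal userspace sketch (not part of the patch) of
the MADV_HUGEPAGE path that hugepage_madvise() handles.  The 16 MiB mapping
size and the 4 KiB touch stride are illustrative assumptions; madvise(2)
with MADV_HUGEPAGE and mmap(2) are standard Linux interfaces.

/*
 * Sketch only: opt an anonymous mapping into THP with MADV_HUGEPAGE,
 * which reaches hugepage_madvise() -> khugepaged_enter_vma_merge()
 * in the kernel.  Sizes below are arbitrary example values.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 16UL << 20;        /* 16 MiB, example size */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED) {
                perror("mmap");
                return EXIT_FAILURE;
        }

        /* Hint that this range should use transparent huge pages. */
        if (madvise(buf, len, MADV_HUGEPAGE))
                perror("madvise(MADV_HUGEPAGE)");

        /* Touch the memory so khugepaged has something to collapse. */
        for (size_t i = 0; i < len; i += 4096)
                ((char *)buf)[i] = 1;

        munmap(buf, len);
        return 0;
}

After the madvise() call the vma carries VM_HUGEPAGE and the mm is
registered with khugepaged via the code path changed in the hunk above,
so khugepaged may later collapse the touched range into a THP.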
From patchwork Tue May 10 20:32:17 2022
X-Patchwork-Id: 12845490
From: Yang Shi
Subject: [v4 PATCH 3/8] mm: khugepaged: skip DAX vma
Date: Tue, 10 May 2022 13:32:17 -0700
Message-Id: <20220510203222.24246-4-shy828301@gmail.com>
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

A DAX vma may be seen by khugepaged when the mm has other
khugepaged-suitable vmas, so khugepaged may try to collapse THP for the
DAX vma.  The collapse will fail due to the page sanity checks (for
example, the page is not on the LRU), so it is not harmful, but it is
definitely pointless to run khugepaged against a DAX vma.  Skip it in
the early check.

Reviewed-by: Miaohe Lin
Acked-by: Song Liu
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 mm/khugepaged.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index dc8849d9dde4..a2380d88c3ea 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -447,6 +447,10 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
        if (vm_flags & VM_NO_KHUGEPAGED)
                return false;
 
+       /* Don't run khugepaged against DAX vma */
+       if (vma_is_dax(vma))
+               return false;
+
        if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
                                vma->vm_pgoff, HPAGE_PMD_NR))
                return false;

From patchwork Tue May 10 20:32:18 2022
X-Patchwork-Id: 12845489
From: Yang Shi
Subject: [v4 PATCH 4/8] mm: thp: only regular file could be THP eligible
Date: Tue, 10 May 2022 13:32:18 -0700
Message-Id: <20220510203222.24246-5-shy828301@gmail.com>
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

Since commit a4aeaa06d45e ("mm: khugepaged: skip huge page collapse for
special files"), khugepaged only collapses THP for regular files, which
is the intended use case for readonly FS THP.  Only show regular files
as THP-eligible accordingly, and make file_thp_enabled() available to
khugepaged too in order to remove the duplicate code.
Acked-by: Song Liu
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 include/linux/huge_mm.h | 14 ++++++++++++++
 mm/huge_memory.c        | 11 ++---------
 mm/khugepaged.c         |  9 ++-------
 3 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index fbf36bb1be22..de29821231c9 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -173,6 +173,20 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
        return false;
 }
 
+static inline bool file_thp_enabled(struct vm_area_struct *vma)
+{
+       struct inode *inode;
+
+       if (!vma->vm_file)
+               return false;
+
+       inode = vma->vm_file->f_inode;
+
+       return (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS)) &&
+              (vma->vm_flags & VM_EXEC) &&
+              !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
+}
+
 bool transparent_hugepage_active(struct vm_area_struct *vma);
 
 #define transparent_hugepage_use_zero_page()                           \
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d0c26a3b3b17..82434a9d4499 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -69,13 +69,6 @@ static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
 unsigned long huge_zero_pfn __read_mostly = ~0UL;
 
-static inline bool file_thp_enabled(struct vm_area_struct *vma)
-{
-       return transhuge_vma_enabled(vma, vma->vm_flags) && vma->vm_file &&
-              !inode_is_open_for_write(vma->vm_file->f_inode) &&
-              (vma->vm_flags & VM_EXEC);
-}
-
 bool transparent_hugepage_active(struct vm_area_struct *vma)
 {
        /* The addr is used to check if the vma size fits */
@@ -87,8 +80,8 @@ bool transparent_hugepage_active(struct vm_area_struct *vma)
                return __transparent_hugepage_enabled(vma);
        if (vma_is_shmem(vma))
                return shmem_huge_enabled(vma);
-       if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
-               return file_thp_enabled(vma);
+       if (transhuge_vma_enabled(vma, vma->vm_flags) && file_thp_enabled(vma))
+               return true;
 
        return false;
 }
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a2380d88c3ea..c0d3215008ba 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -464,13 +464,8 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
                return false;
 
        /* Only regular file is valid */
-       if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file &&
-           (vm_flags & VM_EXEC)) {
-               struct inode *inode = vma->vm_file->f_inode;
-
-               return !inode_is_open_for_write(inode) &&
-                       S_ISREG(inode->i_mode);
-       }
+       if (file_thp_enabled(vma))
+               return true;
 
        if (!vma->anon_vma || !vma_is_anonymous(vma))
                return false;
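As a rough userspace illustration of the conditions file_thp_enabled()
encodes (a regular file, not open for write, mapped with execute
permission, on a kernel built with CONFIG_READ_ONLY_THP_FOR_FS), here is a
hedged sketch; "/usr/bin/ls" is only a placeholder for some read-only
executable file, and nothing here is taken from the patch itself.

/*
 * Sketch only: map a regular file read-only with PROT_EXEC, the kind
 * of vma that readonly FS THP targets.  The path is a placeholder.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/usr/bin/ls";       /* placeholder path */
        struct stat st;
        int fd = open(path, O_RDONLY);

        if (fd < 0 || fstat(fd, &st) || !S_ISREG(st.st_mode)) {
                perror(path);
                return 1;
        }

        /* PROT_EXEC gives the vma VM_EXEC, one of the checked flags. */
        void *map = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_EXEC,
                         MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED) {
                perror("mmap");
                close(fd);
                return 1;
        }

        /* Use the mapping; khugepaged may later collapse aligned,
         * PMD-sized extents of it if the file is large enough. */
        (void)*(volatile unsigned char *)map;

        munmap(map, (size_t)st.st_size);
        close(fd);
        return 0;
}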
From patchwork Tue May 10 20:32:19 2022
X-Patchwork-Id: 12845491
From: Yang Shi
Subject: [v4 PATCH 5/8] mm: khugepaged: make khugepaged_enter() void function
Date: Tue, 10 May 2022 13:32:19 -0700
Message-Id: <20220510203222.24246-6-shy828301@gmail.com>
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

Most callers of khugepaged_enter() don't care about the return value.
Only dup_mmap(), the anonymous THP page fault and MADV_HUGEPAGE handle
the error by returning -ENOMEM, and it is not harmful for them to
ignore the error case either.  It also seems like overkill to fail
fork() or a page fault early due to a khugepaged_enter() error, and
MADV_HUGEPAGE sets the VM_HUGEPAGE flag regardless of the error.
Acked-by: Song Liu
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 include/linux/khugepaged.h | 30 ++++++++++++------------------
 kernel/fork.c              |  4 +---
 mm/huge_memory.c           |  4 ++--
 mm/khugepaged.c            | 18 +++++++-----------
 4 files changed, 22 insertions(+), 34 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 2fcc01891b47..0423d3619f26 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -12,10 +12,10 @@ extern struct attribute_group khugepaged_attr_group;
 extern int khugepaged_init(void);
 extern void khugepaged_destroy(void);
 extern int start_stop_khugepaged(void);
-extern int __khugepaged_enter(struct mm_struct *mm);
+extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
-extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-                                     unsigned long vm_flags);
+extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+                                      unsigned long vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
 #ifdef CONFIG_SHMEM
 extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
@@ -40,11 +40,10 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
        (transparent_hugepage_flags &                           \
         (1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG))
 
-static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
        if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags))
-               return __khugepaged_enter(mm);
-       return 0;
+               __khugepaged_enter(mm);
 }
 
 static inline void khugepaged_exit(struct mm_struct *mm)
@@ -53,7 +52,7 @@ static inline void khugepaged_exit(struct mm_struct *mm)
                __khugepaged_exit(mm);
 }
 
-static inline int khugepaged_enter(struct vm_area_struct *vma,
+static inline void khugepaged_enter(struct vm_area_struct *vma,
                                   unsigned long vm_flags)
 {
        if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
@@ -62,27 +61,22 @@ static inline int khugepaged_enter(struct vm_area_struct *vma,
                    (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) &&
                    !(vm_flags & VM_NOHUGEPAGE) &&
                    !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
-                       if (__khugepaged_enter(vma->vm_mm))
-                               return -ENOMEM;
-       return 0;
+                       __khugepaged_enter(vma->vm_mm);
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
-static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
-       return 0;
 }
 static inline void khugepaged_exit(struct mm_struct *mm)
 {
 }
-static inline int khugepaged_enter(struct vm_area_struct *vma,
-                                  unsigned long vm_flags)
+static inline void khugepaged_enter(struct vm_area_struct *vma,
+                                   unsigned long vm_flags)
 {
-       return 0;
 }
-static inline int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-                                            unsigned long vm_flags)
+static inline void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+                                             unsigned long vm_flags)
 {
-       return 0;
 }
 static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
                                           unsigned long addr)
diff --git a/kernel/fork.c b/kernel/fork.c
index 536dc3289734..6692f5d78371 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -608,9 +608,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
        retval = ksm_fork(mm, oldmm);
        if (retval)
                goto out;
-       retval = khugepaged_fork(mm, oldmm);
-       if (retval)
-               goto out;
+       khugepaged_fork(mm, oldmm);
 
        retval = mas_expected_entries(&mas, oldmm->map_count);
        if (retval)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 82434a9d4499..80e8b58b4f39 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -726,8 +726,8 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
                return VM_FAULT_FALLBACK;
        if (unlikely(anon_vma_prepare(vma)))
                return VM_FAULT_OOM;
-       if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
-               return VM_FAULT_OOM;
+       khugepaged_enter(vma, vma->vm_flags);
+
        if (!(vmf->flags & FAULT_FLAG_WRITE) &&
                        !mm_forbids_zeropage(vma->vm_mm) &&
                        transparent_hugepage_use_zero_page()) {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index c0d3215008ba..7815218ab960 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -365,8 +365,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
                 * register it here without waiting a page fault that
                 * may not happen any time soon.
                 */
-               if (khugepaged_enter_vma_merge(vma, *vm_flags))
-                       return -ENOMEM;
+               khugepaged_enter_vma_merge(vma, *vm_flags);
                break;
        case MADV_NOHUGEPAGE:
                *vm_flags &= ~VM_HUGEPAGE;
@@ -475,20 +474,20 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
        return true;
 }
 
-int __khugepaged_enter(struct mm_struct *mm)
+void __khugepaged_enter(struct mm_struct *mm)
 {
        struct mm_slot *mm_slot;
        int wakeup;
 
        mm_slot = alloc_mm_slot();
        if (!mm_slot)
-               return -ENOMEM;
+               return;
 
        /* __khugepaged_exit() must not run from under us */
        VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
        if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
                free_mm_slot(mm_slot);
-               return 0;
+               return;
        }
 
        spin_lock(&khugepaged_mm_lock);
@@ -504,11 +503,9 @@ int __khugepaged_enter(struct mm_struct *mm)
        mmgrab(mm);
        if (wakeup)
                wake_up_interruptible(&khugepaged_wait);
-
-       return 0;
 }
 
-int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
                               unsigned long vm_flags)
 {
        unsigned long hstart, hend;
@@ -519,13 +516,12 @@ int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
         * file-private shmem THP is not supported.
         */
        if (!hugepage_vma_check(vma, vm_flags))
-               return 0;
+               return;
 
        hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
        hend = vma->vm_end & HPAGE_PMD_MASK;
        if (hstart < hend)
-               return khugepaged_enter(vma, vm_flags);
-       return 0;
+               khugepaged_enter(vma, vm_flags);
 }
 
 void __khugepaged_exit(struct mm_struct *mm)
From patchwork Tue May 10 20:32:20 2022
X-Patchwork-Id: 12845492
From: Yang Shi
Subject: [v4 PATCH 6/8] mm: khugepaged: make hugepage_vma_check() non-static
Date: Tue, 10 May 2022 13:32:20 -0700
Message-Id: <20220510203222.24246-7-shy828301@gmail.com>
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

The hugepage_vma_check() could be reused by khugepaged_enter() and
khugepaged_enter_vma_merge(), but it is static in khugepaged.c.  Make it
non-static and declare it in khugepaged.h.
Suggested-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 include/linux/khugepaged.h | 14 ++++++--------
 mm/khugepaged.c            | 25 +++++++++----------------
 2 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 0423d3619f26..c340f6bb39d6 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -3,8 +3,6 @@
 #define _LINUX_KHUGEPAGED_H
 
 #include <linux/sched/coredump.h> /* MMF_VM_HUGEPAGE */
-#include <linux/shmem_fs.h>
-
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 extern struct attribute_group khugepaged_attr_group;
@@ -12,6 +10,8 @@ extern struct attribute_group khugepaged_attr_group;
 extern int khugepaged_init(void);
 extern void khugepaged_destroy(void);
 extern int start_stop_khugepaged(void);
+extern bool hugepage_vma_check(struct vm_area_struct *vma,
+                              unsigned long vm_flags);
 extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
 extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
@@ -55,13 +55,11 @@ static inline void khugepaged_exit(struct mm_struct *mm)
 static inline void khugepaged_enter(struct vm_area_struct *vma,
                                    unsigned long vm_flags)
 {
-       if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
-               if ((khugepaged_always() ||
-                   (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) ||
-                   (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) &&
-                   !(vm_flags & VM_NOHUGEPAGE) &&
-                   !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+       if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
+           khugepaged_enabled()) {
+               if (hugepage_vma_check(vma, vm_flags))
                        __khugepaged_enter(vma->vm_mm);
+       }
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7815218ab960..dec449339964 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -437,8 +437,8 @@ static inline int khugepaged_test_exit(struct mm_struct *mm)
        return atomic_read(&mm->mm_users) == 0;
 }
 
-static bool hugepage_vma_check(struct vm_area_struct *vma,
-                              unsigned long vm_flags)
+bool hugepage_vma_check(struct vm_area_struct *vma,
+                       unsigned long vm_flags)
 {
        if (!transhuge_vma_enabled(vma, vm_flags))
                return false;
@@ -508,20 +508,13 @@ void __khugepaged_enter(struct mm_struct *mm)
 void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
                                unsigned long vm_flags)
 {
-       unsigned long hstart, hend;
-
-       /*
-        * khugepaged only supports read-only files for non-shmem files.
-        * khugepaged does not yet work on special mappings. And
-        * file-private shmem THP is not supported.
-        */
-       if (!hugepage_vma_check(vma, vm_flags))
-               return;
-
-       hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
-       hend = vma->vm_end & HPAGE_PMD_MASK;
-       if (hstart < hend)
-               khugepaged_enter(vma, vm_flags);
+       if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
+           khugepaged_enabled() &&
+           (((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
+            (vma->vm_end & HPAGE_PMD_MASK))) {
+               if (hugepage_vma_check(vma, vm_flags))
+                       __khugepaged_enter(vma->vm_mm);
+       }
 }
 
 void __khugepaged_exit(struct mm_struct *mm)
From patchwork Tue May 10 20:32:21 2022
X-Patchwork-Id: 12845493
From: Yang Shi
Subject: [v4 PATCH 7/8] mm: khugepaged: introduce khugepaged_enter_vma() helper
Date: Tue, 10 May 2022 13:32:21 -0700
Message-Id: <20220510203222.24246-8-shy828301@gmail.com>
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

The khugepaged_enter_vma_merge() actually does the same thing as the
khugepaged_enter() call in shmem_mmap(), so consolidate them into one
helper and rename it to khugepaged_enter_vma().

Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 include/linux/khugepaged.h |  8 ++++----
 mm/khugepaged.c            |  6 +++---
 mm/mmap.c                  | 12 ++++++------
 mm/shmem.c                 | 12 ++----------
 4 files changed, 15 insertions(+), 23 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index c340f6bb39d6..392d34c3c59a 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -14,8 +14,8 @@ extern bool hugepage_vma_check(struct vm_area_struct *vma,
                               unsigned long vm_flags);
 extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
-extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-                                      unsigned long vm_flags);
+extern void khugepaged_enter_vma(struct vm_area_struct *vma,
+                                unsigned long vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
 #ifdef CONFIG_SHMEM
 extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
@@ -72,8 +72,8 @@ static inline void khugepaged_enter(struct vm_area_struct *vma,
                                    unsigned long vm_flags)
 {
 }
-static inline void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-                                             unsigned long vm_flags)
+static inline void khugepaged_enter_vma(struct vm_area_struct *vma,
+                                       unsigned long vm_flags)
 {
 }
 static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index dec449339964..32db587c5224 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -365,7 +365,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
                 * register it here without waiting a page fault that
                 * may not happen any time soon.
                 */
-               khugepaged_enter_vma_merge(vma, *vm_flags);
+               khugepaged_enter_vma(vma, *vm_flags);
                break;
        case MADV_NOHUGEPAGE:
                *vm_flags &= ~VM_HUGEPAGE;
@@ -505,8 +505,8 @@ void __khugepaged_enter(struct mm_struct *mm)
                wake_up_interruptible(&khugepaged_wait);
 }
 
-void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-                               unsigned long vm_flags)
+void khugepaged_enter_vma(struct vm_area_struct *vma,
+                         unsigned long vm_flags)
 {
        if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
            khugepaged_enabled() &&
diff --git a/mm/mmap.c b/mm/mmap.c
index 3445a8c304af..34ff1600426c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1122,7 +1122,7 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
                                         end, prev->vm_pgoff, NULL, prev);
                if (err)
                        return NULL;
-               khugepaged_enter_vma_merge(prev, vm_flags);
+               khugepaged_enter_vma(prev, vm_flags);
                return prev;
        }
 
@@ -1149,7 +1149,7 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
                }
                if (err)
                        return NULL;
-               khugepaged_enter_vma_merge(area, vm_flags);
+               khugepaged_enter_vma(area, vm_flags);
                return area;
        }
 
@@ -2046,7 +2046,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
                }
        }
        anon_vma_unlock_write(vma->anon_vma);
-       khugepaged_enter_vma_merge(vma, vma->vm_flags);
+       khugepaged_enter_vma(vma, vma->vm_flags);
        return error;
 }
 #endif /* CONFIG_STACK_GROWSUP || CONFIG_IA64 */
@@ -2127,7 +2127,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
                }
        }
        anon_vma_unlock_write(vma->anon_vma);
-       khugepaged_enter_vma_merge(vma, vma->vm_flags);
+       khugepaged_enter_vma(vma, vma->vm_flags);
        return error;
 }
 
@@ -2635,7 +2635,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
        /* Actually expand, if possible */
        if (vma &&
            !vma_expand(&mas, vma, merge_start, merge_end, vm_pgoff, next)) {
-               khugepaged_enter_vma_merge(vma, vm_flags);
+               khugepaged_enter_vma(vma, vm_flags);
                goto expanded;
        }
 
@@ -3051,7 +3051,7 @@ static int do_brk_flags(struct ma_state *mas, struct vm_area_struct *vma,
                anon_vma_interval_tree_post_update_vma(vma);
                anon_vma_unlock_write(vma->anon_vma);
        }
-       khugepaged_enter_vma_merge(vma, flags);
+       khugepaged_enter_vma(vma, flags);
        goto out;
 }
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 29701be579f8..89f6f4fec3f9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2232,11 +2232,7 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 
        file_accessed(file);
        vma->vm_ops = &shmem_vm_ops;
-       if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-                       ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
-                       (vma->vm_end & HPAGE_PMD_MASK)) {
-               khugepaged_enter(vma, vma->vm_flags);
-       }
+       khugepaged_enter_vma(vma, vma->vm_flags);
        return 0;
 }
 
@@ -4137,11 +4133,7 @@ int shmem_zero_setup(struct vm_area_struct *vma)
        vma->vm_file = file;
        vma->vm_ops = &shmem_vm_ops;
 
-       if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-                       ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
-                       (vma->vm_end & HPAGE_PMD_MASK)) {
-               khugepaged_enter(vma, vma->vm_flags);
-       }
+       khugepaged_enter_vma(vma, vma->vm_flags);
 
        return 0;
 }
From patchwork Tue May 10 20:32:22 2022
X-Patchwork-Id: 12845494
From: Yang Shi
Subject: [v4 PATCH 8/8] mm: mmap: register suitable readonly file vmas for khugepaged
Date: Tue, 10 May 2022 13:32:22 -0700
Message-Id: <20220510203222.24246-9-shy828301@gmail.com>
In-Reply-To: <20220510203222.24246-1-shy828301@gmail.com>
References: <20220510203222.24246-1-shy828301@gmail.com>

Readonly FS THP relies on khugepaged to collapse THP for suitable vmas,
but the behavior is inconsistent for "always" mode
(https://lore.kernel.org/linux-mm/00f195d4-d039-3cf2-d3a1-a2c88de397a0@suse.cz/).

"Always" mode means THP allocation should be tried all the time and
khugepaged should try to collapse THP all the time (of course the
allocation and collapse may still fail due to other factors and
conditions).  Currently a file THP may not be collapsed by khugepaged
even though all the conditions are met, which breaks the semantics of
"always" mode.

So make sure readonly FS vmas are registered with khugepaged to fix the
break.  Register suitable vmas in the common mmap path; that covers
both readonly FS vmas and shmem vmas, so the khugepaged calls in
shmem.c are removed.  The khugepaged call in vma_merge() still needs to
be kept since vma_merge() is called in a lot of places, for example
madvise, mprotect, etc.

Reported-by: Vlastimil Babka
Acked-by: Vlastimil Babka
Signed-off-by: Yang Shi
---
 mm/mmap.c  | 6 ++++++
 mm/shmem.c | 4 ----
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 34ff1600426c..6d7a6c7b50bb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2745,6 +2745,12 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
                i_mmap_unlock_write(vma->vm_file->f_mapping);
        }
 
+       /*
+        * vma_merge() calls khugepaged_enter_vma() either, the below
+        * call covers the non-merge case.
+        */
+       khugepaged_enter_vma(vma, vma->vm_flags);
+
        /* Once vma denies write, undo our temporary denial count */
 unmap_writable:
        if (file && vm_flags & VM_SHARED)
diff --git a/mm/shmem.c b/mm/shmem.c
index 89f6f4fec3f9..67a3f3b05fb2 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -34,7 +34,6 @@
 #include
 #include
 #include
-#include <linux/khugepaged.h>
 #include
 #include
 #include
@@ -2232,7 +2231,6 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 
        file_accessed(file);
        vma->vm_ops = &shmem_vm_ops;
-       khugepaged_enter_vma(vma, vma->vm_flags);
        return 0;
 }
 
@@ -4133,8 +4131,6 @@ int shmem_zero_setup(struct vm_area_struct *vma)
        vma->vm_file = file;
        vma->vm_ops = &shmem_vm_ops;
 
-       khugepaged_enter_vma(vma, vma->vm_flags);
-
        return 0;
 }