From patchwork Mon Apr  4 20:02:43 2022
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, songliubraving@fb.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 1/8] sched: coredump.h: clarify the use of MMF_VM_HUGEPAGE
Date: Mon, 4 Apr 2022 13:02:43 -0700
Message-Id: <20220404200250.321455-2-shy828301@gmail.com>
In-Reply-To: <20220404200250.321455-1-shy828301@gmail.com>

MMF_VM_HUGEPAGE is set as soon as the mm is made available to khugepaged by
khugepaged_enter(), not only when VM_HUGEPAGE is set on a vma. Correct the
comment to avoid confusion.
Reviewed-by: Miaohe Lin
Acked-by: Song Liu
Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
---
 include/linux/sched/coredump.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 4d9e3a656875..4d0a5be28b70 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -57,7 +57,8 @@ static inline int get_dumpable(struct mm_struct *mm)
 #endif
 					/* leave room for more dump flags */
 #define MMF_VM_MERGEABLE	16	/* KSM may merge identical pages */
-#define MMF_VM_HUGEPAGE		17	/* set when VM_HUGEPAGE is set on vma */
+#define MMF_VM_HUGEPAGE		17	/* set when mm is available for
+					   khugepaged */
 /*
  * This one-shot flag is dropped due to necessity of changing exe once again
  * on NFS restore

From patchwork Mon Apr  4 20:02:44 2022
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, songliubraving@fb.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 2/8] mm: khugepaged: remove redundant check for VM_NO_KHUGEPAGED
Date: Mon, 4 Apr 2022 13:02:44 -0700
Message-Id: <20220404200250.321455-3-shy828301@gmail.com>
In-Reply-To: <20220404200250.321455-1-shy828301@gmail.com>

The hugepage_vma_check() called by khugepaged_enter_vma_merge() already
checks VM_NO_KHUGEPAGED. Remove the check from the caller and move the
check in hugepage_vma_check() up. A few more checks may now run for
VM_NO_KHUGEPAGED vmas, but MADV_HUGEPAGE is definitely not a hot path, so
the cleaner code outweighs the cost.

Reviewed-by: Miaohe Lin
Acked-by: Song Liu
Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
---
 mm/khugepaged.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a4e5eaf3eb01..7d197d9e3258 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -365,8 +365,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
		 * register it here without waiting a page fault that
		 * may not happen any time soon.
		 */
-		if (!(*vm_flags & VM_NO_KHUGEPAGED) &&
-				khugepaged_enter_vma_merge(vma, *vm_flags))
+		if (khugepaged_enter_vma_merge(vma, *vm_flags))
			return -ENOMEM;
		break;
	case MADV_NOHUGEPAGE:
@@ -445,6 +444,9 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
	if (!transhuge_vma_enabled(vma, vm_flags))
		return false;

+	if (vm_flags & VM_NO_KHUGEPAGED)
+		return false;
+
	if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
				vma->vm_pgoff, HPAGE_PMD_NR))
		return false;
@@ -470,7 +472,8 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
		return false;
	if (vma_is_temporary_stack(vma))
		return false;
-	return !(vm_flags & VM_NO_KHUGEPAGED);
+
+	return true;
 }

 int __khugepaged_enter(struct mm_struct *mm)

From patchwork Mon Apr  4 20:02:45 2022
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, songliubraving@fb.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 3/8] mm: khugepaged: skip DAX vma
Date: Mon, 4 Apr 2022 13:02:45 -0700
Message-Id: <20220404200250.321455-4-shy828301@gmail.com>
In-Reply-To: <20220404200250.321455-1-shy828301@gmail.com>

A DAX vma may be seen by khugepaged when the mm has other vmas suitable for
khugepaged. So khugepaged may try to collapse a THP for the DAX vma, but
the attempt will fail the page sanity checks, for example because the page
is not on the LRU. So it is not harmful, but it is definitely pointless to
run khugepaged against a DAX vma; skip it in the early check.
Reviewed-by: Miaohe Lin
Acked-by: Song Liu
Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
---
 mm/khugepaged.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7d197d9e3258..964a4d2c942a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -447,6 +447,10 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
	if (vm_flags & VM_NO_KHUGEPAGED)
		return false;

+	/* Don't run khugepaged against DAX vma */
+	if (vma_is_dax(vma))
+		return false;
+
	if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
				vma->vm_pgoff, HPAGE_PMD_NR))
		return false;

From patchwork Mon Apr  4 20:02:46 2022
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, songliubraving@fb.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 4/8] mm: thp: only regular file could be THP eligible
Date: Mon, 4 Apr 2022 13:02:46 -0700
Message-Id: <20220404200250.321455-5-shy828301@gmail.com>
In-Reply-To: <20220404200250.321455-1-shy828301@gmail.com>

Since commit a4aeaa06d45e ("mm: khugepaged: skip huge page collapse for
special files"), khugepaged only collapses THPs for regular files, which is
the intended use case for readonly fs THP. Only show regular files as THP
eligible accordingly.

And make file_thp_enabled() available to khugepaged too in order to remove
duplicate code.
Acked-by: Song Liu
Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
---
 include/linux/huge_mm.h | 14 ++++++++++++++
 mm/huge_memory.c        | 11 ++---------
 mm/khugepaged.c         |  9 ++-------
 3 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2999190adc22..62a6f667850d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -172,6 +172,20 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
	return false;
 }

+static inline bool file_thp_enabled(struct vm_area_struct *vma)
+{
+	struct inode *inode;
+
+	if (!vma->vm_file)
+		return false;
+
+	inode = vma->vm_file->f_inode;
+
+	return (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS)) &&
+	       (vma->vm_flags & VM_EXEC) &&
+	       !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
+}
+
 bool transparent_hugepage_active(struct vm_area_struct *vma);

 #define transparent_hugepage_use_zero_page()				\
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2fe38212e07c..183b793fd28e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -68,13 +68,6 @@ static atomic_t huge_zero_refcount;
 struct page *huge_zero_page __read_mostly;
 unsigned long huge_zero_pfn __read_mostly = ~0UL;

-static inline bool file_thp_enabled(struct vm_area_struct *vma)
-{
-	return transhuge_vma_enabled(vma, vma->vm_flags) && vma->vm_file &&
-	       !inode_is_open_for_write(vma->vm_file->f_inode) &&
-	       (vma->vm_flags & VM_EXEC);
-}
-
 bool transparent_hugepage_active(struct vm_area_struct *vma)
 {
	/* The addr is used to check if the vma size fits */
@@ -86,8 +79,8 @@ bool transparent_hugepage_active(struct vm_area_struct *vma)
		return __transparent_hugepage_enabled(vma);
	if (vma_is_shmem(vma))
		return shmem_huge_enabled(vma);
-	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
-		return file_thp_enabled(vma);
+	if (transhuge_vma_enabled(vma, vma->vm_flags) && file_thp_enabled(vma))
+		return true;

	return false;
 }
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 964a4d2c942a..609c1bc0a027 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -464,13 +464,8 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
		return false;

	/* Only regular file is valid */
-	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file &&
-	    (vm_flags & VM_EXEC)) {
-		struct inode *inode = vma->vm_file->f_inode;
-
-		return !inode_is_open_for_write(inode) &&
-			S_ISREG(inode->i_mode);
-	}
+	if (file_thp_enabled(vma))
+		return true;

	if (!vma->anon_vma || vma->vm_ops)
		return false;

From patchwork Mon Apr  4 20:02:47 2022
From: Yang Shi
To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, songliubraving@fb.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [v3 PATCH 5/8] mm: khugepaged: make khugepaged_enter() void function
Date: Mon, 4 Apr 2022 13:02:47 -0700
Message-Id: <20220404200250.321455-6-shy828301@gmail.com>
In-Reply-To: <20220404200250.321455-1-shy828301@gmail.com>

Most callers of khugepaged_enter() don't care about the return value. Only
dup_mmap(), the anonymous THP page fault path and MADV_HUGEPAGE handle the
error by returning -ENOMEM. Actually it is not harmful for them to ignore
the error case either. It also seems like overkill to fail fork() and page
faults early due to a khugepaged_enter() error, and MADV_HUGEPAGE sets the
VM_HUGEPAGE flag regardless of the error.
Acked-by: Song Liu
Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
---
 include/linux/khugepaged.h | 30 ++++++++++++------------------
 kernel/fork.c              |  4 +---
 mm/huge_memory.c           |  4 ++--
 mm/khugepaged.c            | 18 +++++++-----------
 4 files changed, 22 insertions(+), 34 deletions(-)

diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 2fcc01891b47..0423d3619f26 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -12,10 +12,10 @@ extern struct attribute_group khugepaged_attr_group;
 extern int khugepaged_init(void);
 extern void khugepaged_destroy(void);
 extern int start_stop_khugepaged(void);
-extern int __khugepaged_enter(struct mm_struct *mm);
+extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
-extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-				      unsigned long vm_flags);
+extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+				       unsigned long vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
 #ifdef CONFIG_SHMEM
 extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr);
@@ -40,11 +40,10 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
	(transparent_hugepage_flags &				\
	 (1<<TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG))

-static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
	if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags))
-		return __khugepaged_enter(mm);
-	return 0;
+		__khugepaged_enter(mm);
 }

 static inline void khugepaged_exit(struct mm_struct *mm)
@@ -53,7 +52,7 @@ static inline void khugepaged_exit(struct mm_struct *mm)
		__khugepaged_exit(mm);
 }

-static inline int khugepaged_enter(struct vm_area_struct *vma,
+static inline void khugepaged_enter(struct vm_area_struct *vma,
				   unsigned long vm_flags)
 {
	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags))
@@ -62,27 +61,22 @@ static inline int khugepaged_enter(struct vm_area_struct *vma,
		    (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) &&
		    !(vm_flags & VM_NOHUGEPAGE) &&
		    !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
-			if (__khugepaged_enter(vma->vm_mm))
-				return -ENOMEM;
-	return 0;
+			__khugepaged_enter(vma->vm_mm);
 }
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
-static inline int khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
-	return 0;
 }
 static inline void khugepaged_exit(struct mm_struct *mm)
 {
 }
-static inline int khugepaged_enter(struct vm_area_struct *vma,
-				   unsigned long vm_flags)
+static inline void khugepaged_enter(struct vm_area_struct *vma,
+				    unsigned long vm_flags)
 {
-	return 0;
 }
-static inline int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
-					     unsigned long vm_flags)
+static inline void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+					      unsigned long vm_flags)
 {
-	return 0;
 }
 static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
					   unsigned long addr)
diff --git a/kernel/fork.c b/kernel/fork.c
index 9796897560ab..0d13baf86650 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -612,9 +612,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
	retval = ksm_fork(mm, oldmm);
	if (retval)
		goto out;
-	retval = khugepaged_fork(mm, oldmm);
-	if (retval)
-		goto out;
+	khugepaged_fork(mm, oldmm);

	prev = NULL;
	for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 183b793fd28e..4fd5a6a79d44 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -725,8 +725,8 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
		return VM_FAULT_FALLBACK;
	if (unlikely(anon_vma_prepare(vma)))
		return VM_FAULT_OOM;
-	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
-		return VM_FAULT_OOM;
+	khugepaged_enter(vma, vma->vm_flags);
+
	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
	    !mm_forbids_zeropage(vma->vm_mm) &&
	    transparent_hugepage_use_zero_page()) {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 609c1bc0a027..b69eda934d70 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -365,8 +365,7 @@ int hugepage_madvise(struct vm_area_struct *vma,
		 * register it here without waiting a page fault that
		 * may not happen any time soon.
		 */
-		if (khugepaged_enter_vma_merge(vma, *vm_flags))
-			return -ENOMEM;
+		khugepaged_enter_vma_merge(vma, *vm_flags);
		break;
	case MADV_NOHUGEPAGE:
		*vm_flags &= ~VM_HUGEPAGE;
@@ -475,20 +474,20 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
	return true;
 }

-int __khugepaged_enter(struct mm_struct *mm)
+void __khugepaged_enter(struct mm_struct *mm)
 {
	struct mm_slot *mm_slot;
	int wakeup;

	mm_slot = alloc_mm_slot();
	if (!mm_slot)
-		return -ENOMEM;
+		return;

	/* __khugepaged_exit() must not run from under us */
	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
		free_mm_slot(mm_slot);
-		return 0;
+		return;
	}

	spin_lock(&khugepaged_mm_lock);
@@ -504,11 +503,9 @@ int __khugepaged_enter(struct mm_struct *mm)
	mmgrab(mm);
	if (wakeup)
		wake_up_interruptible(&khugepaged_wait);
-
-	return 0;
 }

-int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
+void khugepaged_enter_vma_merge(struct vm_area_struct *vma,
			       unsigned long vm_flags)
 {
	unsigned long hstart, hend;
@@ -519,13 +516,12 @@ int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
	 * file-private shmem THP is not supported.
	 */
	if (!hugepage_vma_check(vma, vm_flags))
-		return 0;
+		return;

	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
	hend = vma->vm_end & HPAGE_PMD_MASK;
	if (hstart < hend)
-		return khugepaged_enter(vma, vm_flags);
-	return 0;
+		khugepaged_enter(vma, vm_flags);
 }

 void __khugepaged_exit(struct mm_struct *mm)

From patchwork Mon Apr  4 20:02:48 2022
From: Yang Shi To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, songliubraving@fb.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v3 PATCH 6/8] mm: khugepaged: move some khugepaged_* functions to khugepaged.c Date: Mon, 4 Apr 2022 13:02:48 -0700 Message-Id: <20220404200250.321455-7-shy828301@gmail.com> In-Reply-To: <20220404200250.321455-1-shy828301@gmail.com> References: <20220404200250.321455-1-shy828301@gmail.com> Reusing hugepage_vma_check() in khugepaged_enter() would let us remove some duplicate code. But moving hugepage_vma_check() into khugepaged.h would require including huge_mm.h there, and bloating khugepaged.h that way seems undesirable. The khugepaged_* functions are just wrappers around non-inline functions anyway, so keeping them inline buys little. So move the khugepaged_* functions to khugepaged.c; callers then only need to include khugepaged.h, which stays small. For example, the following patches will call khugepaged_enter() in the filemap page fault path for regular filesystems to make readonly FS THP collapse more consistent; filemap.c just needs to include khugepaged.h.
Acked-by: Song Liu Signed-off-by: Yang Shi --- include/linux/khugepaged.h | 37 ++++++------------------------------- mm/khugepaged.c | 20 ++++++++++++++++++++ 2 files changed, 26 insertions(+), 31 deletions(-) diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h index 0423d3619f26..6acf9701151e 100644 --- a/include/linux/khugepaged.h +++ b/include/linux/khugepaged.h @@ -2,10 +2,6 @@ #ifndef _LINUX_KHUGEPAGED_H #define _LINUX_KHUGEPAGED_H -#include <linux/sched/coredump.h> /* MMF_VM_HUGEPAGE */ -#include <linux/shmem_fs.h> - - #ifdef CONFIG_TRANSPARENT_HUGEPAGE extern struct attribute_group khugepaged_attr_group; @@ -16,6 +12,12 @@ extern void __khugepaged_enter(struct mm_struct *mm); extern void __khugepaged_exit(struct mm_struct *mm); extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma, unsigned long vm_flags); +extern void khugepaged_fork(struct mm_struct *mm, + struct mm_struct *oldmm); +extern void khugepaged_exit(struct mm_struct *mm); +extern void khugepaged_enter(struct vm_area_struct *vma, + unsigned long vm_flags); + extern void khugepaged_min_free_kbytes_update(void); #ifdef CONFIG_SHMEM extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr); @@ -33,36 +35,9 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm, #define khugepaged_always() \ (transparent_hugepage_flags & \ (1<<TRANSPARENT_HUGEPAGE_FLAG)) - -static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm) -{ - if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags)) - __khugepaged_enter(mm); -} - -static inline void khugepaged_exit(struct mm_struct *mm) -{ - if (test_bit(MMF_VM_HUGEPAGE, &mm->flags)) - __khugepaged_exit(mm); -} - -static inline void khugepaged_enter(struct vm_area_struct *vma, - unsigned long vm_flags) -{ - if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags)) - if ((khugepaged_always() || - (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) || - (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) && - !(vm_flags & VM_NOHUGEPAGE) && - !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)) - __khugepaged_enter(vma->vm_mm); -} #else /* CONFIG_TRANSPARENT_HUGEPAGE */ static inline void
khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm) { diff --git a/mm/khugepaged.c b/mm/khugepaged.c index b69eda934d70..ec5b0a691d87 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -556,6 +556,26 @@ void __khugepaged_exit(struct mm_struct *mm) } } +void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm) +{ + if (test_bit(MMF_VM_HUGEPAGE, &oldmm->flags)) + __khugepaged_enter(mm); +} + +void khugepaged_exit(struct mm_struct *mm) +{ + if (test_bit(MMF_VM_HUGEPAGE, &mm->flags)) + __khugepaged_exit(mm); +} + +void khugepaged_enter(struct vm_area_struct *vma, unsigned long vm_flags) +{ + if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) && + khugepaged_enabled()) + if (hugepage_vma_check(vma, vm_flags)) + __khugepaged_enter(vma->vm_mm); +} + static void release_pte_page(struct page *page) { mod_node_page_state(page_pgdat(page), From patchwork Mon Apr 4 20:02:49 2022
From: Yang Shi To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, songliubraving@fb.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v3 PATCH 7/8] mm: khugepaged: introduce khugepaged_enter_vma() helper Date: Mon, 4 Apr 2022 13:02:49 -0700 Message-Id: <20220404200250.321455-8-shy828301@gmail.com> In-Reply-To: <20220404200250.321455-1-shy828301@gmail.com> References: <20220404200250.321455-1-shy828301@gmail.com> khugepaged_enter_vma_merge() does the same thing as the khugepaged_enter() call in shmem_mmap(), so consolidate them into one helper and rename it to khugepaged_enter_vma().
Signed-off-by: Yang Shi Acked-by: Vlastimil Babka --- include/linux/khugepaged.h | 8 ++++---- mm/khugepaged.c | 26 +++++++++----------------- mm/mmap.c | 8 ++++---- mm/shmem.c | 12 ++---------- 4 files changed, 19 insertions(+), 35 deletions(-) diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h index 6acf9701151e..f4b12be155ab 100644 --- a/include/linux/khugepaged.h +++ b/include/linux/khugepaged.h @@ -10,8 +10,8 @@ extern void khugepaged_destroy(void); extern int start_stop_khugepaged(void); extern void __khugepaged_enter(struct mm_struct *mm); extern void __khugepaged_exit(struct mm_struct *mm); -extern void khugepaged_enter_vma_merge(struct vm_area_struct *vma, - unsigned long vm_flags); +extern void khugepaged_enter_vma(struct vm_area_struct *vma, + unsigned long vm_flags); extern void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm); extern void khugepaged_exit(struct mm_struct *mm); @@ -49,8 +49,8 @@ static inline void khugepaged_enter(struct vm_area_struct *vma, unsigned long vm_flags) { } -static inline void khugepaged_enter_vma_merge(struct vm_area_struct *vma, - unsigned long vm_flags) +static inline void khugepaged_enter_vma(struct vm_area_struct *vma, + unsigned long vm_flags) { } static inline void collapse_pte_mapped_thp(struct mm_struct *mm, diff --git a/mm/khugepaged.c b/mm/khugepaged.c index ec5b0a691d87..c5c3202d7401 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -365,7 +365,7 @@ int hugepage_madvise(struct vm_area_struct *vma, * register it here without waiting a page fault that * may not happen any time soon. 
*/ - khugepaged_enter_vma_merge(vma, *vm_flags); + khugepaged_enter_vma(vma, *vm_flags); break; case MADV_NOHUGEPAGE: *vm_flags &= ~VM_HUGEPAGE; @@ -505,23 +505,15 @@ void __khugepaged_enter(struct mm_struct *mm) wake_up_interruptible(&khugepaged_wait); } -void khugepaged_enter_vma_merge(struct vm_area_struct *vma, - unsigned long vm_flags) +void khugepaged_enter_vma(struct vm_area_struct *vma, + unsigned long vm_flags) { - unsigned long hstart, hend; - - /* - * khugepaged only supports read-only files for non-shmem files. - * khugepaged does not yet work on special mappings. And - * file-private shmem THP is not supported. - */ - if (!hugepage_vma_check(vma, vm_flags)) - return; - - hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK; - hend = vma->vm_end & HPAGE_PMD_MASK; - if (hstart < hend) - khugepaged_enter(vma, vm_flags); + if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) && + khugepaged_enabled() && + (((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) < + (vma->vm_end & HPAGE_PMD_MASK))) + if (hugepage_vma_check(vma, vm_flags)) + __khugepaged_enter(vma->vm_mm); } void __khugepaged_exit(struct mm_struct *mm) diff --git a/mm/mmap.c b/mm/mmap.c index 3aa839f81e63..604c8dece5dd 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1218,7 +1218,7 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm, end, prev->vm_pgoff, NULL, prev); if (err) return NULL; - khugepaged_enter_vma_merge(prev, vm_flags); + khugepaged_enter_vma(prev, vm_flags); return prev; } @@ -1245,7 +1245,7 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm, } if (err) return NULL; - khugepaged_enter_vma_merge(area, vm_flags); + khugepaged_enter_vma(area, vm_flags); return area; } @@ -2460,7 +2460,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address) } } anon_vma_unlock_write(vma->anon_vma); - khugepaged_enter_vma_merge(vma, vma->vm_flags); + khugepaged_enter_vma(vma, vma->vm_flags); validate_mm(mm); return error; } @@ -2538,7 +2538,7 @@ int expand_downwards(struct 
vm_area_struct *vma, } } anon_vma_unlock_write(vma->anon_vma); - khugepaged_enter_vma_merge(vma, vma->vm_flags); + khugepaged_enter_vma(vma, vma->vm_flags); validate_mm(mm); return error; } diff --git a/mm/shmem.c b/mm/shmem.c index 529c9ad3e926..92eca974771d 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -2239,11 +2239,7 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma) file_accessed(file); vma->vm_ops = &shmem_vm_ops; - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && - ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) < - (vma->vm_end & HPAGE_PMD_MASK)) { - khugepaged_enter(vma, vma->vm_flags); - } + khugepaged_enter_vma(vma, vma->vm_flags); return 0; } @@ -4136,11 +4132,7 @@ int shmem_zero_setup(struct vm_area_struct *vma) vma->vm_file = file; vma->vm_ops = &shmem_vm_ops; - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && - ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) < - (vma->vm_end & HPAGE_PMD_MASK)) { - khugepaged_enter(vma, vma->vm_flags); - } + khugepaged_enter_vma(vma, vma->vm_flags); return 0; } From patchwork Mon Apr 4 20:02:50 2022
From: Yang Shi To: vbabka@suse.cz, kirill.shutemov@linux.intel.com, linmiaohe@huawei.com, songliubraving@fb.com, riel@surriel.com, willy@infradead.org, ziy@nvidia.com, tytso@mit.edu, akpm@linux-foundation.org Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [v3 PATCH 8/8] mm: mmap: register suitable readonly file vmas for khugepaged Date: Mon, 4 Apr 2022 13:02:50 -0700 Message-Id: <20220404200250.321455-9-shy828301@gmail.com> In-Reply-To: <20220404200250.321455-1-shy828301@gmail.com> References: <20220404200250.321455-1-shy828301@gmail.com> Readonly FS THP relies on khugepaged to collapse THPs for suitable vmas. But whether khugepaged ever sees a readonly FS vma is "random luck" (https://lore.kernel.org/linux-mm/00f195d4-d039-3cf2-d3a1-a2c88de397a0@suse.cz/), since currently vmas are registered with khugepaged only on:
- Anon huge pmd page fault
- VMA merge
- MADV_HUGEPAGE
- Shmem mmap
If none of the above happens, khugepaged never sees readonly FS vmas even when it is enabled. MADV_HUGEPAGE can be specified explicitly to make khugepaged collapse an area, but when khugepaged mode is "always" it should scan any suitable vma as long as VM_NOHUGEPAGE is not set. So register readonly FS vmas with khugepaged to make the behavior more consistent. Registering suitable vmas in the common mmap path covers both readonly FS vmas and shmem vmas, so the khugepaged calls in shmem.c are removed. The khugepaged call in vma_merge() is still needed, since vma_merge() is called from many places, for example, madvise, mprotect, etc.
Reported-by: Vlastimil Babka Signed-off-by: Yang Shi Acked-by: Vlastimil Babka --- mm/mmap.c | 6 ++++++ mm/shmem.c | 4 ---- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/mm/mmap.c b/mm/mmap.c index 604c8dece5dd..616ebbc2d052 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1842,6 +1842,12 @@ unsigned long mmap_region(struct file *file, unsigned long addr, } vma_link(mm, vma, prev, rb_link, rb_parent); + + /* + * vma_merge() calls khugepaged_enter_vma() as well; the below + * call covers the non-merge case. + */ + khugepaged_enter_vma(vma, vma->vm_flags); /* Once vma denies write, undo our temporary denial count */ unmap_writable: if (file && vm_flags & VM_SHARED) diff --git a/mm/shmem.c b/mm/shmem.c index 92eca974771d..0c448080d210 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -34,7 +34,6 @@ #include <linux/export.h> #include <linux/swap.h> #include <linux/uio.h> -#include <linux/khugepaged.h> #include <linux/hugetlb.h> #include <linux/fs_parser.h> #include <linux/swapfile.h> @@ -2239,7 +2238,6 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma) file_accessed(file); vma->vm_ops = &shmem_vm_ops; - khugepaged_enter_vma(vma, vma->vm_flags); return 0; } @@ -4132,8 +4130,6 @@ int shmem_zero_setup(struct vm_area_struct *vma) vma->vm_file = file; vma->vm_ops = &shmem_vm_ops; - khugepaged_enter_vma(vma, vma->vm_flags); - return 0; }