From patchwork Tue Mar 15 04:23:55 2022
X-Patchwork-Submitter: luofei
X-Patchwork-Id: 12781023
From: luofei <luofei@unicloud.com>
Subject: [PATCH] hugetlbfs: fix description about atomic allocation of vmemmap pages when free huge page
Date: Tue, 15 Mar 2022 00:23:55 -0400
Message-ID: <20220315042355.362810-1-luofei@unicloud.com>

No matter what context update_and_free_page() is called from, the
flags used to allocate the vmemmap pages are fixed (GFP_KERNEL |
__GFP_NORETRY | __GFP_THISNODE) and no atomic allocation is involved,
so the description of atomicity here is somewhat inappropriate, and
the "atomic" parameter name of update_and_free_page() is somewhat
misleading.
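For context, the allocation the message refers to is the restoration
of a HugeTLB page's discarded vmemmap pages before the page goes back
to the buddy allocator. A minimal sketch of that call shape follows;
restore_vmemmap() and its body are hypothetical stand-ins, not the
actual mm/hugetlb_vmemmap.c code:

/* Illustrative sketch only; restore_vmemmap() is a hypothetical name. */
#include <linux/gfp.h>
#include <linux/mm_types.h>

static int restore_vmemmap(struct page *head)
{
	/*
	 * The mask is hard-coded on every path: GFP_KERNEL (which may
	 * sleep), never GFP_ATOMIC, regardless of the caller's context.
	 * That is why freeing must be deferred to a workqueue when the
	 * caller may be atomic, not because an atomic allocation would
	 * otherwise be used.
	 */
	struct page *page = alloc_page(GFP_KERNEL | __GFP_NORETRY |
				       __GFP_THISNODE);

	if (!page)
		return -ENOMEM;
	/* ... remap the huge page's vmemmap range to the new page ... */
	return 0;
}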
Signed-off-by: luofei <luofei@unicloud.com>
---
 mm/hugetlb.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f8ca7cca3c1a..239ef82b7897 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1570,8 +1570,8 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 
 /*
  * As update_and_free_page() can be called under any context, so we cannot
- * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
- * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
+ * use GFP_ATOMIC to allocate vmemmap pages. However, we can defer the
+ * actual freeing in a workqueue to prevent waits caused by allocating
  * the vmemmap pages.
  *
  * free_hpage_workfn() locklessly retrieves the linked list of pages to be
@@ -1617,16 +1617,14 @@ static inline void flush_free_hpage_work(struct hstate *h)
 }
 
 static void update_and_free_page(struct hstate *h, struct page *page,
-					bool atomic)
+					bool delay)
 {
-	if (!HPageVmemmapOptimized(page) || !atomic) {
+	if (!HPageVmemmapOptimized(page) || !delay) {
 		__update_and_free_page(h, page);
 		return;
 	}
 
 	/*
-	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap pages.
-	 *
 	 * Only call schedule_work() if hpage_freelist is previously
 	 * empty. Otherwise, schedule_work() had been called but the workfn
 	 * hasn't retrieved the list yet.
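For readers following the deferral logic the comments describe, here is
a minimal, self-contained sketch of the pattern update_and_free_page()
relies on. The identifiers are simplified stand-ins for hpage_freelist,
free_hpage_work and free_hpage_workfn, not the exact mm/hugetlb.c
implementation: pages are queued from any context onto a lockless list,
and the sleeping vmemmap allocation happens later in process context.

/* Simplified sketch of the defer-to-workqueue pattern. */
#include <linux/llist.h>
#include <linux/workqueue.h>

static LLIST_HEAD(deferred_pages);	/* stands in for hpage_freelist */

static void deferred_free_workfn(struct work_struct *work)
{
	struct llist_node *node = llist_del_all(&deferred_pages);

	/* Process context: GFP_KERNEL allocations may sleep here. */
	while (node) {
		struct llist_node *next = node->next;

		/*
		 * Equivalent of __update_and_free_page(): re-allocate
		 * the vmemmap pages (GFP_KERNEL | __GFP_NORETRY |
		 * __GFP_THISNODE) and return the huge page to the
		 * buddy allocator.
		 */
		node = next;
	}
}
static DECLARE_WORK(deferred_free_work, deferred_free_workfn);

/* Callable from any context, including atomic: nothing here sleeps. */
static void queue_deferred_free(struct llist_node *entry)
{
	/*
	 * llist_add() returns true only when the list was previously
	 * empty, so schedule_work() is called at most once per batch --
	 * the "Only call schedule_work() if hpage_freelist is previously
	 * empty" rule from the comment in the patch above.
	 */
	if (llist_add(entry, &deferred_pages))
		schedule_work(&deferred_free_work);
}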