From patchwork Wed Mar 16 03:16:02 2022
X-Patchwork-Submitter: luofei
X-Patchwork-Id: 12782170
From: luofei <luofei@unicloud.com>
Subject: [PATCH v2] hugetlb: Fix comments about avoiding atomic allocation of vmemmap pages
Date: Tue, 15 Mar 2022 23:16:02 -0400
Message-ID: <20220316031602.377452-1-luofei@unicloud.com>
X-Mailer: git-send-email 2.27.0

There is no longer an atomic allocation of vmemmap pages; a fixed gfp mask
(GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE) is used instead, so the
existing comments describing the allocation as atomic are somewhat
inappropriate. In addition, the "atomic" parameter name of
update_and_free_page() may be misleading, so add a comment explaining it.
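As an aside, a minimal sketch of an allocation call site using the fixed
mask named above (demo_alloc_vmemmap_page() is an invented helper, not the
hugetlb code); because GFP_KERNEL includes __GFP_RECLAIM, such an
allocation may sleep and therefore must not be issued from atomic context:

#include <linux/gfp.h>

/* Illustration only: a node-local, no-retry allocation that may sleep. */
static struct page *demo_alloc_vmemmap_page(int nid)
{
	const gfp_t gfp_mask = GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE;

	return alloc_pages_node(nid, gfp_mask, 0);
}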
Signed-off-by: luofei <luofei@unicloud.com>
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f8ca7cca3c1a..fbf598bbc4e3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1569,10 +1569,12 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
 }
 
 /*
- * As update_and_free_page() can be called under any context, so we cannot
- * use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
- * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
- * the vmemmap pages.
+ * Freeing hugetlb pages is done in update_and_free_page(). When freeing
+ * a hugetlb page, vmemmap pages may need to be allocated. The routine
+ * alloc_huge_page_vmemmap() can possibly sleep as it uses GFP_KERNEL.
+ * However, update_and_free_page() can be called under any context. To
+ * avoid the possibility of sleeping in a context where sleeping is not
+ * allowed, defer the actual freeing in a workqueue where sleeping is allowed.
  *
  * free_hpage_workfn() locklessly retrieves the linked list of pages to be
  * freed and frees them one-by-one. As the page->mapping pointer is going
@@ -1616,6 +1618,10 @@ static inline void flush_free_hpage_work(struct hstate *h)
 	flush_work(&free_hpage_work);
 }
 
+/*
+ * atomic == true indicates called from a context where sleeping is
+ * not allowed.
+ */
 static void update_and_free_page(struct hstate *h, struct page *page,
 				 bool atomic)
 {
@@ -1625,7 +1631,8 @@ static void update_and_free_page(struct hstate *h, struct page *page,
 	}
 
 	/*
-	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap pages.
+	 * Defer freeing to avoid possible sleeping when allocating
+	 * vmemmap pages.
 	 *
 	 * Only call schedule_work() if hpage_freelist is previously
 	 * empty. Otherwise, schedule_work() had been called but the workfn
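The hunks above only touch comments. For readers unfamiliar with the
mechanism the reworded comment describes, a condensed, self-contained
sketch of the same deferral pattern follows (the demo_* names are
invented; the real implementation is hpage_freelist, free_hpage_work and
free_hpage_workfn() in mm/hugetlb.c): pages are pushed onto a lockless
list from any context, and a work item frees them later in process
context, where GFP_KERNEL vmemmap allocations are allowed to sleep.

#include <linux/kernel.h>
#include <linux/llist.h>
#include <linux/mm.h>
#include <linux/workqueue.h>

static LLIST_HEAD(demo_freelist);

static void demo_free_workfn(struct work_struct *work)
{
	/* Grab the whole list at once; later producers see an empty list. */
	struct llist_node *node = llist_del_all(&demo_freelist);

	/* Process context: freeing (and its GFP_KERNEL allocations) may sleep. */
	while (node) {
		struct page *page = container_of((struct address_space **)node,
						 struct page, mapping);

		node = node->next;
		page->mapping = NULL;
		/* ... actually free the huge page here ... */
	}
}
static DECLARE_WORK(demo_free_work, demo_free_workfn);

/* Callable from atomic context: only queues the page for later freeing. */
static void demo_defer_free(struct page *page)
{
	/* llist_add() returns true only if the list was previously empty. */
	if (llist_add((struct llist_node *)&page->mapping, &demo_freelist))
		schedule_work(&demo_free_work);
}

Only the producer that finds the list empty schedules the work item;
subsequent producers rely on the already-queued work, which is why the
work function drains the entire list with llist_del_all().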