From patchwork Tue Sep 10 14:06:25 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13798564
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: Ryan Roberts, David Hildenbrand, Barry Song, Hugh Dickins, Kefeng Wang
Subject: [PATCH] mm: set hugepage to false when anon mthp allocation
Date: Tue, 10 Sep 2024 22:06:25 +0800
Message-ID: <20240910140625.175700-1-wangkefeng.wang@huawei.com>
When the hugepage parameter of vma_alloc_folio() is true, it indicates that the allocation should only be tried on the preferred node, which is only appropriate for PMD_ORDER. For other large folio orders this could lead to many avoidable allocation failures. Fortunately, the hugepage parameter has been deprecated since commit ddc1a5cbc05d ("mempolicy: alloc_pages_mpol() for NUMA policy without vma"), so there is no effect on runtime behavior.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
Found this issue when backporting mTHP to an internal kernel that lacks ddc1a5cbc05d; mainline is unaffected. It is unclear why the hugepage parameter was retained; perhaps it should simply be removed from mainline?

 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index b84443e689a8..89a15858348a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4479,7 +4479,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	gfp = vma_thp_gfp_mask(vma);
 	while (orders) {
 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
-		folio = vma_alloc_folio(gfp, order, vma, addr, true);
+		folio = vma_alloc_folio(gfp, order, vma, addr, false);
 		if (folio) {
 			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
 				count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
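
To illustrate why a strict preferred-node-only policy hurts non-PMD orders, here is a user-space toy sketch (not kernel code: toy_alloc, highest_successful_order, and the per-node page counts are all made up for illustration). It models the mTHP fallback loop stepping down from the largest order, under a hypothetical allocator where hugepage=true forbids falling back to another node:

```c
#include <assert.h>
#include <stdbool.h>

#define PMD_ORDER 9

/* Toy model: two "nodes" with limited free pages. */
static int preferred_free_pages = 64;   /* preferred node: too small for order-9 */
static int fallback_free_pages = 1024;  /* another node with room to spare       */

/* Hypothetical stand-in for vma_alloc_folio(): with hugepage=true the
 * allocation is restricted to the preferred node (the PMD_ORDER-only
 * policy the commit message describes); with false it may fall back. */
static bool toy_alloc(int order, bool hugepage)
{
    int pages = 1 << order;
    if (pages <= preferred_free_pages)
        return true;                 /* preferred node suffices */
    if (hugepage)
        return false;                /* strict: no fallback allowed */
    return pages <= fallback_free_pages;
}

/* mTHP-style fallback loop: try the largest order first, step down. */
static int highest_successful_order(bool hugepage)
{
    for (int order = PMD_ORDER; order >= 0; order--)
        if (toy_alloc(order, hugepage))
            return order;
    return -1;
}
```

With hugepage=true, the toy allocator can only satisfy what fits on the preferred node (order 6 here, 64 pages), while with hugepage=false the full order-9 folio succeeds via fallback; the strictness buys nothing for sub-PMD orders and just shrinks the largest folio obtainable.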