From patchwork Sat Feb 22 02:44:23 2025
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13986514
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org, peterz@infradead.org,
    vbabka@suse.cz, bigeasy@linutronix.de, rostedt@goodmis.org, houtao1@huawei.com,
    hannes@cmpxchg.org, shakeel.butt@linux.dev, mhocko@suse.com, willy@infradead.org,
    tglx@linutronix.de, jannh@google.com, tj@kernel.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next v9 2/6] mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation
Date: Fri, 21 Feb 2025 18:44:23 -0800
Message-Id: <20250222024427.30294-3-alexei.starovoitov@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250222024427.30294-1-alexei.starovoitov@gmail.com>
References: <20250222024427.30294-1-alexei.starovoitov@gmail.com>
From: Alexei Starovoitov

Tracing BPF programs execute from tracepoints and kprobes where running
context is unknown, but they need to request additional memory. The prior
workarounds were using pre-allocated memory and BPF specific freelists to
satisfy such allocation requests. Instead, introduce a
gfpflags_allow_spinning() condition that signals to the allocator that the
running context is unknown.
Then rely on the percpu free list of pages to allocate a page.
try_alloc_pages() -> get_page_from_freelist() -> rmqueue() ->
rmqueue_pcplist() will spin_trylock to grab the page from the percpu free
list. If it fails (due to re-entrancy or the list being empty) then
rmqueue_bulk()/rmqueue_buddy() will attempt to spin_trylock zone->lock and
grab the page from there. spin_trylock() is not safe in PREEMPT_RT when in
NMI or in hard IRQ. Bail out early in such case.

The support for gfpflags_allow_spinning() mode for free_page and memcg
comes in the next patches.

This is a first step towards supporting BPF requirements in SLUB and
getting rid of bpf_mem_alloc. That goal was discussed at LSFMM:
https://lwn.net/Articles/974138/
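As a quick illustration of the calling convention described above, here is a
minimal caller-side sketch (not part of this patch; grab_scratch_page() is an
invented name, while try_alloc_pages(), page_address() and NUMA_NO_NODE are
the real interfaces being exercised):

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical helper: may be reached from tracepoints, kprobes, NMI, etc. */
static void *grab_scratch_page(void)
{
	/* Best effort: only trylocks are taken, so this can fail easily. */
	struct page *page = try_alloc_pages(NUMA_NO_NODE, 0);

	if (!page)
		return NULL;	/* callers must tolerate failure */

	/* The page is already zeroed, since __GFP_ZERO is implied. */
	return page_address(page);
}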
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
Acked-by: Sebastian Andrzej Siewior
Reviewed-by: Shakeel Butt
Signed-off-by: Alexei Starovoitov
---
 include/linux/gfp.h |  22 ++++++++++
 lib/stackdepot.c    |   5 ++-
 mm/internal.h       |   1 +
 mm/page_alloc.c     | 104 ++++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 127 insertions(+), 5 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 6bb1a5a7a4ae..5d9ee78c74e4 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -39,6 +39,25 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
 	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
 }
 
+static inline bool gfpflags_allow_spinning(const gfp_t gfp_flags)
+{
+	/*
+	 * !__GFP_DIRECT_RECLAIM -> direct reclaim is not allowed.
+	 * !__GFP_KSWAPD_RECLAIM -> it's not safe to wake up kswapd.
+	 * All GFP_* flags including GFP_NOWAIT use one or both flags.
+	 * try_alloc_pages() is the only API that doesn't specify either flag.
+	 *
+	 * This is stronger than GFP_NOWAIT or GFP_ATOMIC because
+	 * those are guaranteed to never block on a sleeping lock.
+	 * Here we are enforcing that the allocation doesn't ever spin
+	 * on any locks (i.e. only trylocks). There is no high level
+	 * GFP_$FOO flag for this use in try_alloc_pages() as the
+	 * regular page allocator doesn't fully support this
+	 * allocation mode.
+	 */
+	return !(gfp_flags & __GFP_RECLAIM);
+}
+
 #ifdef CONFIG_HIGHMEM
 #define OPT_ZONE_HIGHMEM ZONE_HIGHMEM
 #else
@@ -335,6 +354,9 @@ static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
 }
 #define alloc_page_vma(...)	alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__))
 
+struct page *try_alloc_pages_noprof(int nid, unsigned int order);
+#define try_alloc_pages(...)	alloc_hooks(try_alloc_pages_noprof(__VA_ARGS__))
+
 extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);
 #define __get_free_pages(...)	alloc_hooks(get_free_pages_noprof(__VA_ARGS__))
 
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 245d5b416699..377194969e61 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -591,7 +591,8 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
 	depot_stack_handle_t handle = 0;
 	struct page *page = NULL;
 	void *prealloc = NULL;
-	bool can_alloc = depot_flags & STACK_DEPOT_FLAG_CAN_ALLOC;
+	bool allow_spin = gfpflags_allow_spinning(alloc_flags);
+	bool can_alloc = (depot_flags & STACK_DEPOT_FLAG_CAN_ALLOC) && allow_spin;
 	unsigned long flags;
 	u32 hash;
 
@@ -630,7 +631,7 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
 		prealloc = page_address(page);
 	}
 
-	if (in_nmi()) {
+	if (in_nmi() || !allow_spin) {
 		/* We can never allocate in NMI context. */
 		WARN_ON_ONCE(can_alloc);
 		/* Best effort; bail if we fail to take the lock. */
diff --git a/mm/internal.h b/mm/internal.h
index 109ef30fee11..10a8b4b3b86e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1187,6 +1187,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_NOFRAGMENT	  0x0
 #endif
 #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
+#define ALLOC_TRYLOCK		0x400 /* Only use spin_trylock in allocation path */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
 
 /* Flags that allow allocations below the min watermark. */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 579789600a3c..1f2a4e1c70ae 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2307,7 +2307,11 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	unsigned long flags;
 	int i;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	if (!spin_trylock_irqsave(&zone->lock, flags)) {
+		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+			return 0;
+		spin_lock_irqsave(&zone->lock, flags);
+	}
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype,
 								alloc_flags);
@@ -2907,7 +2911,11 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 
 	do {
 		page = NULL;
-		spin_lock_irqsave(&zone->lock, flags);
+		if (!spin_trylock_irqsave(&zone->lock, flags)) {
+			if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+				return NULL;
+			spin_lock_irqsave(&zone->lock, flags);
+		}
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
@@ -4511,7 +4519,12 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 	might_alloc(gfp_mask);
 
-	if (should_fail_alloc_page(gfp_mask, order))
+	/*
+	 * Don't invoke should_fail logic, since it may call
+	 * get_random_u32() and printk() which need to spin_lock.
+	 */
+	if (!(*alloc_flags & ALLOC_TRYLOCK) &&
+	    should_fail_alloc_page(gfp_mask, order))
 		return false;
 
 	*alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, *alloc_flags);
@@ -7071,3 +7084,88 @@ static bool __free_unaccepted(struct page *page)
 }
 
 #endif /* CONFIG_UNACCEPTED_MEMORY */
+
+/**
+ * try_alloc_pages - opportunistic reentrant allocation from any context
+ * @nid - node to allocate from
+ * @order - allocation order size
+ *
+ * Allocates pages of a given order from the given node. This is safe to
+ * call from any context (from atomic, NMI, and also reentrant
+ * allocator -> tracepoint -> try_alloc_pages_noprof).
+ * Allocation is best effort and expected to fail easily so nobody should
+ * rely on the success. Failures are not reported via warn_alloc().
+ * See always fail conditions below.
+ *
+ * Return: allocated page or NULL on failure.
+ */
+struct page *try_alloc_pages_noprof(int nid, unsigned int order)
+{
+	/*
+	 * Do not specify __GFP_DIRECT_RECLAIM, since direct reclaim is not allowed.
+	 * Do not specify __GFP_KSWAPD_RECLAIM either, since wake up of kswapd
+	 * is not safe in arbitrary context.
+	 *
+	 * These two are the conditions for gfpflags_allow_spinning() being true.
+	 *
+	 * Specify __GFP_NOWARN since failing try_alloc_pages() is not a reason
+	 * to warn. Also warn would trigger printk() which is unsafe from
+	 * various contexts. We cannot use printk_deferred_enter() to mitigate,
+	 * since the running context is unknown.
+	 *
+	 * Specify __GFP_ZERO to make sure that call to kmsan_alloc_page() below
+	 * is safe in any context. Also zeroing the page is mandatory for
+	 * BPF use cases.
+	 *
+	 * Though __GFP_NOMEMALLOC is not checked in the code path below,
+	 * specify it here to highlight that try_alloc_pages()
+	 * doesn't want to deplete reserves.
+	 */
+	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC;
+	unsigned int alloc_flags = ALLOC_TRYLOCK;
+	struct alloc_context ac = { };
+	struct page *page;
+
+	/*
+	 * In PREEMPT_RT spin_trylock() will call raw_spin_lock() which is
+	 * unsafe in NMI. If spin_trylock() is called from hard IRQ the current
+	 * task may be waiting for one rt_spin_lock, but rt_spin_trylock() will
+	 * mark the task as the owner of another rt_spin_lock which will
+	 * confuse PI logic, so return immediately if called from hard IRQ or
+	 * NMI.
+	 *
+	 * Note, irqs_disabled() case is ok. This function can be called
+	 * from raw_spin_lock_irqsave region.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
+		return NULL;
+	if (!pcp_allowed_order(order))
+		return NULL;
+
+#ifdef CONFIG_UNACCEPTED_MEMORY
+	/* Bailout, since try_to_accept_memory_one() needs to take a lock */
+	if (has_unaccepted_memory())
+		return NULL;
+#endif
+	/* Bailout, since _deferred_grow_zone() needs to take a lock */
+	if (deferred_pages_enabled())
+		return NULL;
+
+	if (nid == NUMA_NO_NODE)
+		nid = numa_node_id();
+
+	prepare_alloc_pages(alloc_gfp, order, nid, NULL, &ac,
+			    &alloc_gfp, &alloc_flags);
+
+	/*
+	 * Best effort allocation from percpu free list.
+	 * If it's empty attempt to spin_trylock zone->lock.
+	 */
+	page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
+
+	/* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
+
+	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
+	kmsan_alloc_page(page, order, alloc_gfp);
+	return page;
+}
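
For completeness, here is a minimal sketch of how other code could consume the
new gfpflags_allow_spinning() predicate, in the spirit of the stackdepot hunk
above (illustrative only, not part of this patch; struct demo_cache and
demo_refill() are invented names, only gfpflags_allow_spinning() and the
spinlock primitives are real kernel APIs):

#include <linux/gfp.h>
#include <linux/spinlock.h>

struct demo_cache {
	spinlock_t lock;
	/* ... cached objects ... */
};

static bool demo_refill(struct demo_cache *c, gfp_t gfp)
{
	unsigned long flags;

	if (gfpflags_allow_spinning(gfp)) {
		/* Known-safe context: spinning on the lock is acceptable. */
		spin_lock_irqsave(&c->lock, flags);
	} else if (!spin_trylock_irqsave(&c->lock, flags)) {
		/* Unknown context (try_alloc_pages()-style): never spin. */
		return false;
	}

	/* ... refill the cache while holding c->lock ... */

	spin_unlock_irqrestore(&c->lock, flags);
	return true;
}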