From patchwork Tue Jan 14 02:19:17 2025
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13938319
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org, peterz@infradead.org, vbabka@suse.cz, bigeasy@linutronix.de, rostedt@goodmis.org, houtao1@huawei.com, hannes@cmpxchg.org, shakeel.butt@linux.dev, mhocko@suse.com, willy@infradead.org, tglx@linutronix.de, jannh@google.com, tj@kernel.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next v4 1/6] mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation
Date: Mon, 13 Jan 2025 18:19:17 -0800
Message-Id: <20250114021922.92609-2-alexei.starovoitov@gmail.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: <20250114021922.92609-1-alexei.starovoitov@gmail.com>
References: <20250114021922.92609-1-alexei.starovoitov@gmail.com>
MIME-Version: 1.0
From: Alexei Starovoitov

Tracing BPF programs execute from tracepoints and kprobes where the running
context is unknown, but they need to request additional memory. The prior
workarounds were using pre-allocated memory and BPF-specific freelists to
satisfy such allocation requests. Instead, introduce a gfpflags_allow_spinning()
condition that signals to the allocator that the running context is unknown.
Then rely on the percpu free list of pages to allocate a page.
rmqueue_pcplist() should be able to pop a page from it. If that fails (due to
IRQ re-entrancy or the list being empty), try_alloc_pages() attempts to
spin_trylock zone->lock and refill the percpu freelist as normal.

A BPF program may execute with IRQs disabled, and zone->lock is a sleeping
lock in RT, so trylock is the only option. In theory we could introduce a
percpu reentrance counter and increment it every time
spin_lock_irqsave(&zone->lock, flags) is used, but we cannot rely on it:
even if this CPU is not in the page_alloc path, spin_lock_irqsave() is not
safe, since the BPF prog might be called from a tracepoint where preemption
is disabled. So trylock only.

Note, free_page and memcg are not yet taught about the
gfpflags_allow_spinning() condition. That support comes in the next patches.

This is a first step towards supporting BPF requirements in SLUB and getting
rid of bpf_mem_alloc. That goal was discussed at LSFMM:
https://lwn.net/Articles/974138/
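To make the intended usage concrete, here is a hedged sketch of a caller (not
taken from this series; the helper name, the order-0 request and the
page_address() conversion are illustrative assumptions):

	/*
	 * Hypothetical caller sketch: allocate one zeroed page from a context
	 * that may be NMI, hard IRQ or a tracepoint handler.
	 * try_alloc_pages() never spins on a lock, so NULL is a normal outcome
	 * that the caller must absorb. Freeing the page still has to happen
	 * from a context where the regular free path is safe, until later
	 * patches in the series extend that side as well.
	 */
	static void *grab_scratch_page(void)
	{
		struct page *page;

		page = try_alloc_pages(NUMA_NO_NODE, 0);	/* order-0, local node */
		if (!page)
			return NULL;	/* pcp empty and zone->lock contended, or OOM */

		return page_address(page);	/* already zeroed thanks to __GFP_ZERO */
	}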
Signed-off-by: Alexei Starovoitov
Acked-by: Michal Hocko
---
 include/linux/gfp.h | 22 ++++++++++++
 mm/internal.h       |  1 +
 mm/page_alloc.c     | 85 +++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 105 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index b0fe9f62d15b..b41bb6e01781 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -39,6 +39,25 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
 	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
 }
 
+static inline bool gfpflags_allow_spinning(const gfp_t gfp_flags)
+{
+	/*
+	 * !__GFP_DIRECT_RECLAIM -> direct reclaim is not allowed.
+	 * !__GFP_KSWAPD_RECLAIM -> it's not safe to wake up kswapd.
+	 * All GFP_* flags including GFP_NOWAIT use one or both flags.
+	 * try_alloc_pages() is the only API that doesn't specify either flag.
+	 *
+	 * This is stronger than GFP_NOWAIT or GFP_ATOMIC because
+	 * those are guaranteed to never block on a sleeping lock.
+	 * Here we are enforcing that the allocation doesn't ever spin
+	 * on any locks (i.e. only trylocks). There is no high-level
+	 * GFP_$FOO flag for this use in try_alloc_pages() as the
+	 * regular page allocator doesn't fully support this
+	 * allocation mode.
+	 */
+	return !(gfp_flags & __GFP_RECLAIM);
+}
+
 #ifdef CONFIG_HIGHMEM
 #define OPT_ZONE_HIGHMEM ZONE_HIGHMEM
 #else
@@ -347,6 +366,9 @@ static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
 }
 #define alloc_page_vma(...)	alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__))
 
+struct page *try_alloc_pages_noprof(int nid, unsigned int order);
+#define try_alloc_pages(...)	alloc_hooks(try_alloc_pages_noprof(__VA_ARGS__))
+
 extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);
 #define __get_free_pages(...)	alloc_hooks(get_free_pages_noprof(__VA_ARGS__))
 
diff --git a/mm/internal.h b/mm/internal.h
index cb8d8e8e3ffa..5454fa610aac 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1174,6 +1174,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_NOFRAGMENT	  0x0
 #endif
 #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
+#define ALLOC_TRYLOCK		0x400 /* Only use spin_trylock in allocation path */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
 
 /* Flags that allow allocations below the min watermark. */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1cb4b8c8886d..0f4be88ff131 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2304,7 +2304,11 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	unsigned long flags;
 	int i;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	if (!spin_trylock_irqsave(&zone->lock, flags)) {
+		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+			return 0;
+		spin_lock_irqsave(&zone->lock, flags);
+	}
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype,
 								alloc_flags);
@@ -2904,7 +2908,11 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 
 	do {
 		page = NULL;
-		spin_lock_irqsave(&zone->lock, flags);
+		if (!spin_trylock_irqsave(&zone->lock, flags)) {
+			if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+				return NULL;
+			spin_lock_irqsave(&zone->lock, flags);
+		}
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
@@ -4509,7 +4517,8 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 	might_alloc(gfp_mask);
 
-	if (should_fail_alloc_page(gfp_mask, order))
+	if (!(*alloc_flags & ALLOC_TRYLOCK) &&
+	    should_fail_alloc_page(gfp_mask, order))
 		return false;
 
 	*alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, *alloc_flags);
@@ -7023,3 +7032,73 @@ static bool __free_unaccepted(struct page *page)
 }
 
 #endif /* CONFIG_UNACCEPTED_MEMORY */
+
+struct page *try_alloc_pages_noprof(int nid, unsigned int order)
+{
+	/*
+	 * Do not specify __GFP_DIRECT_RECLAIM, since direct reclaim is not allowed.
+	 * Do not specify __GFP_KSWAPD_RECLAIM either, since wake up of kswapd
+	 * is not safe in arbitrary context.
+	 *
+	 * These two are the conditions for gfpflags_allow_spinning() being true.
+	 *
+	 * Specify __GFP_NOWARN since failing try_alloc_pages() is not a reason
+	 * to warn. Also warn would trigger printk() which is unsafe from
+	 * various contexts. We cannot use printk_deferred_enter() to mitigate,
+	 * since the running context is unknown.
+	 *
+	 * Specify __GFP_ZERO to make sure that call to kmsan_alloc_page() below
+	 * is safe in any context. Also zeroing the page is mandatory for
+	 * BPF use cases.
+	 *
+	 * Though __GFP_NOMEMALLOC is not checked in the code path below,
+	 * specify it here to highlight that try_alloc_pages()
+	 * doesn't want to deplete reserves.
+	 */
+	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO | __GFP_NOMEMALLOC;
+	unsigned int alloc_flags = ALLOC_TRYLOCK;
+	struct alloc_context ac = { };
+	struct page *page;
+
+	/*
+	 * In RT spin_trylock() may call raw_spin_lock() which is unsafe in NMI.
+	 * If spin_trylock() is called from hard IRQ the current task may be
+	 * waiting for one rt_spin_lock, but rt_spin_trylock() will mark the
+	 * task as the owner of another rt_spin_lock which will confuse PI
+	 * logic, so return immediately if called from hard IRQ or NMI.
+	 *
+	 * Note, irqs_disabled() case is ok. This function can be called
+	 * from raw_spin_lock_irqsave region.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
+		return NULL;
+	if (!pcp_allowed_order(order))
+		return NULL;
+
+#ifdef CONFIG_UNACCEPTED_MEMORY
+	/* Bailout, since try_to_accept_memory_one() needs to take a lock */
+	if (has_unaccepted_memory())
+		return NULL;
+#endif
+	/* Bailout, since _deferred_grow_zone() needs to take a lock */
+	if (deferred_pages_enabled())
+		return NULL;
+
+	if (nid == NUMA_NO_NODE)
+		nid = numa_node_id();
+
+	prepare_alloc_pages(alloc_gfp, order, nid, NULL, &ac,
+			    &alloc_gfp, &alloc_flags);
+
+	/*
+	 * Best effort allocation from percpu free list.
+	 * If it's empty attempt to spin_trylock zone->lock.
+	 */
+	page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
+
+	/* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
+
+	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
+	kmsan_alloc_page(page, order, alloc_gfp);
+	return page;
+}
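The zone->lock changes above all follow the same trylock-first shape. Purely
as an illustration (the patch intentionally open-codes it; the helper below
is not part of the kernel and its name is made up), the pattern reads as:

	/*
	 * Illustrative sketch of the locking pattern applied in rmqueue_bulk()
	 * and rmqueue_buddy(): opportunistic callers (ALLOC_TRYLOCK) bail out
	 * instead of spinning, everyone else falls back to the regular
	 * spin_lock_irqsave().
	 */
	static bool zone_lock_or_bail(struct zone *zone, unsigned int alloc_flags,
				      unsigned long *flags)
	{
		if (spin_trylock_irqsave(&zone->lock, *flags))
			return true;
		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
			return false;	/* unknown context: never spin on zone->lock */
		spin_lock_irqsave(&zone->lock, *flags);
		return true;
	}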