From patchwork Wed Oct 2 06:54:54 2024
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 13819403
From: Namhyung Kim <namhyung@kernel.org>
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	LKML, bpf@vger.kernel.org, Andrew Morton, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka,
	Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	linux-mm@kvack.org, Arnaldo Carvalho de Melo
Subject: [PATCH v3 bpf-next 1/3] bpf: Add kmem_cache iterator
Date: Tue, 1 Oct 2024 23:54:54 -0700
Message-ID: <20241002065456.1580143-2-namhyung@kernel.org>
X-Mailer: git-send-email 2.46.1.824.gd892dcdcdd-goog
In-Reply-To: <20241002065456.1580143-1-namhyung@kernel.org>
References: <20241002065456.1580143-1-namhyung@kernel.org>
MIME-Version: 1.0

The new "kmem_cache" iterator traverses the list of slab caches and calls
the attached BPF program for each entry.  The program should check whether
the argument (ctx.s) is NULL before using it.

The iteration now grabs the slab_mutex only while it walks the slab_caches
list and releases it before running the BPF program.  The kmem_cache entry
is protected by a reference count while the program runs.

It includes the internal "mm/slab.h" header to access kmem_cache,
slab_caches and slab_mutex.  Hope that's ok with the mm folks.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
I've dropped the Acked-by tags from Roman and Vlastimil since the code has
changed: it no longer holds the slab_mutex while the BPF program runs and
it now manages the refcount.  Please review this change again!
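For context, a minimal BPF program consuming the iterator could look like
the sketch below.  It is not part of this patch; it assumes a vmlinux.h
generated from a kernel with this change, and everything except the
"iter/kmem_cache" section name and the ctx layout is just illustrative:

	/* kmem_cache_iter.bpf.c - illustrative sketch, not part of this patch */
	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	char _license[] SEC("license") = "GPL";

	SEC("iter/kmem_cache")
	int dump_kmem_cache(struct bpf_iter__kmem_cache *ctx)
	{
		struct seq_file *seq = ctx->meta->seq;
		struct kmem_cache *s = ctx->s;

		/* ctx->s is PTR_TO_BTF_ID_OR_NULL, so check for NULL first */
		if (!s)
			return 0;

		/* emit one line per slab cache into the seq_file */
		BPF_SEQ_PRINTF(seq, "%s: size=%u\n", s->name, s->size);
		return 0;
	}

Reading the iterator fd (or cat'ing a pinned iterator link) would then
print one line per slab cache.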
 include/linux/btf_ids.h      |   1 +
 kernel/bpf/Makefile          |   1 +
 kernel/bpf/kmem_cache_iter.c | 165 +++++++++++++++++++++++++++++++++++
 3 files changed, 167 insertions(+)
 create mode 100644 kernel/bpf/kmem_cache_iter.c

base-commit: 9502a7de5a61bec3bda841a830560c5d6d40ecac

diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
index c0e3e1426a82f5c4..139bdececdcfaefb 100644
--- a/include/linux/btf_ids.h
+++ b/include/linux/btf_ids.h
@@ -283,5 +283,6 @@ extern u32 btf_tracing_ids[];
 extern u32 bpf_cgroup_btf_id[];
 extern u32 bpf_local_storage_map_btf_id[];
 extern u32 btf_bpf_map_id[];
+extern u32 bpf_kmem_cache_btf_id[];
 
 #endif
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 9b9c151b5c826b31..105328f0b9c04e37 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -52,3 +52,4 @@ obj-$(CONFIG_BPF_PRELOAD) += preload/
 obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
 obj-$(CONFIG_BPF_SYSCALL) += btf_iter.o
 obj-$(CONFIG_BPF_SYSCALL) += btf_relocate.o
+obj-$(CONFIG_BPF_SYSCALL) += kmem_cache_iter.o
diff --git a/kernel/bpf/kmem_cache_iter.c b/kernel/bpf/kmem_cache_iter.c
new file mode 100644
index 0000000000000000..a77c08b82c6bc965
--- /dev/null
+++ b/kernel/bpf/kmem_cache_iter.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2024 Google */
+#include <linux/bpf.h>
+#include <linux/btf_ids.h>
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+
+#include "../../mm/slab.h" /* kmem_cache, slab_caches and slab_mutex */
+
+struct bpf_iter__kmem_cache {
+	__bpf_md_ptr(struct bpf_iter_meta *, meta);
+	__bpf_md_ptr(struct kmem_cache *, s);
+};
+
+static void *kmem_cache_iter_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	loff_t cnt = 0;
+	struct kmem_cache *s = NULL;
+
+	mutex_lock(&slab_mutex);
+
+	/*
+	 * Find an entry at the given position in the slab_caches list instead
+	 * of keeping a reference (of the last visited entry, if any) out of
+	 * slab_mutex.  It might miss something if one is deleted in the middle
+	 * while it releases the lock.  But it should be rare and there's not
+	 * much we can do about it.
+	 */
+	list_for_each_entry(s, &slab_caches, list) {
+		if (cnt == *pos) {
+			/*
+			 * Make sure this entry remains in the list by getting
+			 * a new reference count.  Note that boot_cache entries
+			 * have a negative refcount, so don't touch them.
+			 */
+			if (s->refcount > 0)
+				s->refcount++;
+			break;
+		}
+
+		cnt++;
+	}
+	mutex_unlock(&slab_mutex);
+
+	if (cnt != *pos)
+		return NULL;
+
+	++*pos;
+	return s;
+}
+
+static void kmem_cache_iter_seq_stop(struct seq_file *seq, void *v)
+{
+	struct bpf_iter_meta meta;
+	struct bpf_iter__kmem_cache ctx = {
+		.meta = &meta,
+		.s = v,
+	};
+	struct bpf_prog *prog;
+	bool destroy = false;
+
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, true);
+	if (prog)
+		bpf_iter_run_prog(prog, &ctx);
+
+	mutex_lock(&slab_mutex);
+	if (ctx.s && ctx.s->refcount > 0)
+		destroy = true;
+	mutex_unlock(&slab_mutex);
+
+	if (destroy)
+		kmem_cache_destroy(ctx.s);
+}
+
+static void *kmem_cache_iter_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct kmem_cache *s = v;
+	struct kmem_cache *next = NULL;
+	bool destroy = false;
+
+	++*pos;
+
+	mutex_lock(&slab_mutex);
+
+	if (list_last_entry(&slab_caches, struct kmem_cache, list) != s) {
+		next = list_next_entry(s, list);
+		if (next->refcount > 0)
+			next->refcount++;
+	}
+
+	/* Skip kmem_cache_destroy() for active entries */
+	if (s->refcount > 1)
+		s->refcount--;
+	else if (s->refcount == 1)
+		destroy = true;
+
+	mutex_unlock(&slab_mutex);
+
+	if (destroy)
+		kmem_cache_destroy(s);
+
+	return next;
+}
+
+static int kmem_cache_iter_seq_show(struct seq_file *seq, void *v)
+{
+	struct bpf_iter_meta meta;
+	struct bpf_iter__kmem_cache ctx = {
+		.meta = &meta,
+		.s = v,
+	};
+	struct bpf_prog *prog;
+	int ret = 0;
+
+	meta.seq = seq;
+	prog = bpf_iter_get_info(&meta, false);
+	if (prog)
+		ret = bpf_iter_run_prog(prog, &ctx);
+
+	return ret;
+}
+
+static const struct seq_operations kmem_cache_iter_seq_ops = {
+	.start	= kmem_cache_iter_seq_start,
+	.next	= kmem_cache_iter_seq_next,
+	.stop	= kmem_cache_iter_seq_stop,
+	.show	= kmem_cache_iter_seq_show,
+};
+
+BTF_ID_LIST_GLOBAL_SINGLE(bpf_kmem_cache_btf_id, struct, kmem_cache)
+
+static const struct bpf_iter_seq_info kmem_cache_iter_seq_info = {
+	.seq_ops		= &kmem_cache_iter_seq_ops,
+};
+
+static void bpf_iter_kmem_cache_show_fdinfo(const struct bpf_iter_aux_info *aux,
+					    struct seq_file *seq)
+{
+	seq_puts(seq, "kmem_cache iter\n");
+}
+
+DEFINE_BPF_ITER_FUNC(kmem_cache, struct bpf_iter_meta *meta,
+		     struct kmem_cache *s)
+
+static struct bpf_iter_reg bpf_kmem_cache_reg_info = {
+	.target			= "kmem_cache",
+	.feature		= BPF_ITER_RESCHED,
+	.show_fdinfo		= bpf_iter_kmem_cache_show_fdinfo,
+	.ctx_arg_info_size	= 1,
+	.ctx_arg_info		= {
+		{ offsetof(struct bpf_iter__kmem_cache, s),
+		  PTR_TO_BTF_ID_OR_NULL | PTR_TRUSTED },
+	},
+	.seq_info		= &kmem_cache_iter_seq_info,
+};
+
+static int __init bpf_kmem_cache_iter_init(void)
+{
+	bpf_kmem_cache_reg_info.ctx_arg_info[0].btf_id = bpf_kmem_cache_btf_id[0];
+	return bpf_iter_reg_target(&bpf_kmem_cache_reg_info);
+}
+
+late_initcall(bpf_kmem_cache_iter_init);
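
For completeness, reading the iterator from userspace only needs the usual
libbpf attach path.  The following is a rough sketch assuming the BPF
object from the earlier example was built as kmem_cache_iter.bpf.o; the
file and program names are hypothetical:

	/* illustrative loader, not part of this patch */
	#include <stdio.h>
	#include <unistd.h>
	#include <bpf/libbpf.h>
	#include <bpf/bpf.h>

	int main(void)
	{
		struct bpf_object *obj;
		struct bpf_program *prog;
		struct bpf_link *link;
		char buf[4096];
		int iter_fd;
		ssize_t n;

		obj = bpf_object__open_file("kmem_cache_iter.bpf.o", NULL);
		if (!obj || bpf_object__load(obj))
			return 1;

		prog = bpf_object__find_program_by_name(obj, "dump_kmem_cache");
		if (!prog)
			return 1;

		link = bpf_program__attach_iter(prog, NULL);
		if (!link)
			return 1;

		iter_fd = bpf_iter_create(bpf_link__fd(link));
		if (iter_fd < 0)
			return 1;

		/* each read() drives the seq_file walk over slab_caches */
		while ((n = read(iter_fd, buf, sizeof(buf))) > 0)
			fwrite(buf, 1, n, stdout);

		close(iter_fd);
		bpf_link__destroy(link);
		bpf_object__close(obj);
		return 0;
	}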