From patchwork Thu Jan 12 15:53:25 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yafang Shao <laoar.shao@gmail.com>
X-Patchwork-Id: 13098274
From: Yafang Shao <laoar.shao@gmail.com>
To: 42.hyeyoo@gmail.com, vbabka@suse.cz, ast@kernel.org, daniel@iogearbox.net,
	andrii@kernel.org, kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
	haoluo@google.com, jolsa@kernel.org, tj@kernel.org, dennis@kernel.org,
	cl@linux.com, akpm@linux-foundation.org, penberg@kernel.org,
	rientjes@google.com, iamjoonsoo.kim@lge.com, roman.gushchin@linux.dev
Cc: linux-mm@kvack.org, bpf@vger.kernel.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [RFC PATCH bpf-next v2 10/11] bpf: add and use bpf map free helpers
Date: Thu, 12 Jan 2023 15:53:25 +0000
Message-Id: <20230112155326.26902-11-laoar.shao@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20230112155326.26902-1-laoar.shao@gmail.com>
References: <20230112155326.26902-1-laoar.shao@gmail.com>

Introduce some new helpers to free bpf memory, and use them instead of the
generic free helpers, so that the freeing of bpf memory can be tracked in
these helpers in the future.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/bpf.h            | 19 +++++++++++++++++++
 kernel/bpf/arraymap.c          |  4 ++--
 kernel/bpf/bpf_cgrp_storage.c  |  2 +-
 kernel/bpf/bpf_inode_storage.c |  2 +-
 kernel/bpf/bpf_local_storage.c | 20 ++++++++++----------
 kernel/bpf/bpf_task_storage.c  |  2 +-
 kernel/bpf/cpumap.c            | 13 ++++++-------
 kernel/bpf/devmap.c            | 10 ++++++----
 kernel/bpf/hashtab.c           |  8 ++++----
 kernel/bpf/helpers.c           |  2 +-
 kernel/bpf/local_storage.c     | 12 ++++++------
 kernel/bpf/lpm_trie.c          | 14 +++++++-------
 net/core/bpf_sk_storage.c      |  4 ++--
 net/core/sock_map.c            |  2 +-
 net/xdp/xskmap.c               |  2 +-
 15 files changed, 68 insertions(+), 48 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index fb14cc6..17c218e 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1869,6 +1869,24 @@ int generic_map_delete_batch(struct bpf_map *map,
 struct bpf_map *bpf_map_get_curr_or_next(u32 *id);
 struct bpf_prog *bpf_prog_get_curr_or_next(u32 *id);
+
+static inline void bpf_map_kfree(const void *ptr)
+{
+	kfree(ptr);
+}
+
+static inline void bpf_map_kvfree(const void *ptr)
+{
+	kvfree(ptr);
+}
+
+static inline void bpf_map_free_percpu(void __percpu *ptr)
+{
+	free_percpu(ptr);
+}
+
+#define bpf_map_kfree_rcu(ptr, rhf...)	kvfree_rcu(ptr, ## rhf)
+
 
 #ifdef CONFIG_MEMCG_KMEM
 void *bpf_map_kmalloc_node(const struct bpf_map *map, size_t size, gfp_t flags,
 			   int node);
@@ -1877,6 +1895,7 @@ void *bpf_map_kvcalloc(struct bpf_map *map, size_t n, size_t size,
 		       gfp_t flags);
 void __percpu *bpf_map_alloc_percpu(const struct bpf_map *map, size_t size,
 				    size_t align, gfp_t flags);
+
 #else
 static inline void *
 bpf_map_kmalloc_node(const struct bpf_map *map, size_t size, gfp_t flags,
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index e64a417..d218bf0 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -24,7 +24,7 @@ static void bpf_array_free_percpu(struct bpf_array *array)
 	int i;
 
 	for (i = 0; i < array->map.max_entries; i++) {
-		free_percpu(array->pptrs[i]);
+		bpf_map_free_percpu(array->pptrs[i]);
 		cond_resched();
 	}
 }
@@ -1132,7 +1132,7 @@ static void prog_array_map_free(struct bpf_map *map)
 		list_del_init(&elem->list);
 		kfree(elem);
 	}
-	kfree(aux);
+	bpf_map_kfree(aux);
 	fd_array_map_free(map);
 }
 
diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
index 6cdf6d9..d0ac4eb 100644
--- a/kernel/bpf/bpf_cgrp_storage.c
+++ b/kernel/bpf/bpf_cgrp_storage.c
@@ -64,7 +64,7 @@ void bpf_cgrp_storage_free(struct cgroup *cgroup)
 	rcu_read_unlock();
 
 	if (free_cgroup_storage)
-		kfree_rcu(local_storage, rcu);
+		bpf_map_kfree_rcu(local_storage, rcu);
 }
 
 static struct bpf_local_storage_data *
diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
index 05f4c66..0297f36 100644
--- a/kernel/bpf/bpf_inode_storage.c
+++ b/kernel/bpf/bpf_inode_storage.c
@@ -78,7 +78,7 @@ void bpf_inode_storage_free(struct inode *inode)
 	rcu_read_unlock();
 
 	if (free_inode_storage)
-		kfree_rcu(local_storage, rcu);
+		bpf_map_kfree_rcu(local_storage, rcu);
 }
 
 static void *bpf_fd_inode_storage_lookup_elem(struct bpf_map *map, void *key)
diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index 35f4138..2fd79a1 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -93,9 +93,9 @@ void bpf_local_storage_free_rcu(struct rcu_head *rcu)
 	 */
 	local_storage = container_of(rcu, struct bpf_local_storage, rcu);
 	if (rcu_trace_implies_rcu_gp())
-		kfree(local_storage);
+		bpf_map_kfree(local_storage);
 	else
-		kfree_rcu(local_storage, rcu);
+		bpf_map_kfree_rcu(local_storage, rcu);
 }
 
 static void bpf_selem_free_rcu(struct rcu_head *rcu)
@@ -104,9 +104,9 @@ static void bpf_selem_free_rcu(struct rcu_head *rcu)
 
 	selem = container_of(rcu, struct bpf_local_storage_elem, rcu);
 	if (rcu_trace_implies_rcu_gp())
-		kfree(selem);
+		bpf_map_kfree(selem);
 	else
-		kfree_rcu(selem, rcu);
+		bpf_map_kfree_rcu(selem, rcu);
 }
 
 /* local_storage->lock must be held and selem->local_storage == local_storage.
@@ -162,7 +162,7 @@ static bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_stor
 	if (use_trace_rcu)
 		call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu);
 	else
-		kfree_rcu(selem, rcu);
+		bpf_map_kfree_rcu(selem, rcu);
 
 	return free_local_storage;
 }
@@ -191,7 +191,7 @@ static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem,
 			call_rcu_tasks_trace(&local_storage->rcu,
 					     bpf_local_storage_free_rcu);
 		else
-			kfree_rcu(local_storage, rcu);
+			bpf_map_kfree_rcu(local_storage, rcu);
 	}
 }
 
@@ -358,7 +358,7 @@ int bpf_local_storage_alloc(void *owner,
 	return 0;
 
 uncharge:
-	kfree(storage);
+	bpf_map_kfree(storage);
 	mem_uncharge(smap, owner, sizeof(*storage));
 	return err;
 }
@@ -402,7 +402,7 @@ struct bpf_local_storage_data *
 
 		err = bpf_local_storage_alloc(owner, smap, selem, gfp_flags);
 		if (err) {
-			kfree(selem);
+			bpf_map_kfree(selem);
 			mem_uncharge(smap, owner, smap->elem_size);
 			return ERR_PTR(err);
 		}
@@ -496,7 +496,7 @@ struct bpf_local_storage_data *
 	raw_spin_unlock_irqrestore(&local_storage->lock, flags);
 	if (selem) {
 		mem_uncharge(smap, owner, smap->elem_size);
-		kfree(selem);
+		bpf_map_kfree(selem);
 	}
 	return ERR_PTR(err);
 }
@@ -713,6 +713,6 @@ void bpf_local_storage_map_free(struct bpf_map *map,
 	 */
 	synchronize_rcu();
 
-	kvfree(smap->buckets);
+	bpf_map_kvfree(smap->buckets);
 	bpf_map_area_free(smap);
 }
diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
index 1e48605..7287b02 100644
--- a/kernel/bpf/bpf_task_storage.c
+++ b/kernel/bpf/bpf_task_storage.c
@@ -91,7 +91,7 @@ void bpf_task_storage_free(struct task_struct *task)
 	rcu_read_unlock();
 
 	if (free_task_storage)
-		kfree_rcu(local_storage, rcu);
+		bpf_map_kfree_rcu(local_storage, rcu);
 }
 
 static void *bpf_pid_task_storage_lookup_elem(struct bpf_map *map, void *key)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index e0b2d01..3470c13 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -164,8 +164,8 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
 		/* The queue should be empty at this point */
 		__cpu_map_ring_cleanup(rcpu->queue);
 		ptr_ring_cleanup(rcpu->queue, NULL);
-		kfree(rcpu->queue);
-		kfree(rcpu);
+		bpf_map_kfree(rcpu->queue);
+		bpf_map_kfree(rcpu);
 	}
 }
 
@@ -484,11 +484,11 @@ static int __cpu_map_load_bpf_program(struct bpf_cpu_map_entry *rcpu,
 free_ptr_ring:
 	ptr_ring_cleanup(rcpu->queue, NULL);
 free_queue:
-	kfree(rcpu->queue);
+	bpf_map_kfree(rcpu->queue);
 free_bulkq:
-	free_percpu(rcpu->bulkq);
+	bpf_map_free_percpu(rcpu->bulkq);
 free_rcu:
-	kfree(rcpu);
+	bpf_map_kfree(rcpu);
 	return NULL;
 }
 
@@ -502,8 +502,7 @@ static void __cpu_map_entry_free(struct rcu_head *rcu)
 	 * find this entry.
 	 */
 	rcpu = container_of(rcu, struct bpf_cpu_map_entry, rcu);
-
-	free_percpu(rcpu->bulkq);
+	bpf_map_free_percpu(rcpu->bulkq);
 	/* Cannot kthread_stop() here, last put free rcpu resources */
 	put_cpu_map_entry(rcpu);
 }
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index d01e4c5..be10c47 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -218,7 +218,7 @@ static void dev_map_free(struct bpf_map *map)
 				if (dev->xdp_prog)
 					bpf_prog_put(dev->xdp_prog);
 				dev_put(dev->dev);
-				kfree(dev);
+				bpf_map_kfree(dev);
 			}
 		}
 
@@ -234,7 +234,7 @@ static void dev_map_free(struct bpf_map *map)
 			if (dev->xdp_prog)
 				bpf_prog_put(dev->xdp_prog);
 			dev_put(dev->dev);
-			kfree(dev);
+			bpf_map_kfree(dev);
 		}
 
 		bpf_map_area_free(dtab->netdev_map);
@@ -791,12 +791,14 @@ static void *dev_map_hash_lookup_elem(struct bpf_map *map, void *key)
 static void __dev_map_entry_free(struct rcu_head *rcu)
 {
 	struct bpf_dtab_netdev *dev;
+	struct bpf_dtab *dtab;
 
 	dev = container_of(rcu, struct bpf_dtab_netdev, rcu);
 	if (dev->xdp_prog)
 		bpf_prog_put(dev->xdp_prog);
 	dev_put(dev->dev);
-	kfree(dev);
+	dtab = dev->dtab;
+	bpf_map_kfree(dev);
 }
 
 static int dev_map_delete_elem(struct bpf_map *map, void *key)
@@ -881,7 +883,7 @@ static struct bpf_dtab_netdev *__dev_map_alloc_node(struct net *net,
 err_put_dev:
 	dev_put(dev->dev);
 err_out:
-	kfree(dev);
+	bpf_map_kfree(dev);
 	return ERR_PTR(-EINVAL);
 }
 
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 5aa2b55..1047788 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -266,7 +266,7 @@ static void htab_free_elems(struct bpf_htab *htab)
 
 		pptr = htab_elem_get_ptr(get_htab_elem(htab, i),
 					 htab->map.key_size);
-		free_percpu(pptr);
+		bpf_map_free_percpu(pptr);
 		cond_resched();
 	}
 free_elems:
@@ -584,7 +584,7 @@ static struct bpf_map *htab_map_alloc(union bpf_attr *attr)
 	if (htab->use_percpu_counter)
 		percpu_counter_destroy(&htab->pcount);
 	for (i = 0; i < HASHTAB_MAP_LOCK_COUNT; i++)
-		free_percpu(htab->map_locked[i]);
+		bpf_map_free_percpu(htab->map_locked[i]);
 	bpf_map_area_free(htab->buckets);
 	bpf_mem_alloc_destroy(&htab->pcpu_ma);
 	bpf_mem_alloc_destroy(&htab->ma);
@@ -1511,14 +1511,14 @@ static void htab_map_free(struct bpf_map *map)
 		prealloc_destroy(htab);
 	}
 
-	free_percpu(htab->extra_elems);
+	bpf_map_free_percpu(htab->extra_elems);
 	bpf_map_area_free(htab->buckets);
 	bpf_mem_alloc_destroy(&htab->pcpu_ma);
 	bpf_mem_alloc_destroy(&htab->ma);
 	if (htab->use_percpu_counter)
 		percpu_counter_destroy(&htab->pcount);
 	for (i = 0; i < HASHTAB_MAP_LOCK_COUNT; i++)
-		free_percpu(htab->map_locked[i]);
+		bpf_map_free_percpu(htab->map_locked[i]);
 	lockdep_unregister_key(&htab->lockdep_key);
 	bpf_map_area_free(htab);
 }
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 458db2d..49b0040 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1383,7 +1383,7 @@ void bpf_timer_cancel_and_free(void *val)
 	 */
 	if (this_cpu_read(hrtimer_running) != t)
 		hrtimer_cancel(&t->timer);
-	kfree(t);
+	bpf_map_kfree(t);
 }
 
 BPF_CALL_2(bpf_kptr_xchg, void *, map_value, void *, ptr)
diff --git a/kernel/bpf/local_storage.c b/kernel/bpf/local_storage.c
index e90d9f6..ed5cf5b 100644
--- a/kernel/bpf/local_storage.c
+++ b/kernel/bpf/local_storage.c
@@ -174,7 +174,7 @@ static int cgroup_storage_update_elem(struct bpf_map *map, void *key,
 	check_and_init_map_value(map, new->data);
 
 	new = xchg(&storage->buf, new);
-	kfree_rcu(new, rcu);
+	bpf_map_kfree_rcu(new, rcu);
 
 	return 0;
 }
@@ -526,7 +526,7 @@ struct bpf_cgroup_storage *bpf_cgroup_storage_alloc(struct bpf_prog *prog,
 	return storage;
 
 enomem:
-	kfree(storage);
+	bpf_map_kfree(storage);
 	return ERR_PTR(-ENOMEM);
 }
 
@@ -535,8 +535,8 @@ static void free_shared_cgroup_storage_rcu(struct rcu_head *rcu)
 	struct bpf_cgroup_storage *storage =
 		container_of(rcu, struct bpf_cgroup_storage, rcu);
 
-	kfree(storage->buf);
-	kfree(storage);
+	bpf_map_kfree(storage->buf);
+	bpf_map_kfree(storage);
 }
 
 static void free_percpu_cgroup_storage_rcu(struct rcu_head *rcu)
@@ -544,8 +544,8 @@ static void free_percpu_cgroup_storage_rcu(struct rcu_head *rcu)
 	struct bpf_cgroup_storage *storage =
 		container_of(rcu, struct bpf_cgroup_storage, rcu);
 
-	free_percpu(storage->percpu_buf);
-	kfree(storage);
+	bpf_map_free_percpu(storage->percpu_buf);
+	bpf_map_kfree(storage);
 }
 
 void bpf_cgroup_storage_free(struct bpf_cgroup_storage *storage)
diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
index d833496..b2bee07 100644
--- a/kernel/bpf/lpm_trie.c
+++ b/kernel/bpf/lpm_trie.c
@@ -379,7 +379,7 @@ static int trie_update_elem(struct bpf_map *map,
 			trie->n_entries--;
 
 		rcu_assign_pointer(*slot, new_node);
-		kfree_rcu(node, rcu);
+		bpf_map_kfree_rcu(node, rcu);
 
 		goto out;
 	}
@@ -421,8 +421,8 @@ static int trie_update_elem(struct bpf_map *map,
 		if (new_node)
 			trie->n_entries--;
 
-		kfree(new_node);
-		kfree(im_node);
+		bpf_map_kfree(new_node);
+		bpf_map_kfree(im_node);
 	}
 
 	spin_unlock_irqrestore(&trie->lock, irq_flags);
@@ -503,8 +503,8 @@ static int trie_delete_elem(struct bpf_map *map, void *_key)
 		else
 			rcu_assign_pointer(
 				*trim2, rcu_access_pointer(parent->child[0]));
-		kfree_rcu(parent, rcu);
-		kfree_rcu(node, rcu);
+		bpf_map_kfree_rcu(parent, rcu);
+		bpf_map_kfree_rcu(node, rcu);
 		goto out;
 	}
 
@@ -518,7 +518,7 @@ static int trie_delete_elem(struct bpf_map *map, void *_key)
 		rcu_assign_pointer(*trim, rcu_access_pointer(node->child[1]));
 	else
 		RCU_INIT_POINTER(*trim, NULL);
-	kfree_rcu(node, rcu);
+	bpf_map_kfree_rcu(node, rcu);
 
 out:
 	spin_unlock_irqrestore(&trie->lock, irq_flags);
@@ -602,7 +602,7 @@ static void trie_free(struct bpf_map *map)
 				continue;
 			}
 
-			kfree(node);
+			bpf_map_kfree(node);
 			RCU_INIT_POINTER(*slot, NULL);
 			break;
 		}
diff --git a/net/core/bpf_sk_storage.c b/net/core/bpf_sk_storage.c
index bb378c3..7b6d7fd 100644
--- a/net/core/bpf_sk_storage.c
+++ b/net/core/bpf_sk_storage.c
@@ -64,7 +64,7 @@ void bpf_sk_storage_free(struct sock *sk)
 	rcu_read_unlock();
 
 	if (free_sk_storage)
-		kfree_rcu(sk_storage, rcu);
+		bpf_map_kfree_rcu(sk_storage, rcu);
 }
 
 static void bpf_sk_storage_map_free(struct bpf_map *map)
@@ -203,7 +203,7 @@ int bpf_sk_storage_clone(const struct sock *sk, struct sock *newsk)
 		} else {
 			ret = bpf_local_storage_alloc(newsk, smap, copy_selem, GFP_ATOMIC);
 			if (ret) {
-				kfree(copy_selem);
+				bpf_map_kfree(copy_selem);
 				atomic_sub(smap->elem_size,
 					   &newsk->sk_omem_alloc);
 				bpf_map_put(map);
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 22fa2c5..059e55c 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -888,7 +888,7 @@ static void sock_hash_free_elem(struct bpf_shtab *htab,
 				struct bpf_shtab_elem *elem)
 {
 	atomic_dec(&htab->count);
-	kfree_rcu(elem, rcu);
+	bpf_map_kfree_rcu(elem, rcu);
 }
 
 static void sock_hash_delete_from_link(struct bpf_map *map, struct sock *sk,
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 771d0fa..1cb24b1 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -33,7 +33,7 @@ static struct xsk_map_node *xsk_map_node_alloc(struct xsk_map *map,
 static void xsk_map_node_free(struct xsk_map_node *node)
 {
 	bpf_map_put(&node->map->map);
-	kfree(node);
+	bpf_map_kfree(node);
 }
 
 static void xsk_map_sock_add(struct xdp_sock *xs, struct xsk_map_node *node)
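
The wrappers above are deliberately pass-through for now; their value is that
every bpf map free funnels through one place, so a tracking hook can later be
added there without touching every map type again. Below is a minimal sketch
of what such a future hook could look like, assuming a hypothetical global
bpf_map_freed_bytes counter that is not part of this patch:

/*
 * Hypothetical follow-up, for illustration only: account the bytes freed
 * through the bpf map free helpers.  The global counter below is made up;
 * this patch itself only introduces pass-through wrappers.
 */
#include <linux/atomic.h>
#include <linux/slab.h>

static atomic_long_t bpf_map_freed_bytes = ATOMIC_LONG_INIT(0);

static inline void bpf_map_kfree(const void *ptr)
{
	/* ksize() reports the allocated size of the slab object (0 for NULL). */
	atomic_long_add(ksize(ptr), &bpf_map_freed_bytes);
	kfree(ptr);
}

A per-map or per-memcg counter would likely be preferable in practice, but
that requires passing the map into the helpers, which the commit message
defers to future work.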