From patchwork Tue Feb 21 20:06:40 2023 X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13148373 X-Patchwork-Delegate: bpf@iogearbox.net From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , KP Singh , Dave Marchevsky , David Vernet Subject: [PATCH bpf-next v2 1/7] bpf: Support kptrs in percpu hashmap and percpu LRU hashmap Date: Tue, 21 Feb 2023 21:06:40 +0100 Message-Id: <20230221200646.2500777-2-memxor@gmail.com> In-Reply-To:
<20230221200646.2500777-1-memxor@gmail.com> References: <20230221200646.2500777-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=6181; i=memxor@gmail.com; h=from:subject; bh=Y9p91t4feL/dpDYVE2Cl/S2CG69w6xu2l7fuqwPvi+k=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBj9SRLGndslOT9Tib4peABYZTK9T21wOGKJ4ZLkWdi FU0l8O6JAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY/UkSwAKCRBM4MiGSL8RyuyeD/ 9qs4+yQGJucmwwUiMhq+GCsIdau+wG/xg8mkv5ogw4Lf0Ky+YyHxMwApAfqcUer+l5Q/DT6VayK4+8 s66KK7/0Ht+BwKlAJHBKw5nI4/R+nwCYxb3H43N8r337iEuowCcjhgWAeZh/T6bIWWq0I/U6c8L30i Y594e8Y9wAN0zk289PPhz9MRksOVZkf1avymkCTfTBGMdiZpFdB7ee/npcY+lKMZEMvxhPhspZ/td2 O5CP3lL0zJN9k6kgPnU1YTKMqf6aZ21zknpnUloaHh08GcXjC9FIwgu7UdfylRqnGMVOFDwgnNpyIj /bmgeuBnp3nlBSSpv+1kySLcYurUVNu/R5omwd/cwV04z02R0aIS5DmMxS23UcGC+8A7vZodaXPc5J apBcXTi7azok3obYDgB5TEBqm9oTfUc7LaN8VqsdeD76J2ZD70JEG8ZQ+g+5ofq4XtWWtN3EfUMyyy OeY85F71h79EuQSjJaCt5vNhUpLanpBe1OE0o2yHQCzlMhabSSBh/bld4tdOJB2hWwBuZsdcPYpJwD C7PkDRlBnIG6l6bl0P6VrzMQj4hOcioQO0WF6NEAO2je2/ZSA+Y5h2wH6CpB7ctoTZDpQu6YHbj4/3 LGyoh+OTnrW54WrhV6PvYkDEA5eaLzH+/GBYpCJ2TmuXRc/TdQIqnx164yjQ== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Enable support for kptrs in percpu BPF hashmap and percpu BPF LRU hashmap by wiring up the freeing of these kptrs from percpu map elements. Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/hashtab.c | 59 +++++++++++++++++++++++++++----------------- kernel/bpf/syscall.c | 2 ++ 2 files changed, 39 insertions(+), 22 deletions(-) diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c index 5dfcb5ad0d06..653aeb481c79 100644 --- a/kernel/bpf/hashtab.c +++ b/kernel/bpf/hashtab.c @@ -249,7 +249,18 @@ static void htab_free_prealloced_fields(struct bpf_htab *htab) struct htab_elem *elem; elem = get_htab_elem(htab, i); - bpf_obj_free_fields(htab->map.record, elem->key + round_up(htab->map.key_size, 8)); + if (htab_is_percpu(htab)) { + void __percpu *pptr = htab_elem_get_ptr(elem, htab->map.key_size); + int cpu; + + for_each_possible_cpu(cpu) { + bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu)); + cond_resched(); + } + } else { + bpf_obj_free_fields(htab->map.record, elem->key + round_up(htab->map.key_size, 8)); + cond_resched(); + } cond_resched(); } } @@ -759,9 +770,17 @@ static int htab_lru_map_gen_lookup(struct bpf_map *map, static void check_and_free_fields(struct bpf_htab *htab, struct htab_elem *elem) { - void *map_value = elem->key + round_up(htab->map.key_size, 8); + if (htab_is_percpu(htab)) { + void __percpu *pptr = htab_elem_get_ptr(elem, htab->map.key_size); + int cpu; - bpf_obj_free_fields(htab->map.record, map_value); + for_each_possible_cpu(cpu) + bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu)); + } else { + void *map_value = elem->key + round_up(htab->map.key_size, 8); + + bpf_obj_free_fields(htab->map.record, map_value); + } } /* It is called from the bpf_lru_list when the LRU needs to delete @@ -858,9 +877,9 @@ static int htab_map_get_next_key(struct bpf_map *map, void *key, void *next_key) static void htab_elem_free(struct bpf_htab *htab, struct htab_elem *l) { + check_and_free_fields(htab, l); if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH) bpf_mem_cache_free(&htab->pcpu_ma, l->ptr_to_pptr); - check_and_free_fields(htab, l); bpf_mem_cache_free(&htab->ma, l); } @@ -918,14 +937,13 @@ static void pcpu_copy_value(struct bpf_htab *htab, void __percpu *pptr, { if 
(!onallcpus) { /* copy true value_size bytes */ - memcpy(this_cpu_ptr(pptr), value, htab->map.value_size); + copy_map_value(&htab->map, this_cpu_ptr(pptr), value); } else { u32 size = round_up(htab->map.value_size, 8); int off = 0, cpu; for_each_possible_cpu(cpu) { - bpf_long_memcpy(per_cpu_ptr(pptr, cpu), - value + off, size); + copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value + off); off += size; } } @@ -940,16 +958,14 @@ static void pcpu_init_value(struct bpf_htab *htab, void __percpu *pptr, * (onallcpus=false always when coming from bpf prog). */ if (!onallcpus) { - u32 size = round_up(htab->map.value_size, 8); int current_cpu = raw_smp_processor_id(); int cpu; for_each_possible_cpu(cpu) { if (cpu == current_cpu) - bpf_long_memcpy(per_cpu_ptr(pptr, cpu), value, - size); - else - memset(per_cpu_ptr(pptr, cpu), 0, size); + copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value); + else /* Since elem is preallocated, we cannot touch special fields */ + zero_map_value(&htab->map, per_cpu_ptr(pptr, cpu)); } } else { pcpu_copy_value(htab, pptr, value, onallcpus); @@ -1575,9 +1591,8 @@ static int __htab_map_lookup_and_delete_elem(struct bpf_map *map, void *key, pptr = htab_elem_get_ptr(l, key_size); for_each_possible_cpu(cpu) { - bpf_long_memcpy(value + off, - per_cpu_ptr(pptr, cpu), - roundup_value_size); + copy_map_value_long(&htab->map, value + off, per_cpu_ptr(pptr, cpu)); + check_and_init_map_value(&htab->map, value + off); off += roundup_value_size; } } else { @@ -1772,8 +1787,8 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map, pptr = htab_elem_get_ptr(l, map->key_size); for_each_possible_cpu(cpu) { - bpf_long_memcpy(dst_val + off, - per_cpu_ptr(pptr, cpu), size); + copy_map_value_long(&htab->map, dst_val + off, per_cpu_ptr(pptr, cpu)); + check_and_init_map_value(&htab->map, dst_val + off); off += size; } } else { @@ -2046,9 +2061,9 @@ static int __bpf_hash_map_seq_show(struct seq_file *seq, struct htab_elem *elem) roundup_value_size = round_up(map->value_size, 8); pptr = htab_elem_get_ptr(elem, map->key_size); for_each_possible_cpu(cpu) { - bpf_long_memcpy(info->percpu_value_buf + off, - per_cpu_ptr(pptr, cpu), - roundup_value_size); + copy_map_value_long(map, info->percpu_value_buf + off, + per_cpu_ptr(pptr, cpu)); + check_and_init_map_value(map, info->percpu_value_buf + off); off += roundup_value_size; } ctx.value = info->percpu_value_buf; @@ -2292,8 +2307,8 @@ int bpf_percpu_hash_copy(struct bpf_map *map, void *key, void *value) */ pptr = htab_elem_get_ptr(l, map->key_size); for_each_possible_cpu(cpu) { - bpf_long_memcpy(value + off, - per_cpu_ptr(pptr, cpu), size); + copy_map_value_long(map, value + off, per_cpu_ptr(pptr, cpu)); + check_and_init_map_value(map, value + off); off += size; } ret = 0; diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index e3fcdc9836a6..da117a2a83b2 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1059,7 +1059,9 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, case BPF_KPTR_UNREF: case BPF_KPTR_REF: if (map->map_type != BPF_MAP_TYPE_HASH && + map->map_type != BPF_MAP_TYPE_PERCPU_HASH && map->map_type != BPF_MAP_TYPE_LRU_HASH && + map->map_type != BPF_MAP_TYPE_LRU_PERCPU_HASH && map->map_type != BPF_MAP_TYPE_ARRAY && map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY) { ret = -EOPNOTSUPP; From patchwork Tue Feb 21 20:06:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 
13148374 X-Patchwork-Delegate: bpf@iogearbox.net From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Martin KaFai Lau , KP Singh , "Paul E .
McKenney" , Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Dave Marchevsky , David Vernet Subject: [PATCH bpf-next v2 2/7] bpf: Support kptrs in local storage maps Date: Tue, 21 Feb 2023 21:06:41 +0100 Message-Id: <20230221200646.2500777-3-memxor@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230221200646.2500777-1-memxor@gmail.com> References: <20230221200646.2500777-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=5283; i=memxor@gmail.com; h=from:subject; bh=/IKK2SSxTO6Dvec4kNAJOkXu2gKi23SAmz4Xy39MaTw=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBj9SRLHp1PqjC3dbF4QiT1W026xEc7xJpizYBuOiiT JJFTLC2JAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY/UkSwAKCRBM4MiGSL8RypT3D/ 929+4uDey8iVj8yHNe5lXGnK46+ZWSG8vEnDJi7rQ0b59EyH+3UaL4G/Ttz53f5Z92pLD7loZGOTSk zbQZmD86aDJ29CYypBSiorQYPHxOlFWkqlsItR5DMPe/OG4usCxYhpCQx3wJR2AsVILBK9wc3m3a2d WvEcz2QwJ4o8bYzHK/6ct+nax2jeEJm+JoY8UYuRx0v+FDxN0kwrGTFq/waKMWqlxjwdx6EXLQicg/ xPliadgt78XmtK70jNFKPk84YS1jldEMAtXcQHe3xHFBp9omFu5ExjjQ8ghFEOGKAh1fLzAkSiYFtd M5uaciBLy+foPC6OAJhGIWUqolH9b/BA3uNPzd+DSUZy5xLnfEx1jKSKMeSquyUuGNLdnWzcFBhIEO 76VAjP1c+cQjYP8V29T4VhM2JdhuPBRM5rXpMFvbdVq+9uWVw3EwCMRdkS9/DclxCv6Zm7qDcgQzDY vbQPdnk/RzOOXRGYJqVfN5SblEKeBmgUiYa7rTfBVliiQgeTpUDApYQoqgVyc3NPCWOEvhU6dC2Bu1 iBD7fS4ACuKjWZls1Kmz4mcS8j424bnYRrJ8NCp5oIDKbmaO11DJZ8E+IQAahCUzi/BtTBZggWP4Gv UUqtO1gf41Y6vtTxub79YG4hCm/jgKolf4aIrpXv2mLuKvXu6wLye7iuHd2Q== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net Enable support for kptrs in local storage maps by wiring up the freeing of these kptrs from map value. Cc: Martin KaFai Lau Cc: KP Singh Cc: Paul E. McKenney Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/bpf_local_storage.c | 35 ++++++++++++++++++++++++++++++---- kernel/bpf/syscall.c | 6 +++++- kernel/bpf/verifier.c | 12 ++++++++---- 3 files changed, 44 insertions(+), 9 deletions(-) diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c index 35f4138a54dc..2803b85b30b2 100644 --- a/kernel/bpf/bpf_local_storage.c +++ b/kernel/bpf/bpf_local_storage.c @@ -75,6 +75,7 @@ bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner, if (selem) { if (value) copy_map_value(&smap->map, SDATA(selem)->data, value); + /* No need to call check_and_init_map_value as memory is zero init */ return selem; } @@ -103,10 +104,17 @@ static void bpf_selem_free_rcu(struct rcu_head *rcu) struct bpf_local_storage_elem *selem; selem = container_of(rcu, struct bpf_local_storage_elem, rcu); + bpf_obj_free_fields(SDATA(selem)->smap->map.record, SDATA(selem)->data); + kfree(selem); +} + +static void bpf_selem_free_tasks_trace_rcu(struct rcu_head *rcu) +{ + /* Free directly if Tasks Trace RCU GP also implies RCU GP */ if (rcu_trace_implies_rcu_gp()) - kfree(selem); + bpf_selem_free_rcu(rcu); else - kfree_rcu(selem, rcu); + call_rcu(rcu, bpf_selem_free_rcu); } /* local_storage->lock must be held and selem->local_storage == local_storage. 
@@ -160,9 +168,9 @@ static bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_stor RCU_INIT_POINTER(local_storage->cache[smap->cache_idx], NULL); if (use_trace_rcu) - call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_rcu); + call_rcu_tasks_trace(&selem->rcu, bpf_selem_free_tasks_trace_rcu); else - kfree_rcu(selem, rcu); + call_rcu(&selem->rcu, bpf_selem_free_rcu); return free_local_storage; } @@ -713,6 +721,25 @@ void bpf_local_storage_map_free(struct bpf_map *map, */ synchronize_rcu(); + /* Only delay freeing of smap, buckets are not needed anymore */ kvfree(smap->buckets); + + /* When local storage has special fields, callbacks for + * bpf_selem_free_rcu and bpf_selem_free_tasks_trace_rcu will keep using + * the map BTF record, we need to execute an RCU barrier to wait for + * them as the record will be freed right after our map_free callback. + */ + if (!IS_ERR_OR_NULL(smap->map.record)) { + rcu_barrier_tasks_trace(); + /* We cannot skip rcu_barrier() when rcu_trace_implies_rcu_gp() + * is true, because while call_rcu invocation is skipped in that + * case in bpf_selem_free_tasks_trace_rcu (and all local storage + * maps pass use_trace_rcu = true), there can be call_rcu + * callbacks based on use_trace_rcu = false in the earlier while + * ((selem = ...)) loop or from bpf_local_storage_unlink_nolock + * called from owner's free path. + */ + rcu_barrier(); + } bpf_map_area_free(smap); } diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index da117a2a83b2..eb50025b03c1 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1063,7 +1063,11 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf, map->map_type != BPF_MAP_TYPE_LRU_HASH && map->map_type != BPF_MAP_TYPE_LRU_PERCPU_HASH && map->map_type != BPF_MAP_TYPE_ARRAY && - map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY) { + map->map_type != BPF_MAP_TYPE_PERCPU_ARRAY && + map->map_type != BPF_MAP_TYPE_SK_STORAGE && + map->map_type != BPF_MAP_TYPE_INODE_STORAGE && + map->map_type != BPF_MAP_TYPE_TASK_STORAGE && + map->map_type != BPF_MAP_TYPE_CGRP_STORAGE) { ret = -EOPNOTSUPP; goto free_map_tab; } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 272563a0b770..9a4e7efaf28f 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7126,22 +7126,26 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env, break; case BPF_MAP_TYPE_SK_STORAGE: if (func_id != BPF_FUNC_sk_storage_get && - func_id != BPF_FUNC_sk_storage_delete) + func_id != BPF_FUNC_sk_storage_delete && + func_id != BPF_FUNC_kptr_xchg) goto error; break; case BPF_MAP_TYPE_INODE_STORAGE: if (func_id != BPF_FUNC_inode_storage_get && - func_id != BPF_FUNC_inode_storage_delete) + func_id != BPF_FUNC_inode_storage_delete && + func_id != BPF_FUNC_kptr_xchg) goto error; break; case BPF_MAP_TYPE_TASK_STORAGE: if (func_id != BPF_FUNC_task_storage_get && - func_id != BPF_FUNC_task_storage_delete) + func_id != BPF_FUNC_task_storage_delete && + func_id != BPF_FUNC_kptr_xchg) goto error; break; case BPF_MAP_TYPE_CGRP_STORAGE: if (func_id != BPF_FUNC_cgrp_storage_get && - func_id != BPF_FUNC_cgrp_storage_delete) + func_id != BPF_FUNC_cgrp_storage_delete && + func_id != BPF_FUNC_kptr_xchg) goto error; break; case BPF_MAP_TYPE_BLOOM_FILTER: From patchwork Tue Feb 21 20:06:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13148375 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: 
From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Martin KaFai Lau , KP Singh , Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Dave Marchevsky , David Vernet Subject: [PATCH bpf-next v2 3/7] bpf: Annotate data races in bpf_local_storage Date: Tue, 21 Feb 2023 21:06:42 +0100 Message-Id: <20230221200646.2500777-4-memxor@gmail.com> X-Developer-Signature: v=1; a=openpgp-sha256; l=2660; i=memxor@gmail.com; h=from:subject; bh=30/AAcSNo2Q5iYQoDav5BEz50DtTtdsaIGwvWTJd5tQ=;
b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBj9SRLHQZnoUYeVaSBhDu/63T8K3Eq+JSqs4jVdf7G 0pHGjKqJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY/UkSwAKCRBM4MiGSL8RyoPTEA C6xkasGfZmUegv3OfN3C3l+EHpZQymhD7vpPSetkYQ5GWwPH8+Eu3b8rV9usH/V2DdDLIQE+iSTqQU 8aI6f/pQhZhiWC6BW9vNNQ1ZswXpaaM39GfwcAaem9C+dlSdPW7uPNY1mFUcspHDnphYt0iP5RDaJK 1m39Sb65ZlfX77aqM+3bKAx3z6/WXHVFs+TETWba7QhZfae+S1OdxxGWWcpqqdPcoXNnPOdH1oxSeP 3bizQGnt30J4rRhE1Vpa5J+IccdKQxjfnd0hbJsiHsDv9+dWUkArkFW+B9REBSIvkW6t+Ky9WOxpj8 o5igvAPX/HjXdMJKepXqGHSjdQM6sFcpFPu9kDZgW8WhQLlOralLVvyjCa88TfuZOIN4JYagVPCPxb jyprQr2ougfDdaVXNCuQ5k31vgmbGOElcv3TB0/ywrCLGwS1DKs18uVi9XeRfKD11bE3vC0nREiQyS gxr7PVmU2rKdS0JsF2xgjRgPJLzSb6Nuu3lOqNbQ4Rr3wVuuWEJRVhx14urf+lddJT6W+ySVtAVGu1 lyA3WecADJg5i0hoyP3MmHFrMGeX2TKp0YfxNq+dMXH97eozUJVlLcq3wO4MPoDtBO+AgJS4dw3yum HUBq7FsDa9rGg0s3pWNYh4otWIFAKu6nNj6QnsH4sSMgMFWnG7VccWoSVw9g== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net There are a few cases where hlist_node is checked to be unhashed without holding the lock protecting its modification. In this case, one must use hlist_unhashed_lockless to avoid load tearing and KCSAN reports. Fix this by using lockless variant in places not protected by the lock. Since this is not prompted by any actual KCSAN reports but only from code review, I have not included a fixes tag. Cc: Martin KaFai Lau Cc: KP Singh Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/bpf_local_storage.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c index 2803b85b30b2..2f61b38db674 100644 --- a/kernel/bpf/bpf_local_storage.c +++ b/kernel/bpf/bpf_local_storage.c @@ -51,11 +51,21 @@ owner_storage(struct bpf_local_storage_map *smap, void *owner) return map->ops->map_owner_storage_ptr(owner); } +static bool selem_linked_to_storage_lockless(const struct bpf_local_storage_elem *selem) +{ + return !hlist_unhashed_lockless(&selem->snode); +} + static bool selem_linked_to_storage(const struct bpf_local_storage_elem *selem) { return !hlist_unhashed(&selem->snode); } +static bool selem_linked_to_map_lockless(const struct bpf_local_storage_elem *selem) +{ + return !hlist_unhashed_lockless(&selem->map_node); +} + static bool selem_linked_to_map(const struct bpf_local_storage_elem *selem) { return !hlist_unhashed(&selem->map_node); @@ -182,7 +192,7 @@ static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem, bool free_local_storage = false; unsigned long flags; - if (unlikely(!selem_linked_to_storage(selem))) + if (unlikely(!selem_linked_to_storage_lockless(selem))) /* selem has already been unlinked from sk */ return; @@ -216,7 +226,7 @@ void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem) struct bpf_local_storage_map_bucket *b; unsigned long flags; - if (unlikely(!selem_linked_to_map(selem))) + if (unlikely(!selem_linked_to_map_lockless(selem))) /* selem has already be unlinked from smap */ return; @@ -428,7 +438,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap, err = check_flags(old_sdata, map_flags); if (err) return ERR_PTR(err); - if (old_sdata && selem_linked_to_storage(SELEM(old_sdata))) { + if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) { copy_map_value_locked(&smap->map, old_sdata->data, value, false); return old_sdata; From patchwork Tue Feb 21 20:06:43 2023 Content-Type: text/plain; 
charset="utf-8" X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13148376 X-Patchwork-Delegate: bpf@iogearbox.net From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , KP Singh , Dave Marchevsky , David Vernet Subject: [PATCH bpf-next v2 4/7] bpf: Remove unused MEM_ALLOC | PTR_TRUSTED checks Date: Tue, 21 Feb 2023 21:06:43 +0100 Message-Id: <20230221200646.2500777-5-memxor@gmail.com>
The plan is supposedly to tag everything with PTR_TRUSTED eventually; however, those changes should bring in their respective code instead of leaving it around right now. It is arguable whether PTR_TRUSTED is required for all types, when its only use case is making PTR_TO_BTF_ID a bit stronger, while all other types are trusted by default. Hence, just drop the two instances that the verifier never actually generates, for now, to avoid reader confusion. Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 9a4e7efaf28f..6837657b46bf 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -6651,7 +6651,6 @@ int check_func_arg_reg_off(struct bpf_verifier_env *env, case PTR_TO_BTF_ID | MEM_ALLOC: case PTR_TO_BTF_ID | PTR_TRUSTED: case PTR_TO_BTF_ID | MEM_RCU: - case PTR_TO_BTF_ID | MEM_ALLOC | PTR_TRUSTED: case PTR_TO_BTF_ID | MEM_ALLOC | NON_OWN_REF: /* When referenced PTR_TO_BTF_ID is passed to release function, * its fixed offset must be 0.
In the other cases, fixed offset @@ -9210,7 +9209,6 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_ ptr = reg->map_ptr; break; case PTR_TO_BTF_ID | MEM_ALLOC: - case PTR_TO_BTF_ID | MEM_ALLOC | PTR_TRUSTED: ptr = reg->btf; break; default: From patchwork Tue Feb 21 20:06:44 2023 X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13148377 X-Patchwork-Delegate: bpf@iogearbox.net From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , KP Singh , Dave
Marchevsky , David Vernet Subject: [PATCH bpf-next v2 5/7] bpf: Fix check_reg_type for PTR_TO_BTF_ID Date: Tue, 21 Feb 2023 21:06:44 +0100 Message-Id: <20230221200646.2500777-6-memxor@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230221200646.2500777-1-memxor@gmail.com> References: <20230221200646.2500777-1-memxor@gmail.com> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2598; i=memxor@gmail.com; h=from:subject; bh=Lqdy/IKzOnseTIegz0Zq4nxx9kSW/9SvKf/gr8GzPMU=; b=owEBbQKS/ZANAwAIAUzgyIZIvxHKAcsmYgBj9SRMwXAfDptv4JkR1DmGQNTV3abjCUocDju4yb7q Apnj8QeJAjMEAAEIAB0WIQRLvip+Buz51YI8YRFM4MiGSL8RygUCY/UkTAAKCRBM4MiGSL8RyjatD/ 0YaOgSFs3BqtRMpi7Zf9jjuPDgaQ88HnkoXx9OqarW3UbYg5qXTRESiGRrREH5Aq36jpe6/ymZKbtN NcR/dN1GVZKKoudtNAlDVsfXFGWtudtDkAnHWNzr8swb4kEzZLK+MpuzvvQsS7gzuvZuES0mDfS/ES FaCeObmP9W9RA/DYo0ywVx/AWf+A/+xvphxflXIGZ6kC8wVNpmnPeVwDCTWYNe/Y+LejRWaVOGU13x t1EnIfVN5sAr3aiEq/HfZMuVqGA0DGT+xIEqCDvwZlrF6dWUvJsuukr+y4Xe3HeagtAFaBg7k3bio9 vgRf2O9ckB+o9kMzK1fIKjB+vEDXco3SwRjGN/lZyju82eLe0o/IVxO6dBE8zHjtDV5MtQgGyHSLlP QmrrmfC0qcLnY3rh4PrTwzln4AMqBQ1E1NZczmVarVxNVZEBhcCPFGgvBuvxvQJ3RwSZXdTQhuDUds Eg+0WQyI5sxrrwiWeHR5EhhmpaLR/HSmI+mEw9GAqcv93IeCRJOcMURHjy46OafSwi7tDtIcirB++N ZavYwghngFKdWXkcLhlV7SdGz8ggvVen1bljxEadwaE3d+rITHuAatrtOLx0X8zJplLuTr4/4sk/QP 2rYv2xTTuK/vJrjE8B+65wiGSd6TfAsxzfKX/caZ0Dnyhjofl5Jux8jfgY6w== X-Developer-Key: i=memxor@gmail.com; a=openpgp; fpr=4BBE2A7E06ECF9D5823C61114CE0C88648BF11CA Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net The current code does type matching for the case where reg->type is PTR_TO_BTF_ID or has the PTR_TRUSTED flag. However, this only needs to occur for non-MEM_ALLOC and non-MEM_PERCPU cases, but will include both as per the current code. The MEM_ALLOC case with or without PTR_TRUSTED needs to be handled specially by the code for type_is_alloc case, while MEM_PERCPU case must be ignored. Hence, to restore correct behavior and for clarity, explicitly list out the handled PTR_TO_BTF_ID types which should be handled for each case using a switch statement. Helpers currently only take: PTR_TO_BTF_ID PTR_TO_BTF_ID | PTR_TRUSTED PTR_TO_BTF_ID | MEM_RCU PTR_TO_BTF_ID | MEM_ALLOC PTR_TO_BTF_ID | MEM_PERCPU PTR_TO_BTF_ID | MEM_PERCPU | PTR_TRUSTED This fix was also described (for the MEM_ALLOC case) in [0]. [0]: https://lore.kernel.org/bpf/20221121160657.h6z7xuvedybp5y7s@apollo Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 23 ++++++++++++++++++++--- 1 file changed, 20 insertions(+), 3 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 6837657b46bf..8dbd20735e92 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -6522,7 +6522,14 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno, return -EACCES; found: - if (reg->type == PTR_TO_BTF_ID || reg->type & PTR_TRUSTED) { + if (base_type(reg->type) != PTR_TO_BTF_ID) + return 0; + + switch ((int)reg->type) { + case PTR_TO_BTF_ID: + case PTR_TO_BTF_ID | PTR_TRUSTED: + case PTR_TO_BTF_ID | MEM_RCU: + { /* For bpf_sk_release, it needs to match against first member * 'struct sock_common', hence make an exception for it. This * allows bpf_sk_release to work for multiple socket types. 
@@ -6558,13 +6565,23 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno, return -EACCES; } } - } else if (type_is_alloc(reg->type)) { + break; + } + case PTR_TO_BTF_ID | MEM_ALLOC: if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock) { verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n"); return -EFAULT; } + /* Handled by helper specific checks */ + break; + case PTR_TO_BTF_ID | MEM_PERCPU: + case PTR_TO_BTF_ID | MEM_PERCPU | PTR_TRUSTED: + /* Handled by helper specific checks */ + break; + default: + verbose(env, "verifier internal error: invalid PTR_TO_BTF_ID register for type match\n"); + return -EFAULT; } - return 0; }
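[Editorial note, not part of the submitted patch: a minimal sketch of where a PTR_TO_BTF_ID | MEM_PERCPU register, which the switch above now lists explicitly, typically comes from. A percpu kernel variable declared through __ksym is seen by the verifier as PTR_TO_BTF_ID | MEM_PERCPU (with PTR_TRUSTED on newer kernels) and is accepted by bpf_per_cpu_ptr()/bpf_this_cpu_ptr() through helper-specific argument checks rather than BTF type matching; the MEM_ALLOC case is similarly left to the bpf_spin_lock()/bpf_spin_unlock() specific checks. Program name and attach point below are illustrative assumptions, not taken from the series.]

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* struct rq is a percpu variable in kernel BTF */
extern const struct rq runqueues __ksym;

SEC("raw_tp/sys_enter")
int rq_nr_running(void *ctx)
{
	struct rq *rq;

	/* &runqueues is PTR_TO_BTF_ID | MEM_PERCPU (| PTR_TRUSTED) */
	rq = bpf_per_cpu_ptr(&runqueues, 0);
	if (!rq)
		return 0;
	bpf_printk("cpu0 nr_running=%u", rq->nr_running);
	return 0;
}

char _license[] SEC("license") = "GPL";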
From patchwork Tue Feb 21 20:06:45 2023 X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13148378 X-Patchwork-Delegate: bpf@iogearbox.net From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , KP Singh , Dave Marchevsky , David Vernet Subject: [PATCH bpf-next v2 6/7] bpf: Wrap register invalidation with a helper Date: Tue, 21 Feb 2023 21:06:45 +0100 Message-Id: <20230221200646.2500777-7-memxor@gmail.com> Typically, the verifier should consult env->allow_ptr_leaks when invalidating registers for users that don't have CAP_PERFMON or CAP_SYS_ADMIN, to avoid leaking the pointer value. This is similar in spirit to c67cae551f0d ("bpf: Tighten ptr_to_btf_id checks."). In a lot of the existing checks, we know the capabilities are present, hence we don't do the check. Instead of being inconsistent in the application of the check, wrap the action of invalidating a register into a helper named 'mark_reg_invalid' and use it in a uniform fashion to replace open coded invalidation operations, so that the check is always made regardless of the call site and we don't have to remember whether it needs to be done or not for each case.
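[Editorial note, not part of the submitted patch: a small, illustrative view of one call site covered by mark_reg_invalid(), clear_all_pkt_pointers(). After a helper that may move packet data, every packet pointer register is invalidated; with this patch that uniformly means a not-initialized register for loaders without env->allow_ptr_leaks and an unknown scalar otherwise. The program below only sketches the user-visible effect; the particular helper and section are assumptions.]

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int pkt_ptr_invalidation(struct __sk_buff *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	if ((void *)(eth + 1) > data_end)
		return TC_ACT_OK;

	/* bpf_skb_change_tail() may reallocate packet data, so the verifier
	 * invalidates data, data_end and eth here (now via mark_reg_invalid()).
	 */
	bpf_skb_change_tail(ctx, 128, 0);

	/* Reading eth->h_proto at this point would be rejected; packet
	 * pointers must be re-derived from ctx before further access.
	 */
	data = (void *)(long)ctx->data;
	data_end = (void *)(long)ctx->data_end;
	eth = data;
	if ((void *)(eth + 1) > data_end)
		return TC_ACT_OK;

	return eth->h_proto ? TC_ACT_OK : TC_ACT_SHOT;
}

char _license[] SEC("license") = "GPL";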
Signed-off-by: Kumar Kartikeya Dwivedi --- kernel/bpf/verifier.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 8dbd20735e92..d856ee74ad63 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -895,6 +895,14 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re static void __mark_reg_unknown(const struct bpf_verifier_env *env, struct bpf_reg_state *reg); +static void mark_reg_invalid(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) +{ + if (!env->allow_ptr_leaks) + __mark_reg_not_init(env, reg); + else + __mark_reg_unknown(env, reg); +} + static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, struct bpf_func_state *state, int spi) { @@ -934,12 +942,8 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env, /* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */ if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM) continue; - if (dreg->dynptr_id == dynptr_id) { - if (!env->allow_ptr_leaks) - __mark_reg_not_init(env, dreg); - else - __mark_reg_unknown(env, dreg); - } + if (dreg->dynptr_id == dynptr_id) + mark_reg_invalid(env, dreg); })); /* Do not release reference state, we are destroying dynptr on stack, @@ -7383,7 +7387,7 @@ static void clear_all_pkt_pointers(struct bpf_verifier_env *env) bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({ if (reg_is_pkt_pointer_any(reg)) - __mark_reg_unknown(env, reg); + mark_reg_invalid(env, reg); })); } @@ -7428,12 +7432,8 @@ static int release_reference(struct bpf_verifier_env *env, return err; bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({ - if (reg->ref_obj_id == ref_obj_id) { - if (!env->allow_ptr_leaks) - __mark_reg_not_init(env, reg); - else - __mark_reg_unknown(env, reg); - } + if (reg->ref_obj_id == ref_obj_id) + mark_reg_invalid(env, reg); })); return 0; @@ -7446,7 +7446,7 @@ static void invalidate_non_owning_refs(struct bpf_verifier_env *env) bpf_for_each_reg_in_vstate(env->cur_state, unused, reg, ({ if (type_is_non_owning_ref(reg->type)) - __mark_reg_unknown(env, reg); + mark_reg_invalid(env, reg); })); } From patchwork Tue Feb 21 20:06:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kumar Kartikeya Dwivedi X-Patchwork-Id: 13148379 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60EDFC61DA3 for ; Tue, 21 Feb 2023 20:07:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229800AbjBUUHF (ORCPT ); Tue, 21 Feb 2023 15:07:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55538 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230116AbjBUUHD (ORCPT ); Tue, 21 Feb 2023 15:07:03 -0500 Received: from mail-ed1-x541.google.com (mail-ed1-x541.google.com [IPv6:2a00:1450:4864:20::541]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 55ABA2E803 for ; Tue, 21 Feb 2023 12:06:59 -0800 (PST) Received: by mail-ed1-x541.google.com with SMTP id o12so22379623edb.9 for ; Tue, 21 Feb 2023 12:06:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; 
From: Kumar Kartikeya Dwivedi To: bpf@vger.kernel.org Cc: Alexei Starovoitov , Andrii Nakryiko , Daniel Borkmann , Martin KaFai Lau , KP Singh , Dave Marchevsky , David Vernet Subject: [PATCH bpf-next v2 7/7] selftests/bpf: Add more tests for kptrs in maps Date: Tue, 21 Feb 2023 21:06:46 +0100 Message-Id: <20230221200646.2500777-8-memxor@gmail.com> X-Patchwork-Delegate: bpf@iogearbox.net
Firstly, ensure programs successfully load when using all of the supported maps. Then, extend existing tests to test more cases at runtime. We are currently testing both the synchronous freeing of items and asynchronous destruction when map is freed, but the code needs to be adjusted a bit to be able to also accomodate support for percpu maps. We now do a delete on the item (and update for array maps which has a similar effect for kptrs) to perform a synchronous free of the kptr, and test destruction both for the synchronous and asynchronous deletion. Next time the program runs, it should observe the refcount as 1 since all existing references should have been released by then. By running the program after both possible paths freeing kptrs, we establish that they correctly release resources. Next, we augment the existing test to also test the same code path shared by all local storage maps using a task local storage map. Signed-off-by: Kumar Kartikeya Dwivedi --- .../selftests/bpf/prog_tests/map_kptr.c | 122 ++++-- tools/testing/selftests/bpf/progs/map_kptr.c | 353 +++++++++++++++--- 2 files changed, 410 insertions(+), 65 deletions(-) diff --git a/tools/testing/selftests/bpf/prog_tests/map_kptr.c b/tools/testing/selftests/bpf/prog_tests/map_kptr.c index 3533a4ecad01..550497ee04f0 100644 --- a/tools/testing/selftests/bpf/prog_tests/map_kptr.c +++ b/tools/testing/selftests/bpf/prog_tests/map_kptr.c @@ -7,67 +7,143 @@ static void test_map_kptr_success(bool test_run) { + LIBBPF_OPTS(bpf_test_run_opts, lopts); LIBBPF_OPTS(bpf_test_run_opts, opts, .data_in = &pkt_v4, .data_size_in = sizeof(pkt_v4), .repeat = 1, ); + int key = 0, ret, cpu; struct map_kptr *skel; - int key = 0, ret; - char buf[16]; + struct bpf_link *link; + char buf[16], *pbuf; skel = map_kptr__open_and_load(); if (!ASSERT_OK_PTR(skel, "map_kptr__open_and_load")) return; - ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref), &opts); - ASSERT_OK(ret, "test_map_kptr_ref refcount"); - ASSERT_OK(opts.retval, "test_map_kptr_ref retval"); + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref1), &opts); + ASSERT_OK(ret, "test_map_kptr_ref1 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref1 retval"); ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref2), &opts); ASSERT_OK(ret, "test_map_kptr_ref2 refcount"); ASSERT_OK(opts.retval, "test_map_kptr_ref2 retval"); + link = bpf_program__attach(skel->progs.test_ls_map_kptr_ref1); + if (!ASSERT_OK_PTR(link, "bpf_program__attach ref1")) + goto exit; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref1), &lopts); + ASSERT_OK(ret, "test_ls_map_kptr_ref1 refcount"); + ASSERT_EQ((lopts.retval << 16) >> 16, 9000, "test_ls_map_kptr_ref1 retval"); + if (!ASSERT_OK(bpf_link__destroy(link), "bpf_link__destroy")) + goto exit; + + link = bpf_program__attach(skel->progs.test_ls_map_kptr_ref2); + if (!ASSERT_OK_PTR(link, "bpf_program__attach ref2")) + goto exit; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref2), &lopts); + ASSERT_OK(ret, "test_ls_map_kptr_ref2 refcount"); + ASSERT_EQ((lopts.retval << 16) >> 16, 9000, "test_ls_map_kptr_ref2 retval"); + if (!ASSERT_OK(bpf_link__destroy(link), "bpf_link__destroy")) + goto exit; + if (test_run) goto exit; + cpu = libbpf_num_possible_cpus(); + if (!ASSERT_GT(cpu, 0, "libbpf_num_possible_cpus")) + goto exit; + + pbuf = calloc(cpu, sizeof(buf)); + if (!ASSERT_OK_PTR(pbuf, "calloc(pbuf)")) + goto exit; + ret = 
bpf_map__update_elem(skel->maps.array_map, &key, sizeof(key), buf, sizeof(buf), 0); ASSERT_OK(ret, "array_map update"); - ret = bpf_map__update_elem(skel->maps.array_map, - &key, sizeof(key), buf, sizeof(buf), 0); - ASSERT_OK(ret, "array_map update2"); + skel->data->ref--; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts); + ASSERT_OK(ret, "test_map_kptr_ref3 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval"); + + ret = bpf_map__update_elem(skel->maps.pcpu_array_map, + &key, sizeof(key), pbuf, cpu * sizeof(buf), 0); + ASSERT_OK(ret, "pcpu_array_map update"); + skel->data->ref--; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts); + ASSERT_OK(ret, "test_map_kptr_ref3 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval"); - ret = bpf_map__update_elem(skel->maps.hash_map, - &key, sizeof(key), buf, sizeof(buf), 0); - ASSERT_OK(ret, "hash_map update"); ret = bpf_map__delete_elem(skel->maps.hash_map, &key, sizeof(key), 0); ASSERT_OK(ret, "hash_map delete"); + skel->data->ref--; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts); + ASSERT_OK(ret, "test_map_kptr_ref3 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval"); + + ret = bpf_map__delete_elem(skel->maps.pcpu_hash_map, &key, sizeof(key), 0); + ASSERT_OK(ret, "pcpu_hash_map delete"); + skel->data->ref--; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts); + ASSERT_OK(ret, "test_map_kptr_ref3 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval"); - ret = bpf_map__update_elem(skel->maps.hash_malloc_map, - &key, sizeof(key), buf, sizeof(buf), 0); - ASSERT_OK(ret, "hash_malloc_map update"); ret = bpf_map__delete_elem(skel->maps.hash_malloc_map, &key, sizeof(key), 0); ASSERT_OK(ret, "hash_malloc_map delete"); + skel->data->ref--; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts); + ASSERT_OK(ret, "test_map_kptr_ref3 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval"); + + ret = bpf_map__delete_elem(skel->maps.pcpu_hash_malloc_map, &key, sizeof(key), 0); + ASSERT_OK(ret, "pcpu_hash_malloc_map delete"); + skel->data->ref--; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts); + ASSERT_OK(ret, "test_map_kptr_ref3 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval"); - ret = bpf_map__update_elem(skel->maps.lru_hash_map, - &key, sizeof(key), buf, sizeof(buf), 0); - ASSERT_OK(ret, "lru_hash_map update"); ret = bpf_map__delete_elem(skel->maps.lru_hash_map, &key, sizeof(key), 0); ASSERT_OK(ret, "lru_hash_map delete"); + skel->data->ref--; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts); + ASSERT_OK(ret, "test_map_kptr_ref3 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval"); + + ret = bpf_map__delete_elem(skel->maps.lru_pcpu_hash_map, &key, sizeof(key), 0); + ASSERT_OK(ret, "lru_pcpu_hash_map delete"); + skel->data->ref--; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts); + ASSERT_OK(ret, "test_map_kptr_ref3 refcount"); + ASSERT_OK(opts.retval, "test_map_kptr_ref3 retval"); + + link = bpf_program__attach(skel->progs.test_ls_map_kptr_ref_del); + if (!ASSERT_OK_PTR(link, "bpf_program__attach ref_del")) + goto exit; + ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_ls_map_kptr_ref_del), &lopts); + ASSERT_OK(ret, "test_ls_map_kptr_ref_del delete"); + 
skel->data->ref--; + ASSERT_EQ((lopts.retval << 16) >> 16, 9000, "test_ls_map_kptr_ref_del retval"); + if (!ASSERT_OK(bpf_link__destroy(link), "bpf_link__destroy")) + goto exit; + free(pbuf); exit: map_kptr__destroy(skel); } void test_map_kptr(void) { - if (test__start_subtest("success")) { + RUN_TESTS(map_kptr_fail); + + if (test__start_subtest("success-map")) { + test_map_kptr_success(true); + + ASSERT_OK(kern_sync_rcu(), "sync rcu"); + /* Observe refcount dropping to 1 on bpf_map_free_deferred */ test_map_kptr_success(false); - /* Do test_run twice, so that we see refcount going back to 1 - * after we leave it in map from first iteration. - */ + + ASSERT_OK(kern_sync_rcu(), "sync rcu"); + /* Observe refcount dropping to 1 on synchronous delete elem */ test_map_kptr_success(true); } - - RUN_TESTS(map_kptr_fail); } diff --git a/tools/testing/selftests/bpf/progs/map_kptr.c b/tools/testing/selftests/bpf/progs/map_kptr.c index 228ec45365a8..f8d7f2adccc9 100644 --- a/tools/testing/selftests/bpf/progs/map_kptr.c +++ b/tools/testing/selftests/bpf/progs/map_kptr.c @@ -15,6 +15,13 @@ struct array_map { __uint(max_entries, 1); } array_map SEC(".maps"); +struct pcpu_array_map { + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); + __type(key, int); + __type(value, struct map_value); + __uint(max_entries, 1); +} pcpu_array_map SEC(".maps"); + struct hash_map { __uint(type, BPF_MAP_TYPE_HASH); __type(key, int); @@ -22,6 +29,13 @@ struct hash_map { __uint(max_entries, 1); } hash_map SEC(".maps"); +struct pcpu_hash_map { + __uint(type, BPF_MAP_TYPE_PERCPU_HASH); + __type(key, int); + __type(value, struct map_value); + __uint(max_entries, 1); +} pcpu_hash_map SEC(".maps"); + struct hash_malloc_map { __uint(type, BPF_MAP_TYPE_HASH); __type(key, int); @@ -30,6 +44,14 @@ struct hash_malloc_map { __uint(map_flags, BPF_F_NO_PREALLOC); } hash_malloc_map SEC(".maps"); +struct pcpu_hash_malloc_map { + __uint(type, BPF_MAP_TYPE_PERCPU_HASH); + __type(key, int); + __type(value, struct map_value); + __uint(max_entries, 1); + __uint(map_flags, BPF_F_NO_PREALLOC); +} pcpu_hash_malloc_map SEC(".maps"); + struct lru_hash_map { __uint(type, BPF_MAP_TYPE_LRU_HASH); __type(key, int); @@ -37,6 +59,41 @@ struct lru_hash_map { __uint(max_entries, 1); } lru_hash_map SEC(".maps"); +struct lru_pcpu_hash_map { + __uint(type, BPF_MAP_TYPE_LRU_PERCPU_HASH); + __type(key, int); + __type(value, struct map_value); + __uint(max_entries, 1); +} lru_pcpu_hash_map SEC(".maps"); + +struct cgrp_ls_map { + __uint(type, BPF_MAP_TYPE_CGRP_STORAGE); + __uint(map_flags, BPF_F_NO_PREALLOC); + __type(key, int); + __type(value, struct map_value); +} cgrp_ls_map SEC(".maps"); + +struct task_ls_map { + __uint(type, BPF_MAP_TYPE_TASK_STORAGE); + __uint(map_flags, BPF_F_NO_PREALLOC); + __type(key, int); + __type(value, struct map_value); +} task_ls_map SEC(".maps"); + +struct inode_ls_map { + __uint(type, BPF_MAP_TYPE_INODE_STORAGE); + __uint(map_flags, BPF_F_NO_PREALLOC); + __type(key, int); + __type(value, struct map_value); +} inode_ls_map SEC(".maps"); + +struct sk_ls_map { + __uint(type, BPF_MAP_TYPE_SK_STORAGE); + __uint(map_flags, BPF_F_NO_PREALLOC); + __type(key, int); + __type(value, struct map_value); +} sk_ls_map SEC(".maps"); + #define DEFINE_MAP_OF_MAP(map_type, inner_map_type, name) \ struct { \ __uint(type, map_type); \ @@ -160,6 +217,58 @@ int test_map_kptr(struct __sk_buff *ctx) return 0; } +SEC("tp_btf/cgroup_mkdir") +int BPF_PROG(test_cgrp_map_kptr, struct cgroup *cgrp, const char *path) +{ + struct map_value *v; + + v = 
bpf_cgrp_storage_get(&cgrp_ls_map, cgrp, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE); + if (v) + test_kptr(v); + return 0; +} + +SEC("lsm/inode_unlink") +int BPF_PROG(test_task_map_kptr, struct inode *inode, struct dentry *victim) +{ + struct task_struct *task; + struct map_value *v; + + task = bpf_get_current_task_btf(); + if (!task) + return 0; + v = bpf_task_storage_get(&task_ls_map, task, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE); + if (v) + test_kptr(v); + return 0; +} + +SEC("lsm/inode_unlink") +int BPF_PROG(test_inode_map_kptr, struct inode *inode, struct dentry *victim) +{ + struct map_value *v; + + v = bpf_inode_storage_get(&inode_ls_map, inode, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE); + if (v) + test_kptr(v); + return 0; +} + +SEC("tc") +int test_sk_map_kptr(struct __sk_buff *ctx) +{ + struct map_value *v; + struct bpf_sock *sk; + + sk = ctx->sk; + if (!sk) + return 0; + v = bpf_sk_storage_get(&sk_ls_map, sk, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE); + if (v) + test_kptr(v); + return 0; +} + SEC("tc") int test_map_in_map_kptr(struct __sk_buff *ctx) { @@ -189,106 +298,266 @@ int test_map_in_map_kptr(struct __sk_buff *ctx) return 0; } -SEC("tc") -int test_map_kptr_ref(struct __sk_buff *ctx) +int ref = 1; + +static __always_inline +int test_map_kptr_ref_pre(struct map_value *v) { struct prog_test_ref_kfunc *p, *p_st; unsigned long arg = 0; - struct map_value *v; - int key = 0, ret; + int ret; p = bpf_kfunc_call_test_acquire(&arg); if (!p) return 1; + ref++; p_st = p->next; - if (p_st->cnt.refs.counter != 2) { + if (p_st->cnt.refs.counter != ref) { ret = 2; goto end; } - v = bpf_map_lookup_elem(&array_map, &key); - if (!v) { - ret = 3; - goto end; - } - p = bpf_kptr_xchg(&v->ref_ptr, p); if (p) { - ret = 4; + ret = 3; goto end; } - if (p_st->cnt.refs.counter != 2) - return 5; + if (p_st->cnt.refs.counter != ref) + return 4; p = bpf_kfunc_call_test_kptr_get(&v->ref_ptr, 0, 0); if (!p) - return 6; - if (p_st->cnt.refs.counter != 3) { - ret = 7; + return 5; + ref++; + if (p_st->cnt.refs.counter != ref) { + ret = 6; goto end; } bpf_kfunc_call_test_release(p); - if (p_st->cnt.refs.counter != 2) - return 8; + ref--; + if (p_st->cnt.refs.counter != ref) + return 7; p = bpf_kptr_xchg(&v->ref_ptr, NULL); if (!p) - return 9; + return 8; bpf_kfunc_call_test_release(p); - if (p_st->cnt.refs.counter != 1) - return 10; + ref--; + if (p_st->cnt.refs.counter != ref) + return 9; p = bpf_kfunc_call_test_acquire(&arg); if (!p) - return 11; + return 10; + ref++; p = bpf_kptr_xchg(&v->ref_ptr, p); if (p) { - ret = 12; + ret = 11; goto end; } - if (p_st->cnt.refs.counter != 2) - return 13; + if (p_st->cnt.refs.counter != ref) + return 12; /* Leave in map */ return 0; end: + ref--; bpf_kfunc_call_test_release(p); return ret; } -SEC("tc") -int test_map_kptr_ref2(struct __sk_buff *ctx) +static __always_inline +int test_map_kptr_ref_post(struct map_value *v) { struct prog_test_ref_kfunc *p, *p_st; - struct map_value *v; - int key = 0; - - v = bpf_map_lookup_elem(&array_map, &key); - if (!v) - return 1; p_st = v->ref_ptr; - if (!p_st || p_st->cnt.refs.counter != 2) - return 2; + if (!p_st || p_st->cnt.refs.counter != ref) + return 1; p = bpf_kptr_xchg(&v->ref_ptr, NULL); if (!p) - return 3; - if (p_st->cnt.refs.counter != 2) { + return 2; + if (p_st->cnt.refs.counter != ref) { bpf_kfunc_call_test_release(p); - return 4; + return 3; } p = bpf_kptr_xchg(&v->ref_ptr, p); if (p) { bpf_kfunc_call_test_release(p); - return 5; + return 4; } - if (p_st->cnt.refs.counter != 2) - return 6; + if (p_st->cnt.refs.counter != ref) + return 
5; + + return 0; +} + +#define TEST(map) \ + v = bpf_map_lookup_elem(&map, &key); \ + if (!v) \ + return -1; \ + ret = test_map_kptr_ref_pre(v); \ + if (ret) \ + return ret; + +#define TEST_PCPU(map) \ + v = bpf_map_lookup_percpu_elem(&map, &key, 0); \ + if (!v) \ + return -1; \ + ret = test_map_kptr_ref_pre(v); \ + if (ret) \ + return ret; + +SEC("tc") +int test_map_kptr_ref1(struct __sk_buff *ctx) +{ + struct map_value *v, val = {}; + int key = 0, ret; + + bpf_map_update_elem(&hash_map, &key, &val, 0); + bpf_map_update_elem(&hash_malloc_map, &key, &val, 0); + bpf_map_update_elem(&lru_hash_map, &key, &val, 0); + + bpf_map_update_elem(&pcpu_hash_map, &key, &val, 0); + bpf_map_update_elem(&pcpu_hash_malloc_map, &key, &val, 0); + bpf_map_update_elem(&lru_pcpu_hash_map, &key, &val, 0); + + TEST(array_map); + TEST(hash_map); + TEST(hash_malloc_map); + TEST(lru_hash_map); + + TEST_PCPU(pcpu_array_map); + TEST_PCPU(pcpu_hash_map); + TEST_PCPU(pcpu_hash_malloc_map); + TEST_PCPU(lru_pcpu_hash_map); + + return 0; +} + +#undef TEST +#undef TEST_PCPU + +#define TEST(map) \ + v = bpf_map_lookup_elem(&map, &key); \ + if (!v) \ + return -1; \ + ret = test_map_kptr_ref_post(v); \ + if (ret) \ + return ret; + +#define TEST_PCPU(map) \ + v = bpf_map_lookup_percpu_elem(&map, &key, 0); \ + if (!v) \ + return -1; \ + ret = test_map_kptr_ref_post(v); \ + if (ret) \ + return ret; + +SEC("tc") +int test_map_kptr_ref2(struct __sk_buff *ctx) +{ + struct map_value *v; + int key = 0, ret; + + TEST(array_map); + TEST(hash_map); + TEST(hash_malloc_map); + TEST(lru_hash_map); + + TEST_PCPU(pcpu_array_map); + TEST_PCPU(pcpu_hash_map); + TEST_PCPU(pcpu_hash_malloc_map); + TEST_PCPU(lru_pcpu_hash_map); return 0; } +#undef TEST +#undef TEST_PCPU + +SEC("tc") +int test_map_kptr_ref3(struct __sk_buff *ctx) +{ + struct prog_test_ref_kfunc *p; + unsigned long sp = 0; + + p = bpf_kfunc_call_test_acquire(&sp); + if (!p) + return 1; + ref++; + if (p->cnt.refs.counter != ref) { + bpf_kfunc_call_test_release(p); + return 2; + } + bpf_kfunc_call_test_release(p); + ref--; + return 0; +} + +SEC("fmod_ret/bpf_modify_return_test") +int BPF_PROG(test_ls_map_kptr_ref1, int a, int *b) +{ + struct task_struct *current; + struct map_value *v; + int ret; + + current = bpf_get_current_task_btf(); + if (!current) + return 100; + v = bpf_task_storage_get(&task_ls_map, current, NULL, 0); + if (v) + return 150; + v = bpf_task_storage_get(&task_ls_map, current, NULL, BPF_LOCAL_STORAGE_GET_F_CREATE); + if (!v) + return 200; + ret = test_map_kptr_ref_pre(v); + if (ret) + return ret; + return 9000; +} + +SEC("fmod_ret/bpf_modify_return_test") +int BPF_PROG(test_ls_map_kptr_ref2, int a, int *b) +{ + struct task_struct *current; + struct map_value *v; + int ret; + + current = bpf_get_current_task_btf(); + if (!current) + return 100; + v = bpf_task_storage_get(&task_ls_map, current, NULL, 0); + if (!v) + return 200; + ret = test_map_kptr_ref_post(v); + if (ret) + return ret; + return 9000; +} + +SEC("fmod_ret/bpf_modify_return_test") +int BPF_PROG(test_ls_map_kptr_ref_del, int a, int *b) +{ + struct task_struct *current; + struct map_value *v; + int ret; + + current = bpf_get_current_task_btf(); + if (!current) + return 100; + v = bpf_task_storage_get(&task_ls_map, current, NULL, 0); + if (!v) + return 200; + if (!v->ref_ptr) + return 300; + ret = bpf_task_storage_delete(&task_ls_map, current); + if (ret) + return ret; + return 9000; +} + char _license[] SEC("license") = "GPL";
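For review convenience, and not part of the patch itself: the delete-then-recheck sequence repeated in test_map_kptr_success() above boils down to the helper sketched below. The helper name check_sync_release() is made up for illustration, and the sketch assumes the selftest's usual headers plus the map_kptr skeleton added by this patch.

	/*
	 * Illustrative only (not in the diff): one round of "synchronously
	 * free the kptr, then let the BPF side verify the refcount".
	 */
	#include <test_progs.h>
	#include <network_helpers.h>
	#include "map_kptr.skel.h"

	static int check_sync_release(struct map_kptr *skel, struct bpf_map *map, int key)
	{
		LIBBPF_OPTS(bpf_test_run_opts, opts,
			.data_in = &pkt_v4,
			.data_size_in = sizeof(pkt_v4),
			.repeat = 1,
		);
		int err;

		/* Deleting the element synchronously frees the kptr stored in it. */
		err = bpf_map__delete_elem(map, &key, sizeof(key), 0);
		if (err)
			return err;
		/* The expected refcount mirrored in the BPF object drops by one. */
		skel->data->ref--;
		/* test_map_kptr_ref3 acquires a fresh reference and checks that the
		 * kernel-side counter now matches the lowered expectation.
		 */
		err = bpf_prog_test_run_opts(bpf_program__fd(skel->progs.test_map_kptr_ref3), &opts);
		return err ?: opts.retval;
	}

Called as, e.g., check_sync_release(skel, skel->maps.pcpu_hash_map, 0), it would stand in for one of the delete/ref--/test_run triplets above; the patch keeps those steps open-coded so every map gets its own assertion messages in the test log.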