From patchwork Tue Nov  8 14:05:59 2022
X-Patchwork-Submitter: Toke Høiland-Jørgensen
X-Patchwork-Id: 13036337
X-Patchwork-Delegate: bpf@iogearbox.net
From: Toke Høiland-Jørgensen
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Toke Høiland-Jørgensen, Song Liu, Stanislav Fomichev,
    netdev@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH bpf-next v3 1/3] dev: Move received_rps counter next to RPS members in softnet data
Date: Tue,  8 Nov 2022 15:05:59 +0100
Message-Id: <20221108140601.149971-2-toke@redhat.com>
In-Reply-To: <20221108140601.149971-1-toke@redhat.com>
References: <20221108140601.149971-1-toke@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

Move the received_rps counter value next to the other RPS-related members
in softnet_data. This closes two four-byte holes in the structure, making
room for another pointer in the first two cache lines without bumping the
xmit struct to its own cache line.

Acked-by: Song Liu
Reviewed-by: Stanislav Fomichev
Signed-off-by: Toke Høiland-Jørgensen
---
 include/linux/netdevice.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 4b5052db978f..31c53d409743 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3114,7 +3114,6 @@ struct softnet_data {
 	/* stats */
 	unsigned int		processed;
 	unsigned int		time_squeeze;
-	unsigned int		received_rps;
 #ifdef CONFIG_RPS
 	struct softnet_data	*rps_ipi_list;
 #endif
@@ -3147,6 +3146,7 @@ struct softnet_data {
 	unsigned int		cpu;
 	unsigned int		input_queue_tail;
 #endif
+	unsigned int		received_rps;
 	unsigned int		dropped;
 	struct sk_buff_head	input_pkt_queue;
 	struct napi_struct	backlog;
From patchwork Tue Nov  8 14:06:00 2022
X-Patchwork-Submitter: Toke Høiland-Jørgensen
X-Patchwork-Id: 13036338
X-Patchwork-Delegate: bpf@iogearbox.net
From: Toke Høiland-Jørgensen
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
    KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
    "David S. Miller", Jakub Kicinski, Jesper Dangaard Brouer,
    Björn Töpel, Magnus Karlsson, Maciej Fijalkowski, Jonathan Lemon
Cc: Toke Høiland-Jørgensen, Eric Dumazet, Paolo Abeni,
    bpf@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH bpf-next v3 2/3] bpf: Expand map key argument of bpf_redirect_map to u64
Date: Tue,  8 Nov 2022 15:06:00 +0100
Message-Id: <20221108140601.149971-3-toke@redhat.com>
In-Reply-To: <20221108140601.149971-1-toke@redhat.com>
References: <20221108140601.149971-1-toke@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

For queueing packets in XDP we want to add a new redirect map type with
support for 64-bit indexes. To prepare for this, expand the width of the
'key' argument to the bpf_redirect_map() helper. Since BPF registers are
always 64-bit, this should be safe to do after the fact.

Acked-by: Song Liu
Reviewed-by: Stanislav Fomichev
Signed-off-by: Toke Høiland-Jørgensen
---
 include/linux/bpf.h      |  2 +-
 include/linux/filter.h   | 12 ++++++------
 include/uapi/linux/bpf.h |  2 +-
 kernel/bpf/cpumap.c      |  4 ++--
 kernel/bpf/devmap.c      |  4 ++--
 kernel/bpf/verifier.c    |  2 +-
 net/core/filter.c        |  4 ++--
 net/xdp/xskmap.c         |  4 ++--
 8 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 798aec816970..863a5db1e370 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -135,7 +135,7 @@ struct bpf_map_ops {
 	struct bpf_local_storage __rcu ** (*map_owner_storage_ptr)(void *owner);
 
 	/* Misc helpers.*/
-	int (*map_redirect)(struct bpf_map *map, u32 ifindex, u64 flags);
+	int (*map_redirect)(struct bpf_map *map, u64 key, u64 flags);
 
 	/* map_meta_equal must be implemented for maps that can be
 	 * used as an inner map.  It is a runtime check to ensure
diff --git a/include/linux/filter.h b/include/linux/filter.h
index efc42a6e3aed..e6c1c277ffdc 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -643,13 +643,13 @@ struct bpf_nh_params {
 };
 
 struct bpf_redirect_info {
-	u32 flags;
-	u32 tgt_index;
+	u64 tgt_index;
 	void *tgt_value;
 	struct bpf_map *map;
+	u32 flags;
+	u32 kern_flags;
 	u32 map_id;
 	enum bpf_map_type map_type;
-	u32 kern_flags;
 	struct bpf_nh_params nh;
 };
 
@@ -1504,7 +1504,7 @@ static inline bool bpf_sk_lookup_run_v6(struct net *net, int protocol,
 }
 #endif /* IS_ENABLED(CONFIG_IPV6) */
 
-static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u32 ifindex,
+static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u64 index,
 						  u64 flags, const u64 flag_mask,
 						  void *lookup_elem(struct bpf_map *map, u32 key))
 {
@@ -1515,7 +1515,7 @@ static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u32 ifind
 	if (unlikely(flags & ~(action_mask | flag_mask)))
 		return XDP_ABORTED;
 
-	ri->tgt_value = lookup_elem(map, ifindex);
+	ri->tgt_value = lookup_elem(map, index);
 	if (unlikely(!ri->tgt_value) && !(flags & BPF_F_BROADCAST)) {
 		/* If the lookup fails we want to clear out the state in the
 		 * redirect_info struct completely, so that if an eBPF program
@@ -1527,7 +1527,7 @@ static __always_inline int __bpf_xdp_redirect_map(struct bpf_map *map, u32 ifind
 		return flags & action_mask;
 	}
 
-	ri->tgt_index = ifindex;
+	ri->tgt_index = index;
 	ri->map_id = map->id;
 	ri->map_type = map->map_type;
 
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 94659f6b3395..9fd462e2f8c0 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -2647,7 +2647,7 @@ union bpf_attr {
  *	Return
  *		0 on success, or a negative error in case of failure.
  *
- * long bpf_redirect_map(struct bpf_map *map, u32 key, u64 flags)
+ * long bpf_redirect_map(struct bpf_map *map, u64 key, u64 flags)
  *	Description
  *		Redirect the packet to the endpoint referenced by *map* at
  *		index *key*. Depending on its type, this *map* can contain
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index bb03fdba73bb..fa9dc11d5d64 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -664,9 +664,9 @@ static int cpu_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
 	return 0;
 }
 
-static int cpu_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+static int cpu_map_redirect(struct bpf_map *map, u64 index, u64 flags)
 {
-	return __bpf_xdp_redirect_map(map, ifindex, flags, 0,
+	return __bpf_xdp_redirect_map(map, index, flags, 0,
 				      __cpu_map_lookup_elem);
 }
 
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index f9a87dcc5535..d01e4c55b376 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -992,14 +992,14 @@ static int dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value,
 					 map, key, value, map_flags);
 }
 
-static int dev_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+static int dev_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
 {
 	return __bpf_xdp_redirect_map(map, ifindex, flags,
 				      BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS,
 				      __dev_map_lookup_elem);
 }
 
-static int dev_hash_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+static int dev_hash_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
 {
 	return __bpf_xdp_redirect_map(map, ifindex, flags,
 				      BPF_F_BROADCAST | BPF_F_EXCLUDE_INGRESS,
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d3b75aa0c54d..f81b70a8d4ea 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -14366,7 +14366,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		BUILD_BUG_ON(!__same_type(ops->map_peek_elem,
 			     (int (*)(struct bpf_map *map, void *value))NULL));
 		BUILD_BUG_ON(!__same_type(ops->map_redirect,
-			     (int (*)(struct bpf_map *map, u32 ifindex, u64 flags))NULL));
+			     (int (*)(struct bpf_map *map, u64 index, u64 flags))NULL));
 		BUILD_BUG_ON(!__same_type(ops->map_for_each_callback,
 			     (int (*)(struct bpf_map *map,
 				      bpf_callback_t callback_fn,
diff --git a/net/core/filter.c b/net/core/filter.c
index cb3b635e35be..d062f85794f7 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4414,10 +4414,10 @@ static const struct bpf_func_proto bpf_xdp_redirect_proto = {
 	.arg2_type      = ARG_ANYTHING,
 };
 
-BPF_CALL_3(bpf_xdp_redirect_map, struct bpf_map *, map, u32, ifindex,
+BPF_CALL_3(bpf_xdp_redirect_map, struct bpf_map *, map, u64, key,
 	   u64, flags)
 {
-	return map->ops->map_redirect(map, ifindex, flags);
+	return map->ops->map_redirect(map, key, flags);
 }
 
 static const struct bpf_func_proto bpf_xdp_redirect_map_proto = {
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index acc8e52a4f5f..771d0fa90ef5 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -231,9 +231,9 @@ static int xsk_map_delete_elem(struct bpf_map *map, void *key)
 	return 0;
 }
 
-static int xsk_map_redirect(struct bpf_map *map, u32 ifindex, u64 flags)
+static int xsk_map_redirect(struct bpf_map *map, u64 index, u64 flags)
 {
-	return __bpf_xdp_redirect_map(map, ifindex, flags, 0,
+	return __bpf_xdp_redirect_map(map, index, flags, 0,
 				      __xsk_map_lookup_elem);
 }
From patchwork Tue Nov  8 14:06:01 2022
X-Patchwork-Submitter: Toke Høiland-Jørgensen
X-Patchwork-Id: 13036340
X-Patchwork-Delegate: bpf@iogearbox.net
From: Toke Høiland-Jørgensen
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
    KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, bpf@vger.kernel.org,
    netdev@vger.kernel.org
Subject: [PATCH bpf-next v3 3/3] bpf: Use 64-bit return value for bpf_prog_run
Date: Tue,  8 Nov 2022 15:06:01 +0100
Message-Id: <20221108140601.149971-4-toke@redhat.com>
In-Reply-To: <20221108140601.149971-1-toke@redhat.com>
References: <20221108140601.149971-1-toke@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

From: Kumar Kartikeya Dwivedi

The BPF ABI always uses a 64-bit return value, but so far __bpf_prog_run
and its higher-level wrappers have truncated the return value to 32 bits.
We want to be able to introduce a new BPF program type that returns a
PTR_TO_BTF_ID or NULL from the BPF program to the caller context in the
kernel. To be able to use this returned pointer value, the bpf_prog_run
invocation needs to be able to return a 64-bit value, so update the
definitions to allow this.

To avoid code churn in the whole kernel, we let the compiler handle
truncation normally, and allow new call sites to utilize the 64-bit
return value by receiving it as a u64. Apart from the generic
bpf_prog_run wrappers, only the cgroup_bpf_lsm functions have been
modified, as their prototype needs to match bpf_func_t to be called
generically; no other call sites have been modified yet. That will be
done when a specific call site requires handling of the 64-bit return
value from the BPF program.
Acked-by: Song Liu
Reviewed-by: Stanislav Fomichev
Signed-off-by: Kumar Kartikeya Dwivedi
Signed-off-by: Toke Høiland-Jørgensen
---
 include/linux/bpf-cgroup.h | 12 ++++++------
 include/linux/bpf.h        | 22 +++++++++++-----------
 include/linux/filter.h     | 30 +++++++++++++++---------------
 kernel/bpf/cgroup.c        | 12 ++++++------
 kernel/bpf/core.c          | 14 +++++++-------
 kernel/bpf/offload.c       |  4 ++--
 net/packet/af_packet.c     |  7 +++++--
 7 files changed, 52 insertions(+), 49 deletions(-)

diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 57e9e109257e..85ae187e5d41 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -23,12 +23,12 @@ struct ctl_table;
 struct ctl_table_header;
 struct task_struct;
 
-unsigned int __cgroup_bpf_run_lsm_sock(const void *ctx,
-				       const struct bpf_insn *insn);
-unsigned int __cgroup_bpf_run_lsm_socket(const void *ctx,
-					 const struct bpf_insn *insn);
-unsigned int __cgroup_bpf_run_lsm_current(const void *ctx,
-					  const struct bpf_insn *insn);
+u64 __cgroup_bpf_run_lsm_sock(const void *ctx,
+			      const struct bpf_insn *insn);
+u64 __cgroup_bpf_run_lsm_socket(const void *ctx,
+				const struct bpf_insn *insn);
+u64 __cgroup_bpf_run_lsm_current(const void *ctx,
+				 const struct bpf_insn *insn);
 
 #ifdef CONFIG_CGROUP_BPF
 
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 863a5db1e370..8f1190a0fc62 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -59,8 +59,8 @@ typedef u64 (*bpf_callback_t)(u64, u64, u64, u64, u64);
 typedef int (*bpf_iter_init_seq_priv_t)(void *private_data,
 					struct bpf_iter_aux_info *aux);
 typedef void (*bpf_iter_fini_seq_priv_t)(void *private_data);
-typedef unsigned int (*bpf_func_t)(const void *,
-				   const struct bpf_insn *);
+typedef u64 (*bpf_func_t)(const void *,
+			  const struct bpf_insn *);
 struct bpf_iter_seq_info {
 	const struct seq_operations *seq_ops;
 	bpf_iter_init_seq_priv_t init_seq_private;
@@ -1004,7 +1004,7 @@ struct bpf_dispatcher {
 	struct bpf_ksym ksym;
 };
 
-static __always_inline __nocfi unsigned int bpf_dispatcher_nop_func(
+static __always_inline __nocfi u64 bpf_dispatcher_nop_func(
 	const void *ctx,
 	const struct bpf_insn *insnsi,
 	bpf_func_t bpf_func)
@@ -1049,7 +1049,7 @@ int __init bpf_arch_init_dispatcher_early(void *ip);
 
 #define DEFINE_BPF_DISPATCHER(name)					\
 	notrace BPF_DISPATCHER_ATTRIBUTES				\
-	noinline __nocfi unsigned int bpf_dispatcher_##name##_func(	\
+	noinline __nocfi u64 bpf_dispatcher_##name##_func(		\
 		const void *ctx,					\
 		const struct bpf_insn *insnsi,				\
 		bpf_func_t bpf_func)					\
@@ -1062,7 +1062,7 @@ int __init bpf_arch_init_dispatcher_early(void *ip);
 	BPF_DISPATCHER_INIT_CALL(bpf_dispatcher_##name);
 
 #define DECLARE_BPF_DISPATCHER(name)					\
-	unsigned int bpf_dispatcher_##name##_func(			\
+	u64 bpf_dispatcher_##name##_func(				\
 		const void *ctx,					\
 		const struct bpf_insn *insnsi,				\
 		bpf_func_t bpf_func);					\
@@ -1265,7 +1265,7 @@ struct bpf_prog {
 	u8			tag[BPF_TAG_SIZE];
 	struct bpf_prog_stats __percpu *stats;
 	int __percpu		*active;
-	unsigned int		(*bpf_func)(const void *ctx,
+	u64			(*bpf_func)(const void *ctx,
 					    const struct bpf_insn *insn);
 	struct bpf_prog_aux	*aux;		/* Auxiliary fields */
 	struct sock_fprog_kern	*orig_prog;	/* Original BPF program */
@@ -1619,9 +1619,9 @@ static inline void bpf_reset_run_ctx(struct bpf_run_ctx *old_ctx)
 /* BPF program asks to set CN on the packet. */
 #define BPF_RET_SET_CN (1 << 0)
 
-typedef u32 (*bpf_prog_run_fn)(const struct bpf_prog *prog, const void *ctx);
+typedef u64 (*bpf_prog_run_fn)(const struct bpf_prog *prog, const void *ctx);
 
-static __always_inline u32
+static __always_inline u64
 bpf_prog_run_array(const struct bpf_prog_array *array,
 		   const void *ctx, bpf_prog_run_fn run_prog)
 {
@@ -1629,7 +1629,7 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
 	const struct bpf_prog *prog;
 	struct bpf_run_ctx *old_run_ctx;
 	struct bpf_trace_run_ctx run_ctx;
-	u32 ret = 1;
+	u64 ret = 1;
 
 	RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "no rcu lock held");
@@ -1659,7 +1659,7 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
  * section and disable preemption for that program alone, so it can access
  * rcu-protected dynamically sized maps.
  */
-static __always_inline u32
+static __always_inline u64
 bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
 			     const void *ctx, bpf_prog_run_fn run_prog)
 {
@@ -1668,7 +1668,7 @@ bpf_prog_run_array_sleepable(const struct bpf_prog_array __rcu *array_rcu,
 	const struct bpf_prog_array *array;
 	struct bpf_run_ctx *old_run_ctx;
 	struct bpf_trace_run_ctx run_ctx;
-	u32 ret = 1;
+	u64 ret = 1;
 
 	might_fault();
diff --git a/include/linux/filter.h b/include/linux/filter.h
index e6c1c277ffdc..d67d3e6a2f58 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -573,16 +573,16 @@ extern int (*nfct_btf_struct_access)(struct bpf_verifier_log *log, const struct
 				     enum bpf_access_type atype, u32 *next_btf_id,
 				     enum bpf_type_flag *flag);
 
-typedef unsigned int (*bpf_dispatcher_fn)(const void *ctx,
-					  const struct bpf_insn *insnsi,
-					  unsigned int (*bpf_func)(const void *,
-								   const struct bpf_insn *));
+typedef u64 (*bpf_dispatcher_fn)(const void *ctx,
+				 const struct bpf_insn *insnsi,
+				 u64 (*bpf_func)(const void *,
+						 const struct bpf_insn *));
 
-static __always_inline u32 __bpf_prog_run(const struct bpf_prog *prog,
+static __always_inline u64 __bpf_prog_run(const struct bpf_prog *prog,
 					  const void *ctx,
 					  bpf_dispatcher_fn dfunc)
 {
-	u32 ret;
+	u64 ret;
 
 	cant_migrate();
 	if (static_branch_unlikely(&bpf_stats_enabled_key)) {
@@ -602,7 +602,7 @@ static __always_inline u32 __bpf_prog_run(const struct bpf_prog *prog,
 	return ret;
 }
 
-static __always_inline u32 bpf_prog_run(const struct bpf_prog *prog, const void *ctx)
+static __always_inline u64 bpf_prog_run(const struct bpf_prog *prog, const void *ctx)
 {
 	return __bpf_prog_run(prog, ctx, bpf_dispatcher_nop_func);
 }
@@ -615,10 +615,10 @@ static __always_inline u32 bpf_prog_run(const struct bpf_prog *prog, const void
  * invocation of a BPF program does not require reentrancy protection
  * against a BPF program which is invoked from a preempting task.
  */
-static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
+static inline u64 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,
 					  const void *ctx)
 {
-	u32 ret;
+	u64 ret;
 
 	migrate_disable();
 	ret = bpf_prog_run(prog, ctx);
@@ -714,13 +714,13 @@ static inline u8 *bpf_skb_cb(const struct sk_buff *skb)
 }
 
 /* Must be invoked with migration disabled */
-static inline u32 __bpf_prog_run_save_cb(const struct bpf_prog *prog,
+static inline u64 __bpf_prog_run_save_cb(const struct bpf_prog *prog,
 					 const void *ctx)
 {
 	const struct sk_buff *skb = ctx;
 	u8 *cb_data = bpf_skb_cb(skb);
 	u8 cb_saved[BPF_SKB_CB_LEN];
-	u32 res;
+	u64 res;
 
 	if (unlikely(prog->cb_access)) {
 		memcpy(cb_saved, cb_data, sizeof(cb_saved));
@@ -735,10 +735,10 @@ static inline u32 __bpf_prog_run_save_cb(const struct bpf_prog *prog,
 	return res;
 }
 
-static inline u32 bpf_prog_run_save_cb(const struct bpf_prog *prog,
+static inline u64 bpf_prog_run_save_cb(const struct bpf_prog *prog,
 				       struct sk_buff *skb)
 {
-	u32 res;
+	u64 res;
 
 	migrate_disable();
 	res = __bpf_prog_run_save_cb(prog, skb);
@@ -746,11 +746,11 @@ static inline u32 bpf_prog_run_save_cb(const struct bpf_prog *prog,
 	return res;
 }
 
-static inline u32 bpf_prog_run_clear_cb(const struct bpf_prog *prog,
+static inline u64 bpf_prog_run_clear_cb(const struct bpf_prog *prog,
 					struct sk_buff *skb)
 {
 	u8 *cb_data = bpf_skb_cb(skb);
-	u32 res;
+	u64 res;
 
 	if (unlikely(prog->cb_access))
 		memset(cb_data, 0, BPF_SKB_CB_LEN);
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index bf2fdb33fb31..b78a51513431 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -63,8 +63,8 @@ bpf_prog_run_array_cg(const struct cgroup_bpf *cgrp,
 	return run_ctx.retval;
 }
 
-unsigned int __cgroup_bpf_run_lsm_sock(const void *ctx,
-				       const struct bpf_insn *insn)
+u64 __cgroup_bpf_run_lsm_sock(const void *ctx,
+			      const struct bpf_insn *insn)
 {
 	const struct bpf_prog *shim_prog;
 	struct sock *sk;
@@ -85,8 +85,8 @@ unsigned int __cgroup_bpf_run_lsm_sock(const void *ctx,
 	return ret;
 }
 
-unsigned int __cgroup_bpf_run_lsm_socket(const void *ctx,
-					 const struct bpf_insn *insn)
+u64 __cgroup_bpf_run_lsm_socket(const void *ctx,
+				const struct bpf_insn *insn)
 {
 	const struct bpf_prog *shim_prog;
 	struct socket *sock;
@@ -107,8 +107,8 @@ unsigned int __cgroup_bpf_run_lsm_socket(const void *ctx,
 	return ret;
 }
 
-unsigned int __cgroup_bpf_run_lsm_current(const void *ctx,
-					  const struct bpf_insn *insn)
+u64 __cgroup_bpf_run_lsm_current(const void *ctx,
+				 const struct bpf_insn *insn)
 {
 	const struct bpf_prog *shim_prog;
 	struct cgroup *cgrp;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 9c16338bcbe8..bc4ca1e6a990 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2004,7 +2004,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 #define PROG_NAME(stack_size) __bpf_prog_run##stack_size
 #define DEFINE_BPF_PROG_RUN(stack_size) \
-static unsigned int PROG_NAME(stack_size)(const void *ctx, const struct bpf_insn *insn) \
+static u64 PROG_NAME(stack_size)(const void *ctx, const struct bpf_insn *insn) \
 { \
 	u64 stack[stack_size / sizeof(u64)]; \
 	u64 regs[MAX_BPF_EXT_REG] = {}; \
@@ -2048,8 +2048,8 @@ EVAL4(DEFINE_BPF_PROG_RUN_ARGS, 416, 448, 480, 512);
 
 #define PROG_NAME_LIST(stack_size) PROG_NAME(stack_size),
 
-static unsigned int (*interpreters[])(const void *ctx,
-				      const struct bpf_insn *insn) = {
+static u64 (*interpreters[])(const void *ctx,
+			     const struct bpf_insn *insn) = {
 EVAL6(PROG_NAME_LIST, 32, 64, 96, 128, 160, 192)
 EVAL6(PROG_NAME_LIST, 224, 256, 288, 320, 352, 384)
 EVAL4(PROG_NAME_LIST, 416, 448, 480, 512)
@@ -2074,8 +2074,8 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
 }
 
 #else
-static unsigned int __bpf_prog_ret0_warn(const void *ctx,
-					 const struct bpf_insn *insn)
+static u64 __bpf_prog_ret0_warn(const void *ctx,
+				const struct bpf_insn *insn)
 {
 	/* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
 	 * is not working properly, so warn about it!
@@ -2210,8 +2210,8 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
 }
 EXPORT_SYMBOL_GPL(bpf_prog_select_runtime);
 
-static unsigned int __bpf_prog_ret1(const void *ctx,
-				    const struct bpf_insn *insn)
+static u64 __bpf_prog_ret1(const void *ctx,
+			   const struct bpf_insn *insn)
 {
 	return 1;
 }
diff --git a/kernel/bpf/offload.c b/kernel/bpf/offload.c
index 13e4efc971e6..d6a37ab87511 100644
--- a/kernel/bpf/offload.c
+++ b/kernel/bpf/offload.c
@@ -246,8 +246,8 @@ static int bpf_prog_offload_translate(struct bpf_prog *prog)
 	return ret;
 }
 
-static unsigned int bpf_prog_warn_on_exec(const void *ctx,
-					  const struct bpf_insn *insn)
+static u64 bpf_prog_warn_on_exec(const void *ctx,
+				 const struct bpf_insn *insn)
 {
 	WARN(1, "attempt to execute device eBPF program on the host!");
 	return 0;
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 44f20cf8a0c0..2f61f2889e52 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -1444,8 +1444,11 @@ static unsigned int fanout_demux_bpf(struct packet_fanout *f,
 
 	rcu_read_lock();
 	prog = rcu_dereference(f->bpf_prog);
-	if (prog)
-		ret = bpf_prog_run_clear_cb(prog, skb) % num;
+	if (prog) {
+		ret = bpf_prog_run_clear_cb(prog, skb);
+		/* For some architectures, we need to do modulus in 32-bit width */
+		ret %= num;
+	}
 	rcu_read_unlock();
 
 	return ret;