From patchwork Fri Jul 2 11:18:21 2021
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    Eric Leblond, bpf@vger.kernel.org
Subject: [PATCH net-next v6 1/5] net: core: split out code to run generic XDP prog
Date: Fri, 2 Jul 2021 16:48:21 +0530
Message-Id: <20210702111825.491065-2-memxor@gmail.com>
In-Reply-To: <20210702111825.491065-1-memxor@gmail.com>
References: <20210702111825.491065-1-memxor@gmail.com>

This helper can later be utilized in code that runs cpumap and devmap
programs in generic redirect mode, and adjusts the skb based on changes
made to the xdp_buff.

When the program returns XDP_REDIRECT/XDP_TX, the helper invokes
__skb_push, so whenever a generic redirect path subsequently invokes a
devmap/cpumap prog (if one is set), it must __skb_pull again, since the
mac header is expected to be pulled at that point.

This change also drops the skb_reset_mac_len call after do_xdp_generic:
the mac_header and network_header are advanced by the same offset, so
the difference (mac_len) remains constant.

Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/netdevice.h |  2 +
 net/core/dev.c            | 84 ++++++++++++++++++++++++---------------
 2 files changed, 55 insertions(+), 31 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index be1dcceda5e4..90472ea70db2 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3984,6 +3984,8 @@ static inline void dev_consume_skb_any(struct sk_buff *skb)
 	__dev_kfree_skb_any(skb, SKB_REASON_CONSUMED);
 }
 
+u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
+			     struct bpf_prog *xdp_prog);
 void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog);
 int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff *skb);
 int netif_rx(struct sk_buff *skb);
diff --git a/net/core/dev.c b/net/core/dev.c
index 991d09b67bd9..ad5ab33cbd39 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4740,45 +4740,18 @@ static struct netdev_rx_queue *netif_get_rxqueue(struct sk_buff *skb)
 	return rxqueue;
 }
 
-static u32 netif_receive_generic_xdp(struct sk_buff *skb,
-				     struct xdp_buff *xdp,
-				     struct bpf_prog *xdp_prog)
+u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
+			     struct bpf_prog *xdp_prog)
 {
 	void *orig_data, *orig_data_end, *hard_start;
 	struct netdev_rx_queue *rxqueue;
-	u32 metalen, act = XDP_DROP;
 	bool orig_bcast, orig_host;
 	u32 mac_len, frame_sz;
 	__be16 orig_eth_type;
 	struct ethhdr *eth;
+	u32 metalen, act;
 	int off;
 
-	/* Reinjected packets coming from act_mirred or similar should
-	 * not get XDP generic processing.
-	 */
-	if (skb_is_redirected(skb))
-		return XDP_PASS;
-
-	/* XDP packets must be linear and must have sufficient headroom
-	 * of XDP_PACKET_HEADROOM bytes. This is the guarantee that also
-	 * native XDP provides, thus we need to do it here as well.
-	 */
-	if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
-	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
-		int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);
-		int troom = skb->tail + skb->data_len - skb->end;
-
-		/* In case we have to go down the path and also linearize,
-		 * then lets do the pskb_expand_head() work just once here.
-		 */
-		if (pskb_expand_head(skb,
-				     hroom > 0 ? ALIGN(hroom, NET_SKB_PAD) : 0,
-				     troom > 0 ? troom + 128 : 0, GFP_ATOMIC))
-			goto do_drop;
-		if (skb_linearize(skb))
-			goto do_drop;
-	}
-
 	/* The XDP program wants to see the packet starting at the MAC
 	 * header.
 	 */
@@ -4833,6 +4806,13 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 		skb->protocol = eth_type_trans(skb, skb->dev);
 	}
 
+	/* Redirect/Tx gives L2 packet, code that will reuse skb must __skb_pull
+	 * before calling us again on redirect path. We do not call do_redirect
+	 * as we leave that up to the caller.
+	 *
+	 * Caller is responsible for managing lifetime of skb (i.e. calling
+	 * kfree_skb in response to actions it cannot handle/XDP_DROP).
+	 */
 	switch (act) {
 	case XDP_REDIRECT:
 	case XDP_TX:
@@ -4843,6 +4823,49 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb,
 		if (metalen)
 			skb_metadata_set(skb, metalen);
 		break;
+	}
+
+	return act;
+}
+
+static u32 netif_receive_generic_xdp(struct sk_buff *skb,
+				     struct xdp_buff *xdp,
+				     struct bpf_prog *xdp_prog)
+{
+	u32 act = XDP_DROP;
+
+	/* Reinjected packets coming from act_mirred or similar should
+	 * not get XDP generic processing.
+	 */
+	if (skb_is_redirected(skb))
+		return XDP_PASS;
+
+	/* XDP packets must be linear and must have sufficient headroom
+	 * of XDP_PACKET_HEADROOM bytes. This is the guarantee that also
+	 * native XDP provides, thus we need to do it here as well.
+	 */
+	if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
+	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
+		int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);
+		int troom = skb->tail + skb->data_len - skb->end;
+
+		/* In case we have to go down the path and also linearize,
+		 * then lets do the pskb_expand_head() work just once here.
+		 */
+		if (pskb_expand_head(skb,
+				     hroom > 0 ? ALIGN(hroom, NET_SKB_PAD) : 0,
+				     troom > 0 ? troom + 128 : 0, GFP_ATOMIC))
+			goto do_drop;
+		if (skb_linearize(skb))
+			goto do_drop;
+	}
+
+	act = bpf_prog_run_generic_xdp(skb, xdp, xdp_prog);
+	switch (act) {
+	case XDP_REDIRECT:
+	case XDP_TX:
+	case XDP_PASS:
+		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
 		fallthrough;
@@ -5308,7 +5331,6 @@ static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc,
 			ret = NET_RX_DROP;
 			goto out;
 		}
-		skb_reset_mac_len(skb);
 	}
 
 	if (eth_type_vlan(skb->protocol)) {
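As a reading aid (not part of the series), here is a minimal sketch of the
caller contract the newly exported helper establishes. The function name
run_prog_on_skb() is hypothetical; only the push/pull and skb-ownership
rules come from the patch above:

#include <linux/filter.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical caller of the exported helper. bpf_prog_run_generic_xdp()
 * initializes the xdp_buff from the skb itself, and on XDP_REDIRECT/XDP_TX
 * it has already done __skb_push(skb, mac_len).
 */
static int run_prog_on_skb(struct sk_buff *skb, struct bpf_prog *prog)
{
	struct xdp_buff xdp;
	u32 act;

	act = bpf_prog_run_generic_xdp(skb, &xdp, prog);
	switch (act) {
	case XDP_PASS:
		return 0;
	case XDP_REDIRECT:
		/* Undo the helper's __skb_push before handing the skb to a
		 * path that expects the mac header to be pulled.
		 */
		__skb_pull(skb, skb->mac_len);
		return 0;
	default:
		/* The caller owns the skb: free it for XDP_DROP and for any
		 * verdict it cannot handle.
		 */
		kfree_skb(skb);
		return -EINVAL;
	}
}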
From patchwork Fri Jul 2 11:18:22 2021
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    Eric Leblond, bpf@vger.kernel.org
Subject: [PATCH net-next v6 2/5] bitops: add non-atomic bitops for pointers
Date: Fri, 2 Jul 2021 16:48:22 +0530
Message-Id: <20210702111825.491065-3-memxor@gmail.com>
In-Reply-To: <20210702111825.491065-1-memxor@gmail.com>
References: <20210702111825.491065-1-memxor@gmail.com>

cpumap needs to set, clear, and test the lowest bit in an skb pointer in
various places. To make these checks less noisy, add pointer-friendly
bitop macros that also do some typechecking to sanitize the argument.

These wrap the non-atomic bitops __set_bit, __clear_bit, and test_bit
for pointer arguments. The pointer's address has to be passed in, and it
is treated as an unsigned long *, since the width and representation of
a pointer and an unsigned long match on the targets Linux supports. The
macros are prefixed with a double underscore to indicate the lack of
atomicity.

Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bitops.h    | 50 +++++++++++++++++++++++++++++++++++++++
 include/linux/typecheck.h |  9 +++++++
 2 files changed, 59 insertions(+)

diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index 26bf15e6cd35..5e62e2383b7f 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -4,6 +4,7 @@
 
 #include <asm/types.h>
 #include <linux/bits.h>
+#include <linux/typecheck.h>
 
 #include <uapi/linux/kernel.h>
@@ -253,6 +254,55 @@ static __always_inline void __assign_bit(long nr, volatile unsigned long *addr,
 		__clear_bit(nr, addr);
 }
 
+/**
+ * __ptr_set_bit - Set bit in a pointer's value
+ * @nr: the bit to set
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *	void *p = foo();
+ *	__ptr_set_bit(bit, &p);
+ */
+#define __ptr_set_bit(nr, addr)                         \
+	({                                              \
+		typecheck_pointer(*(addr));             \
+		__set_bit(nr, (unsigned long *)(addr)); \
+	})
+
+/**
+ * __ptr_clear_bit - Clear bit in a pointer's value
+ * @nr: the bit to clear
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *	void *p = foo();
+ *	__ptr_clear_bit(bit, &p);
+ */
+#define __ptr_clear_bit(nr, addr)                         \
+	({                                                \
+		typecheck_pointer(*(addr));               \
+		__clear_bit(nr, (unsigned long *)(addr)); \
+	})
+
+/**
+ * __ptr_test_bit - Test bit in a pointer's value
+ * @nr: the bit to test
+ * @addr: the address of the pointer variable
+ *
+ * Example:
+ *	void *p = foo();
+ *	if (__ptr_test_bit(bit, &p)) {
+ *		...
+ *	} else {
+ *		...
+ *	}
+ */
+#define __ptr_test_bit(nr, addr)                       \
+	({                                             \
+		typecheck_pointer(*(addr));            \
+		test_bit(nr, (unsigned long *)(addr)); \
+	})
+
 #ifdef __KERNEL__
 
 #ifndef set_mask_bits
diff --git a/include/linux/typecheck.h b/include/linux/typecheck.h
index 20d310331eb5..46b15e2aaefb 100644
--- a/include/linux/typecheck.h
+++ b/include/linux/typecheck.h
@@ -22,4 +22,13 @@
 	(void)__tmp; \
 })
 
+/*
+ * Check at compile time that something is a pointer type.
+ */
+#define typecheck_pointer(x) \
+({	typeof(x) __dummy; \
+	(void)sizeof(*__dummy); \
+	1; \
+})
+
 #endif		/* TYPECHECK_H_INCLUDED */
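Since these macros only make sense inside the kernel tree, here is a
standalone userspace sketch of the same tagging idiom for experimentation.
It relies on exactly the assumption the commit message states, namely that
a pointer and an unsigned long have the same width and representation:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	void *p = malloc(64);			/* at least word aligned */
	unsigned long *slot = (unsigned long *)&p;

	/* Bit 0 is free for tagging precisely because allocations are
	 * never 1-byte aligned.
	 */
	assert(((unsigned long)p & 1UL) == 0);

	*slot |= 1UL;				/* ~ __ptr_set_bit(0, &p) */
	printf("tagged:  %lu\n", *slot & 1UL);	/* prints 1 */

	*slot &= ~1UL;				/* ~ __ptr_clear_bit(0, &p) */
	printf("cleared: %lu\n", *slot & 1UL);	/* prints 0 */

	free(p);				/* pointer is intact again */
	return 0;
}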
From patchwork Fri Jul 2 11:18:23 2021
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    Eric Leblond, bpf@vger.kernel.org
Subject: [PATCH net-next v6 3/5] bpf: cpumap: implement generic cpumap
Date: Fri, 2 Jul 2021 16:48:23 +0530
Message-Id: <20210702111825.491065-4-memxor@gmail.com>
In-Reply-To: <20210702111825.491065-1-memxor@gmail.com>
References: <20210702111825.491065-1-memxor@gmail.com>

This change implements CPUMAP redirect support for generic XDP programs.
The idea is to reuse the cpumap entry's queue that is used to push
native xdp frames, for redirecting the skb to a different CPU. This
matches native XDP behavior (in that RPS is invoked again for a packet
reinjected into the networking stack).

To be able to determine whether the incoming skb is from the driver or
from cpumap, we reuse the skb->redirected bit, which skips generic XDP
processing when set. To always make use of this, the CONFIG_NET_REDIRECT
guard on it has been lifted so that the bit is always available.

On the redirect side, we add the skb to the ptr_ring with its lowest bit
set to 1. This is safe because an skb pointer is never 1-byte aligned,
and it allows the kthread to distinguish xdp_frames from sk_buffs. On
consumption of the ptr_ring item, the lowest bit is cleared again. In
the end, the skb is simply added to the list that the kthread already
maintains for xdp_frames converted to skbs, and is then received again
via netif_receive_skb_list.

A bulking optimization for generic cpumap is left to a future patch.

Since cpumap entry progs are now supported, also remove the
corresponding check in generic_xdp_install.

Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
Acked-by: Jesper Dangaard Brouer
---
 include/linux/bpf.h    |   9 +++-
 include/linux/skbuff.h |  10 +---
 kernel/bpf/cpumap.c    | 116 ++++++++++++++++++++++++++++++++++-------
 net/core/dev.c         |   3 +-
 net/core/filter.c      |   6 ++-
 5 files changed, 114 insertions(+), 30 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f309fc1509f2..095aaa104c56 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1513,7 +1513,8 @@ bool dev_map_can_have_prog(struct bpf_map *map);
 void __cpu_map_flush(void);
 int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
 		    struct net_device *dev_rx);
-bool cpu_map_prog_allowed(struct bpf_map *map);
+int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
+			     struct sk_buff *skb);
 
 /* Return map's numa specified by userspace */
 static inline int bpf_map_attr_numa_node(const union bpf_attr *attr)
@@ -1710,6 +1711,12 @@ static inline int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu,
 	return 0;
 }
 
+static inline int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
+					   struct sk_buff *skb)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline bool cpu_map_prog_allowed(struct bpf_map *map)
 {
 	return false;
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b2db9cd9a73f..f19190820e63 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -863,8 +863,8 @@ struct sk_buff {
 	__u8			tc_skip_classify:1;
 	__u8			tc_at_ingress:1;
 #endif
-#ifdef CONFIG_NET_REDIRECT
 	__u8			redirected:1;
+#ifdef CONFIG_NET_REDIRECT
 	__u8			from_ingress:1;
 #endif
 #ifdef CONFIG_TLS_DEVICE
@@ -4664,17 +4664,13 @@ static inline __wsum lco_csum(struct sk_buff *skb)
 
 static inline bool skb_is_redirected(const struct sk_buff *skb)
 {
-#ifdef CONFIG_NET_REDIRECT
 	return skb->redirected;
-#else
-	return false;
-#endif
 }
 
 static inline void skb_set_redirected(struct sk_buff *skb, bool from_ingress)
 {
-#ifdef CONFIG_NET_REDIRECT
 	skb->redirected = 1;
+#ifdef CONFIG_NET_REDIRECT
 	skb->from_ingress = from_ingress;
 	if (skb->from_ingress)
 		skb->tstamp = 0;
@@ -4683,9 +4679,7 @@ static inline void skb_set_redirected(struct sk_buff *skb, bool from_ingress)
 
 static inline void skb_reset_redirect(struct sk_buff *skb)
 {
-#ifdef CONFIG_NET_REDIRECT
 	skb->redirected = 0;
-#endif
 }
 
 static inline bool skb_csum_is_sctp(struct sk_buff *skb)
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index a1a0c4e791c6..399ea5b7f115 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -16,6 +16,7 @@
  * netstack, and assigning dedicated CPUs for this stage. This
  * basically allows for 10G wirespeed pre-filtering via bpf.
  */
+#include <linux/bitops.h>
 #include <linux/bpf.h>
 #include <linux/filter.h>
 #include <linux/ptr_ring.h>
@@ -168,6 +169,46 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
 	}
 }
 
+static void cpu_map_bpf_prog_run_skb(struct bpf_cpu_map_entry *rcpu,
+				     struct list_head *listp,
+				     struct xdp_cpumap_stats *stats)
+{
+	struct sk_buff *skb, *tmp;
+	struct xdp_buff xdp;
+	u32 act;
+	int err;
+
+	list_for_each_entry_safe(skb, tmp, listp, list) {
+		act = bpf_prog_run_generic_xdp(skb, &xdp, rcpu->prog);
+		switch (act) {
+		case XDP_PASS:
+			break;
+		case XDP_REDIRECT:
+			skb_list_del_init(skb);
+			err = xdp_do_generic_redirect(skb->dev, skb, &xdp,
+						      rcpu->prog);
+			if (unlikely(err)) {
+				kfree_skb(skb);
+				stats->drop++;
+			} else {
+				stats->redirect++;
+			}
+			return;
+		default:
+			bpf_warn_invalid_xdp_action(act);
+			fallthrough;
+		case XDP_ABORTED:
+			trace_xdp_exception(skb->dev, rcpu->prog, act);
+			fallthrough;
+		case XDP_DROP:
+			skb_list_del_init(skb);
+			kfree_skb(skb);
+			stats->drop++;
+			return;
+		}
+	}
+}
+
 static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 				    void **frames, int n,
 				    struct xdp_cpumap_stats *stats)
@@ -176,11 +217,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 	struct xdp_buff xdp;
 	int i, nframes = 0;
 
-	if (!rcpu->prog)
-		return n;
-
-	rcu_read_lock_bh();
-
 	xdp_set_return_frame_no_direct();
 	xdp.rxq = &rxq;
 
@@ -227,17 +263,37 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 		}
 	}
 
+	xdp_clear_return_frame_no_direct();
+
+	return nframes;
+}
+
+#define CPUMAP_BATCH 8
+
+static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+				int xdp_n, struct xdp_cpumap_stats *stats,
+				struct list_head *list)
+{
+	int nframes;
+
+	if (!rcpu->prog)
+		return xdp_n;
+
+	rcu_read_lock_bh();
+
+	nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, xdp_n, stats);
+
 	if (stats->redirect)
-		xdp_do_flush_map();
+		xdp_do_flush();
 
-	xdp_clear_return_frame_no_direct();
+	if (unlikely(!list_empty(list)))
+		cpu_map_bpf_prog_run_skb(rcpu, list, stats);
 
 	rcu_read_unlock_bh(); /* resched point, may call do_softirq() */
 
 	return nframes;
 }
 
-#define CPUMAP_BATCH 8
 
 static int cpu_map_kthread_run(void *data)
 {
@@ -254,9 +310,9 @@ static int cpu_map_kthread_run(void *data)
 		struct xdp_cpumap_stats stats = {}; /* zero stats */
 		unsigned int kmem_alloc_drops = 0, sched = 0;
 		gfp_t gfp = __GFP_ZERO | GFP_ATOMIC;
+		int i, n, m, nframes, xdp_n;
 		void *frames[CPUMAP_BATCH];
 		void *skbs[CPUMAP_BATCH];
-		int i, n, m, nframes;
 		LIST_HEAD(list);
 
 		/* Release CPU reschedule checks */
@@ -280,9 +336,20 @@ static int cpu_map_kthread_run(void *data)
 		 */
 		n = __ptr_ring_consume_batched(rcpu->queue, frames, CPUMAP_BATCH);
-		for (i = 0; i < n; i++) {
+		for (i = 0, xdp_n = 0; i < n; i++) {
 			void *f = frames[i];
-			struct page *page = virt_to_page(f);
+			struct page *page;
+
+			if (unlikely(__ptr_test_bit(0, &f))) {
+				struct sk_buff *skb = f;
+
+				__ptr_clear_bit(0, &skb);
+				list_add_tail(&skb->list, &list);
+				continue;
+			}
+
+			frames[xdp_n++] = f;
+			page = virt_to_page(f);
 
 			/* Bring struct page memory area to curr CPU. Read by
 			 * build_skb_around via page_is_pfmemalloc(), and when
@@ -292,7 +359,7 @@ static int cpu_map_kthread_run(void *data)
 		}
 
 		/* Support running another XDP prog on this CPU */
-		nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, n, &stats);
+		nframes = cpu_map_bpf_prog_run(rcpu, frames, xdp_n, &stats, &list);
 		if (nframes) {
 			m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, nframes, skbs);
 			if (unlikely(m == 0)) {
@@ -330,12 +397,6 @@ static int cpu_map_kthread_run(void *data)
 	return 0;
 }
 
-bool cpu_map_prog_allowed(struct bpf_map *map)
-{
-	return map->map_type == BPF_MAP_TYPE_CPUMAP &&
-	       map->value_size != offsetofend(struct bpf_cpumap_val, qsize);
-}
-
 static int __cpu_map_load_bpf_program(struct bpf_cpu_map_entry *rcpu, int fd)
 {
 	struct bpf_prog *prog;
@@ -696,6 +757,25 @@ int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
 	return 0;
 }
 
+int cpu_map_generic_redirect(struct bpf_cpu_map_entry *rcpu,
+			     struct sk_buff *skb)
+{
+	int ret;
+
+	__skb_pull(skb, skb->mac_len);
+	skb_set_redirected(skb, false);
+	__ptr_set_bit(0, &skb);
+
+	ret = ptr_ring_produce(rcpu->queue, skb);
+	if (ret < 0)
+		goto trace;
+
+	wake_up_process(rcpu->kthread);
+trace:
+	trace_xdp_cpumap_enqueue(rcpu->map_id, !ret, !!ret, rcpu->cpu);
+	return ret;
+}
+
 void __cpu_map_flush(void)
 {
 	struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
diff --git a/net/core/dev.c b/net/core/dev.c
index ad5ab33cbd39..8521936414f2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5665,8 +5665,7 @@ static int generic_xdp_install(struct net_device *dev, struct netdev_bpf *xdp)
 		 * have a bpf_prog installed on an entry
 		 */
 		for (i = 0; i < new->aux->used_map_cnt; i++) {
-			if (dev_map_can_have_prog(new->aux->used_maps[i]) ||
-			    cpu_map_prog_allowed(new->aux->used_maps[i])) {
+			if (dev_map_can_have_prog(new->aux->used_maps[i])) {
 				mutex_unlock(&new->aux->used_maps_mutex);
 				return -EINVAL;
 			}
diff --git a/net/core/filter.c b/net/core/filter.c
index 0b13d8157a8f..4a21fde3028f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4038,8 +4038,12 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
 			goto err;
 		consume_skb(skb);
 		break;
+	case BPF_MAP_TYPE_CPUMAP:
+		err = cpu_map_generic_redirect(fwd, skb);
+		if (unlikely(err))
+			goto err;
+		break;
 	default:
-		/* TODO: Handle BPF_MAP_TYPE_CPUMAP */
 		err = -EBADRQC;
 		goto err;
 	}
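To illustrate the queue protocol this patch sets up, here is a small
userspace model of the consumer side (the types and names below are
stand-ins for illustration, not kernel definitions): an item popped from
the ring is an xdp_frame unless its lowest bit is set, in which case it is
a tagged skb.

#include <stdio.h>
#include <stdlib.h>

struct fake_skb { int id; };
struct fake_frame { int id; };

int main(void)
{
	struct fake_skb *skb = calloc(1, sizeof(*skb));
	struct fake_frame *frame = calloc(1, sizeof(*frame));
	/* Stand-in for rcpu->queue: the producer tagged the skb's bit 0,
	 * as cpu_map_generic_redirect() does with __ptr_set_bit().
	 */
	void *ring[2] = { frame, (void *)((unsigned long)skb | 1UL) };

	for (int i = 0; i < 2; i++) {
		void *f = ring[i];

		if ((unsigned long)f & 1UL) {	/* ~ __ptr_test_bit(0, &f) */
			struct fake_skb *s =
				(struct fake_skb *)((unsigned long)f & ~1UL);
			printf("item %d: skb, append to skb list\n", i);
			free(s);
		} else {
			printf("item %d: xdp_frame, frame path\n", i);
			free(f);
		}
	}
	return 0;
}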
From patchwork Fri Jul 2 11:18:24 2021
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    Eric Leblond, bpf@vger.kernel.org
Subject: [PATCH net-next v6 4/5] bpf: devmap: implement devmap prog execution for generic XDP
Date: Fri, 2 Jul 2021 16:48:24 +0530
Message-Id: <20210702111825.491065-5-memxor@gmail.com>
In-Reply-To: <20210702111825.491065-1-memxor@gmail.com>
References: <20210702111825.491065-1-memxor@gmail.com>

This lifts the restriction on running devmap BPF progs in generic
redirect mode. To match native XDP behavior, the program is invoked
right before generic_xdp_tx is called, and only the XDP_PASS/
XDP_ABORTED/XDP_DROP actions are supported.

We also return 0 even if the devmap program drops the packet: the
redirect has already succeeded semantically, and the devmap prog is the
last point before TX of the packet to the device where it can deliver a
verdict on the packet. This also means the helper must take care of
freeing the skb, as xdp_do_generic_redirect callers only do that when an
error is returned.

Since devmap entry progs are now supported, remove the corresponding
check in generic_xdp_install entirely.

Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
---
 include/linux/bpf.h |  1 -
 kernel/bpf/devmap.c | 49 ++++++++++++++++++++++++++++++++++++---------
 net/core/dev.c      | 18 ------------------
 3 files changed, 39 insertions(+), 29 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 095aaa104c56..4afbff308ca3 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1508,7 +1508,6 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
 int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
 			   struct bpf_prog *xdp_prog, struct bpf_map *map,
 			   bool exclude_ingress);
-bool dev_map_can_have_prog(struct bpf_map *map);
 
 void __cpu_map_flush(void);
 int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 2a75e6c2d27d..49f03e8e5561 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -318,16 +318,6 @@ static int dev_map_hash_get_next_key(struct bpf_map *map, void *key,
 	return -ENOENT;
 }
 
-bool dev_map_can_have_prog(struct bpf_map *map)
-{
-	if ((map->map_type == BPF_MAP_TYPE_DEVMAP ||
-	     map->map_type == BPF_MAP_TYPE_DEVMAP_HASH) &&
-	    map->value_size != offsetofend(struct bpf_devmap_val, ifindex))
-		return true;
-
-	return false;
-}
-
 static int dev_map_bpf_prog_run(struct bpf_prog *xdp_prog,
 				struct xdp_frame **frames, int n,
 				struct net_device *dev)
@@ -499,6 +489,37 @@ static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
 	return 0;
 }
 
+static u32 dev_map_bpf_prog_run_skb(struct sk_buff *skb, struct bpf_dtab_netdev *dst)
+{
+	struct xdp_txq_info txq = { .dev = dst->dev };
+	struct xdp_buff xdp;
+	u32 act;
+
+	if (!dst->xdp_prog)
+		return XDP_PASS;
+
+	__skb_pull(skb, skb->mac_len);
+	xdp.txq = &txq;
+
+	act = bpf_prog_run_generic_xdp(skb, &xdp, dst->xdp_prog);
+	switch (act) {
+	case XDP_PASS:
+		__skb_push(skb, skb->mac_len);
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		fallthrough;
+	case XDP_ABORTED:
+		trace_xdp_exception(dst->dev, dst->xdp_prog, act);
+		fallthrough;
+	case XDP_DROP:
+		kfree_skb(skb);
+		break;
+	}
+
+	return act;
+}
+
 int dev_xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
 		    struct net_device *dev_rx)
 {
@@ -614,6 +635,14 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
 	err = xdp_ok_fwd_dev(dst->dev, skb->len);
 	if (unlikely(err))
 		return err;
+
+	/* Redirect has already succeeded semantically at this point, so we just
+	 * return 0 even if packet is dropped. Helper below takes care of
+	 * freeing skb.
+	 */
+	if (dev_map_bpf_prog_run_skb(skb, dst) != XDP_PASS)
+		return 0;
+
 	skb->dev = dst->dev;
 	generic_xdp_tx(skb, xdp_prog);
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 8521936414f2..c674fe191e8a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5656,24 +5656,6 @@ static int generic_xdp_install(struct net_device *dev, struct netdev_bpf *xdp)
 	struct bpf_prog *new = xdp->prog;
 	int ret = 0;
 
-	if (new) {
-		u32 i;
-
-		mutex_lock(&new->aux->used_maps_mutex);
-
-		/* generic XDP does not work with DEVMAPs that can
-		 * have a bpf_prog installed on an entry
-		 */
-		for (i = 0; i < new->aux->used_map_cnt; i++) {
-			if (dev_map_can_have_prog(new->aux->used_maps[i])) {
-				mutex_unlock(&new->aux->used_maps_mutex);
-				return -EINVAL;
-			}
-		}
-
-		mutex_unlock(&new->aux->used_maps_mutex);
-	}
-
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		rcu_assign_pointer(dev->xdp_prog, new);
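A short sketch (not from the patch) of why the drop verdict is reported as
success. The function name below is made up; only the signature of
xdp_do_generic_redirect() comes from net/core/filter.c. Callers free the
skb when an error comes back, so a devmap-prog drop reported as an error
would free the skb twice:

#include <linux/filter.h>
#include <linux/skbuff.h>

/* Hypothetical caller-side pattern around xdp_do_generic_redirect(). */
static u32 redirect_and_handle_err(struct sk_buff *skb, struct xdp_buff *xdp,
				   struct bpf_prog *xdp_prog)
{
	int err;

	err = xdp_do_generic_redirect(skb->dev, skb, xdp, xdp_prog);
	if (err) {
		/* Reached only for genuine failures. A drop inside the
		 * devmap prog already freed the skb and returned 0, so
		 * this kfree_skb() can never be a double free.
		 */
		kfree_skb(skb);
		return XDP_DROP;
	}
	return XDP_REDIRECT;
}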
From patchwork Fri Jul 2 11:18:25 2021
From: Kumar Kartikeya Dwivedi
To: netdev@vger.kernel.org
Cc: Kumar Kartikeya Dwivedi, Toke Høiland-Jørgensen, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Jesper Dangaard Brouer,
    "David S. Miller", Jakub Kicinski, John Fastabend, Martin KaFai Lau,
    Eric Leblond, bpf@vger.kernel.org
Subject: [PATCH net-next v6 5/5] bpf: tidy xdp attach selftests
Date: Fri, 2 Jul 2021 16:48:25 +0530
Message-Id: <20210702111825.491065-6-memxor@gmail.com>
In-Reply-To: <20210702111825.491065-1-memxor@gmail.com>
References: <20210702111825.491065-1-memxor@gmail.com>

Support for cpumap and devmap entry progs in the previous commits means
these tests need to be updated for the new semantics. Also take this
opportunity to convert them from the CHECK macros to the new ASSERT
macros. Since xdp_cpumap_attach has no subtests, put the sole test
directly inside the test_xdp_cpumap_attach function.

Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Kumar Kartikeya Dwivedi
---
 .../bpf/prog_tests/xdp_cpumap_attach.c        | 43 +++++++------------
 .../bpf/prog_tests/xdp_devmap_attach.c        | 39 +++++++----------
 2 files changed, 32 insertions(+), 50 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
index 0176573fe4e7..8755effd80b0 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_cpumap_attach.c
@@ -7,64 +7,53 @@
 
 #define IFINDEX_LO 1
 
-void test_xdp_with_cpumap_helpers(void)
+void test_xdp_cpumap_attach(void)
 {
 	struct test_xdp_with_cpumap_helpers *skel;
 	struct bpf_prog_info info = {};
+	__u32 len = sizeof(info);
 	struct bpf_cpumap_val val = {
 		.qsize = 192,
 	};
-	__u32 duration = 0, idx = 0;
-	__u32 len = sizeof(info);
 	int err, prog_fd, map_fd;
+	__u32 idx = 0;
 
 	skel = test_xdp_with_cpumap_helpers__open_and_load();
-	if (CHECK_FAIL(!skel)) {
-		perror("test_xdp_with_cpumap_helpers__open_and_load");
+	if (!ASSERT_OK_PTR(skel, "test_xdp_with_cpumap_helpers__open_and_load"))
 		return;
-	}
 
-	/* can not attach program with cpumaps that allow programs
-	 * as xdp generic
-	 */
 	prog_fd = bpf_program__fd(skel->progs.xdp_redir_prog);
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
-	CHECK(err == 0, "Generic attach of program with 8-byte CPUMAP",
-	      "should have failed\n");
+	if (!ASSERT_OK(err, "Generic attach of program with 8-byte CPUMAP"))
+		goto out_close;
+
+	err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+	ASSERT_OK(err, "XDP program detach");
 
 	prog_fd = bpf_program__fd(skel->progs.xdp_dummy_cm);
 	map_fd = bpf_map__fd(skel->maps.cpu_map);
 	err = bpf_obj_get_info_by_fd(prog_fd, &info, &len);
-	if (CHECK_FAIL(err))
+	if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd"))
 		goto out_close;
 
 	val.bpf_prog.fd = prog_fd;
 	err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-	CHECK(err, "Add program to cpumap entry", "err %d errno %d\n",
-	      err, errno);
+	ASSERT_OK(err, "Add program to cpumap entry");
 
 	err = bpf_map_lookup_elem(map_fd, &idx, &val);
-	CHECK(err, "Read cpumap entry", "err %d errno %d\n", err, errno);
-	CHECK(info.id != val.bpf_prog.id, "Expected program id in cpumap entry",
-	      "expected %u read %u\n", info.id, val.bpf_prog.id);
+	ASSERT_OK(err, "Read cpumap entry");
+	ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to cpumap entry prog_id");
 
 	/* can not attach BPF_XDP_CPUMAP program to a device */
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd, XDP_FLAGS_SKB_MODE);
-	CHECK(err == 0, "Attach of BPF_XDP_CPUMAP program",
-	      "should have failed\n");
+	if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_CPUMAP program"))
+		bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
 
 	val.qsize = 192;
 	val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog);
 	err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-	CHECK(err == 0, "Add non-BPF_XDP_CPUMAP program to cpumap entry",
-	      "should have failed\n");
+	ASSERT_NEQ(err, 0, "Add non-BPF_XDP_CPUMAP program to cpumap entry");
 
 out_close:
 	test_xdp_with_cpumap_helpers__destroy(skel);
 }
-
-void test_xdp_cpumap_attach(void)
-{
-	if (test__start_subtest("cpumap_with_progs"))
-		test_xdp_with_cpumap_helpers();
-}
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
index 88ef3ec8ac4c..c72af030ff10 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_devmap_attach.c
@@ -16,50 +16,45 @@ void test_xdp_with_devmap_helpers(void)
 		.ifindex = IFINDEX_LO,
 	};
 	__u32 len = sizeof(info);
-	__u32 duration = 0, idx = 0;
 	int err, dm_fd, map_fd;
+	__u32 idx = 0;
 
 	skel = test_xdp_with_devmap_helpers__open_and_load();
-	if (CHECK_FAIL(!skel)) {
-		perror("test_xdp_with_devmap_helpers__open_and_load");
+	if (!ASSERT_OK_PTR(skel, "test_xdp_with_devmap_helpers__open_and_load"))
 		return;
-	}
 
-	/* can not attach program with DEVMAPs that allow programs
-	 * as xdp generic
-	 */
 	dm_fd = bpf_program__fd(skel->progs.xdp_redir_prog);
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE);
-	CHECK(err == 0, "Generic attach of program with 8-byte devmap",
-	      "should have failed\n");
+	if (!ASSERT_OK(err, "Generic attach of program with 8-byte devmap"))
+		goto out_close;
+
+	err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
+	ASSERT_OK(err, "XDP program detach");
 
 	dm_fd = bpf_program__fd(skel->progs.xdp_dummy_dm);
 	map_fd = bpf_map__fd(skel->maps.dm_ports);
 	err = bpf_obj_get_info_by_fd(dm_fd, &info, &len);
-	if (CHECK_FAIL(err))
+	if (!ASSERT_OK(err, "bpf_obj_get_info_by_fd"))
 		goto out_close;
 
 	val.bpf_prog.fd = dm_fd;
 	err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-	CHECK(err, "Add program to devmap entry",
-	      "err %d errno %d\n", err, errno);
+	ASSERT_OK(err, "Add program to devmap entry");
 
 	err = bpf_map_lookup_elem(map_fd, &idx, &val);
-	CHECK(err, "Read devmap entry", "err %d errno %d\n", err, errno);
-	CHECK(info.id != val.bpf_prog.id, "Expected program id in devmap entry",
-	      "expected %u read %u\n", info.id, val.bpf_prog.id);
+	ASSERT_OK(err, "Read devmap entry");
+	ASSERT_EQ(info.id, val.bpf_prog.id, "Match program id to devmap entry prog_id");
 
 	/* can not attach BPF_XDP_DEVMAP program to a device */
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, dm_fd, XDP_FLAGS_SKB_MODE);
-	CHECK(err == 0, "Attach of BPF_XDP_DEVMAP program",
-	      "should have failed\n");
+	if (!ASSERT_NEQ(err, 0, "Attach of BPF_XDP_DEVMAP program"))
+		bpf_set_link_xdp_fd(IFINDEX_LO, -1, XDP_FLAGS_SKB_MODE);
 
 	val.ifindex = 1;
 	val.bpf_prog.fd = bpf_program__fd(skel->progs.xdp_dummy_prog);
 	err = bpf_map_update_elem(map_fd, &idx, &val, 0);
-	CHECK(err == 0, "Add non-BPF_XDP_DEVMAP program to devmap entry",
-	      "should have failed\n");
+	ASSERT_NEQ(err, 0, "Add non-BPF_XDP_DEVMAP program to devmap entry");
 
 out_close:
 	test_xdp_with_devmap_helpers__destroy(skel);
@@ -68,12 +63,10 @@ void test_xdp_with_devmap_helpers(void)
 void test_neg_xdp_devmap_helpers(void)
 {
 	struct test_xdp_devmap_helpers *skel;
-	__u32 duration = 0;
 
 	skel = test_xdp_devmap_helpers__open_and_load();
-	if (CHECK(skel,
-		  "Load of XDP program accessing egress ifindex without attach type",
-		  "should have failed\n")) {
+	if (!ASSERT_EQ(skel, NULL,
+		       "Load of XDP program accessing egress ifindex without attach type")) {
 		test_xdp_devmap_helpers__destroy(skel);
 	}
 }
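For readers unfamiliar with the macro conversion done above, a condensed
before/after of the idiom (the ASSERT_* macros come from the selftests'
test_progs.h and, unlike CHECK(), evaluate to true on success and need no
__u32 duration in scope); this is an illustration, not part of the patch:

/* Before: CHECK() is true on *failure* and formats its own message. */
__u32 duration = 0;
err = bpf_map_update_elem(map_fd, &idx, &val, 0);
CHECK(err, "Add program to cpumap entry", "err %d errno %d\n", err, errno);

/* After: ASSERT_OK() is true on success and logs err/errno itself, so
 * failure handling reads naturally as an early exit.
 */
err = bpf_map_update_elem(map_fd, &idx, &val, 0);
if (!ASSERT_OK(err, "Add program to cpumap entry"))
	goto out_close;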