From patchwork Fri Dec 17 00:27:39 2021
From: Toke Høiland-Jørgensen
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
 Song Liu,
 Yonghong Song, John Fastabend, KP Singh, "David S. Miller",
 Jakub Kicinski, Jesper Dangaard Brouer
Cc: Toke Høiland-Jørgensen, netdev@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH bpf-next v4 5/7] xdp: add xdp_do_redirect_frame() for pre-computed xdp_frames
Date: Fri, 17 Dec 2021 01:27:39 +0100
Message-Id: <20211217002741.146797-6-toke@redhat.com>
In-Reply-To: <20211217002741.146797-1-toke@redhat.com>
References: <20211217002741.146797-1-toke@redhat.com>

Add an xdp_do_redirect_frame() variant of xdp_do_redirect() which takes a
pre-computed xdp_frame structure. This will be used in bpf_prog_run() to
avoid having to write to the xdp_frame structure when the XDP program
doesn't modify the frame boundaries.

Signed-off-by: Toke Høiland-Jørgensen
---
 include/linux/filter.h |  4 +++
 net/core/filter.c      | 65 +++++++++++++++++++++++++++++++++++-------
 2 files changed, 58 insertions(+), 11 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 60eec80fa1d4..71fa57b88bfc 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1019,6 +1019,10 @@ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
 int xdp_do_redirect(struct net_device *dev,
                     struct xdp_buff *xdp,
                     struct bpf_prog *prog);
+int xdp_do_redirect_frame(struct net_device *dev,
+                          struct xdp_buff *xdp,
+                          struct xdp_frame *xdpf,
+                          struct bpf_prog *prog);
 void xdp_do_flush(void);
 
 /* The xdp_do_flush_map() helper has been renamed to drop the _map suffix, as
diff --git a/net/core/filter.c b/net/core/filter.c
index f4540c800f4a..663a5d5370d5 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3957,26 +3957,44 @@ u32 xdp_master_redirect(struct xdp_buff *xdp)
 }
 EXPORT_SYMBOL_GPL(xdp_master_redirect);
 
-int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
-                    struct bpf_prog *xdp_prog)
+static inline int __xdp_do_redirect_xsk(struct bpf_redirect_info *ri,
+                                        struct net_device *dev,
+                                        struct xdp_buff *xdp,
+                                        struct bpf_prog *xdp_prog)
 {
-        struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
         enum bpf_map_type map_type = ri->map_type;
         void *fwd = ri->tgt_value;
         u32 map_id = ri->map_id;
-        struct xdp_frame *xdpf;
-        struct bpf_map *map;
         int err;
 
         ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */
         ri->map_type = BPF_MAP_TYPE_UNSPEC;
 
-        if (map_type == BPF_MAP_TYPE_XSKMAP) {
-                err = __xsk_map_redirect(fwd, xdp);
-                goto out;
-        }
+        err = __xsk_map_redirect(fwd, xdp);
+        if (unlikely(err))
+                goto err;
+
+        _trace_xdp_redirect_map(dev, xdp_prog, fwd, map_type, map_id, ri->tgt_index);
+        return 0;
+err:
+        _trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map_type, map_id, ri->tgt_index, err);
+        return err;
+}
+
+static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri,
+                                                   struct net_device *dev,
+                                                   struct xdp_frame *xdpf,
+                                                   struct bpf_prog *xdp_prog)
+{
+        enum bpf_map_type map_type = ri->map_type;
+        void *fwd = ri->tgt_value;
+        u32 map_id = ri->map_id;
+        struct bpf_map *map;
+        int err;
+
+        ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */
+        ri->map_type = BPF_MAP_TYPE_UNSPEC;
 
-        xdpf = xdp_convert_buff_to_frame(xdp);
         if (unlikely(!xdpf)) {
                 err = -EOVERFLOW;
                 goto err;
@@ -4013,7 +4031,6 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
                 err = -EBADRQC;
         }
 
-out:
         if (unlikely(err))
                 goto err;
 
@@ -4023,8 +4040,34 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
         _trace_xdp_redirect_map_err(dev, xdp_prog, fwd, map_type, map_id, ri->tgt_index, err);
         return err;
 }
+
+int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
+                    struct bpf_prog *xdp_prog)
+{
+        struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+        enum bpf_map_type map_type = ri->map_type;
+
+        if (map_type == BPF_MAP_TYPE_XSKMAP)
+                return __xdp_do_redirect_xsk(ri, dev, xdp, xdp_prog);
+
+        return __xdp_do_redirect_frame(ri, dev, xdp_convert_buff_to_frame(xdp),
+                                       xdp_prog);
+}
 EXPORT_SYMBOL_GPL(xdp_do_redirect);
 
+int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp,
+                          struct xdp_frame *xdpf, struct bpf_prog *xdp_prog)
+{
+        struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);
+        enum bpf_map_type map_type = ri->map_type;
+
+        if (map_type == BPF_MAP_TYPE_XSKMAP)
+                return __xdp_do_redirect_xsk(ri, dev, xdp, xdp_prog);
+
+        return __xdp_do_redirect_frame(ri, dev, xdpf, xdp_prog);
+}
+EXPORT_SYMBOL_GPL(xdp_do_redirect_frame);
+
 static int xdp_do_generic_redirect_map(struct net_device *dev,
                                        struct sk_buff *skb,
                                        struct xdp_buff *xdp,
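
For context, below is a rough sketch of how a caller (such as the
bpf_prog_run() XDP path this is intended for) might end up using the new
helper. It is illustrative only and not part of the patch:
xdp_do_redirect(), xdp_do_redirect_frame(), xdp_convert_frame_to_buff() and
bpf_prog_run_xdp() are existing/added kernel APIs, but the wrapper function,
its name and its boundary check are hypothetical.

/* Hypothetical usage sketch -- not part of this patch. Assumes the caller
 * already owns an initialized struct xdp_frame and wants to avoid writing
 * it back when the program leaves the packet boundaries alone.
 */
#include <linux/filter.h>
#include <net/xdp.h>

static int example_run_xdp_on_frame(struct net_device *dev,
                                    struct xdp_frame *xdpf,
                                    struct bpf_prog *prog)
{
        struct xdp_buff xdp;
        u32 act;

        /* Build an xdp_buff view of the existing frame (rxq setup etc.
         * omitted for brevity).
         */
        xdp_convert_frame_to_buff(xdpf, &xdp);

        act = bpf_prog_run_xdp(prog, &xdp);
        if (act != XDP_REDIRECT)
                return -EOPNOTSUPP; /* only the redirect case is sketched */

        if (xdp.data != xdpf->data ||
            xdp.data_end != xdpf->data + xdpf->len)
                /* The program moved the packet boundaries: take the existing
                 * path, which converts the xdp_buff back into an xdp_frame.
                 */
                return xdp_do_redirect(dev, &xdp, prog);

        /* Boundaries untouched: hand over the pre-computed xdp_frame and
         * skip the write to it.
         */
        return xdp_do_redirect_frame(dev, &xdp, xdpf, prog);
}

The point of the new helper is the last call: when the program didn't move
data or data_end, the caller can pass the xdp_frame it already has instead
of paying for xdp_convert_buff_to_frame() on every packet.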