From patchwork Wed Nov 23 14:46:40 2022
X-Patchwork-Submitter: Toke Høiland-Jørgensen
X-Patchwork-Id: 13053759
X-Patchwork-Delegate: bpf@iogearbox.net
From: Toke Høiland-Jørgensen
To: bpf@vger.kernel.org
Cc: Toke Høiland-Jørgensen, John Fastabend, David Ahern, Martin KaFai Lau,
    Jakub Kicinski, Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov,
    Alexander Lobakin, Magnus Karlsson, Maryam Tahhan, Stanislav Fomichev,
    xdp-hints@xdp-project.net, netdev@vger.kernel.org
Subject: [PATCH bpf-next 1/2] xdp: Add drv_priv pointer to struct xdp_buff
Date: Wed, 23 Nov 2022 15:46:40 +0100
Message-Id: <20221123144641.339138-1-toke@redhat.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221121182552.2152891-1-sdf@google.com>
References: <20221121182552.2152891-1-sdf@google.com>
X-Mailing-List: netdev@vger.kernel.org

This allows drivers to add more context data to the xdp_buff object, which
they can use for metadata kfunc implementations.

Cc: John Fastabend
Cc: David Ahern
Cc: Martin KaFai Lau
Cc: Jakub Kicinski
Cc: Willem de Bruijn
Cc: Jesper Dangaard Brouer
Cc: Anatoly Burakov
Cc: Alexander Lobakin
Cc: Magnus Karlsson
Cc: Maryam Tahhan
Cc: Stanislav Fomichev
Cc: xdp-hints@xdp-project.net
Cc: netdev@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen
---
 include/net/xdp.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 348aefd467ed..27c54ad3c8e2 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -84,6 +84,7 @@ struct xdp_buff {
 	struct xdp_txq_info *txq;
 	u32 frame_sz; /* frame size to deduce data_hard_end/reserved tailroom*/
 	u32 flags; /* supported values defined in xdp_buff_flags */
+	void *drv_priv;
 };
 
 static __always_inline bool xdp_buff_has_frags(struct xdp_buff *xdp)
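
To illustrate how the new field is meant to be used (a minimal sketch, not
part of the patch; the "mydrv" names are hypothetical, and patch 2 below
shows the real mlx5 implementation): the driver points drv_priv at a small
per-packet context struct before running the XDP program, and its metadata
kfunc implementations cast the xdp_md ctx back to the xdp_buff to reach that
context.

/* Illustrative sketch only -- the "mydrv" names are made up for this example. */
struct mydrv_xdp_ctx {
	struct mydrv_rx_desc *desc;	/* HW RX descriptor for this packet */
	struct mydrv_rq *rq;		/* receive queue the packet arrived on */
};

/* RX path: point drv_priv at the context before running the XDP program */
static u32 mydrv_run_xdp(struct mydrv_rq *rq, struct mydrv_rx_desc *desc,
			 struct bpf_prog *prog, struct xdp_buff *xdp)
{
	struct mydrv_xdp_ctx mctx = { .desc = desc, .rq = rq };

	xdp->drv_priv = &mctx;	/* on-stack, so only valid during the program run */
	return bpf_prog_run_xdp(prog, xdp);
}

/* Metadata kfunc implementation: recover the driver context from the ctx */
static u32 mydrv_xdp_rx_hash(const struct xdp_md *ctx)
{
	const struct xdp_buff *xdp = (void *)ctx;
	struct mydrv_xdp_ctx *mctx = xdp->drv_priv;

	return le32_to_cpu(mctx->desc->rss_hash);
}

Because drv_priv points at an on-stack context in this pattern, it is only
valid while the XDP program is running on that packet.
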
From patchwork Wed Nov 23 14:46:41 2022
X-Patchwork-Submitter: Toke Høiland-Jørgensen
X-Patchwork-Id: 13053758
X-Patchwork-Delegate: bpf@iogearbox.net

From: Toke Høiland-Jørgensen
To: bpf@vger.kernel.org
Cc: Toke Høiland-Jørgensen, John Fastabend, David Ahern, Martin KaFai Lau,
    Jakub Kicinski, Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov,
    Alexander Lobakin, Magnus Karlsson, Maryam Tahhan, Stanislav Fomichev,
    xdp-hints@xdp-project.net, netdev@vger.kernel.org
Subject: [PATCH bpf-next 2/2] mlx5: Support XDP RX metadata
Date: Wed, 23 Nov 2022 15:46:41 +0100
Message-Id: <20221123144641.339138-2-toke@redhat.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221123144641.339138-1-toke@redhat.com>
References: <20221121182552.2152891-1-sdf@google.com>
 <20221123144641.339138-1-toke@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

Support the RX hash and timestamp metadata kfuncs in the mlx5 driver. To make
this work, the cqe pointer has to be passed through to the mlx5e_skb_from*
functions, so it can be retrieved from the XDP ctx when the kfuncs are called.
Cc: John Fastabend
Cc: David Ahern
Cc: Martin KaFai Lau
Cc: Jakub Kicinski
Cc: Willem de Bruijn
Cc: Jesper Dangaard Brouer
Cc: Anatoly Burakov
Cc: Alexander Lobakin
Cc: Magnus Karlsson
Cc: Maryam Tahhan
Cc: Stanislav Fomichev
Cc: xdp-hints@xdp-project.net
Cc: netdev@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen
---
This goes on top of Stanislav's series, obviously. Verified that it works
using the xdp_hw_metadata utility; going to do some benchmarking and follow
up with the results, but figured I'd send this out straight away in case
others wanted to play with it.

Stanislav, feel free to fold it into the next version of your series if you
want!

-Toke

 drivers/net/ethernet/mellanox/mlx5/core/en.h  |  7 +++-
 .../net/ethernet/mellanox/mlx5/core/en/xdp.c  | 32 +++++++++++++++++++
 .../net/ethernet/mellanox/mlx5/core/en/xdp.h  | 10 ++++++
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   |  3 ++
 .../ethernet/mellanox/mlx5/core/en/xsk/rx.h   |  3 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  4 +++
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 19 +++++------
 7 files changed, 65 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index ff5b302531d5..960404027f0b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -629,7 +629,7 @@ typedef struct sk_buff *
 			       u16 cqe_bcnt, u32 head_offset, u32 page_idx);
 typedef struct sk_buff *
 (*mlx5e_fp_skb_from_cqe)(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			 u32 cqe_bcnt);
+			 struct mlx5_cqe64 *cqe, u32 cqe_bcnt);
 typedef bool (*mlx5e_fp_post_rx_wqes)(struct mlx5e_rq *rq);
 typedef void (*mlx5e_fp_dealloc_wqe)(struct mlx5e_rq*, u16);
 typedef void (*mlx5e_fp_shampo_dealloc_hd)(struct mlx5e_rq*, u16, u16, bool);
@@ -1035,6 +1035,11 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
 			  u16 vid);
 void mlx5e_timestamp_init(struct mlx5e_priv *priv);
 
+static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
+{
+	return config->rx_filter == HWTSTAMP_FILTER_ALL;
+}
+
 struct mlx5e_xsk_param;
 
 struct mlx5e_rq_param;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 20507ef2f956..604c8cdfde02 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -156,6 +156,38 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
 	return true;
 }
 
+bool mlx5e_xdp_rx_timestamp_supported(const struct xdp_md *ctx)
+{
+	const struct xdp_buff *xdp = (void *)ctx;
+	struct mlx5_xdp_ctx *mctx = xdp->drv_priv;
+
+	return mlx5e_rx_hw_stamp(mctx->rq->tstamp);
+}
+
+u64 mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx)
+{
+	const struct xdp_buff *xdp = (void *)ctx;
+	struct mlx5_xdp_ctx *mctx = xdp->drv_priv;
+
+	return mlx5e_cqe_ts_to_ns(mctx->rq->ptp_cyc2time,
+				  mctx->rq->clock, get_cqe_ts(mctx->cqe));
+}
+
+bool mlx5e_xdp_rx_hash_supported(const struct xdp_md *ctx)
+{
+	const struct xdp_buff *xdp = (void *)ctx;
+
+	return xdp->rxq->dev->features & NETIF_F_RXHASH;
+}
+
+u32 mlx5e_xdp_rx_hash(const struct xdp_md *ctx)
+{
+	const struct xdp_buff *xdp = (void *)ctx;
+	struct mlx5_xdp_ctx *mctx = xdp->drv_priv;
+
+	return be32_to_cpu(mctx->cqe->rss_hash_result);
+}
+
 /* returns true if packet was consumed by xdp */
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
 		      struct bpf_prog *prog, struct xdp_buff *xdp)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
index bc2d9034af5b..07d80d0446ff 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
@@ -44,6 +44,11 @@
 	(MLX5E_XDP_INLINE_WQE_MAX_DS_CNT * MLX5_SEND_WQE_DS - \
 	 sizeof(struct mlx5_wqe_inline_seg))
 
+struct mlx5_xdp_ctx {
+	struct mlx5_cqe64 *cqe;
+	struct mlx5e_rq *rq;
+};
+
 struct mlx5e_xsk_param;
 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
 bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
@@ -56,6 +61,11 @@ void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq);
 int mlx5e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
 		   u32 flags);
 
+bool mlx5e_xdp_rx_hash_supported(const struct xdp_md *ctx);
+u32 mlx5e_xdp_rx_hash(const struct xdp_md *ctx);
+bool mlx5e_xdp_rx_timestamp_supported(const struct xdp_md *ctx);
+u64 mlx5e_xdp_rx_timestamp(const struct xdp_md *ctx);
+
 INDIRECT_CALLABLE_DECLARE(bool mlx5e_xmit_xdp_frame_mpwqe(struct mlx5e_xdpsq *sq,
							   struct mlx5e_xmit_data *xdptxd,
							   struct skb_shared_info *sinfo,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index c91b54d9ff27..c6715cb23d45 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -283,8 +283,10 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
+					      struct mlx5_cqe64 *cqe,
 					      u32 cqe_bcnt)
 {
+	struct mlx5_xdp_ctx mlctx = { .cqe = cqe, .rq = rq };
 	struct xdp_buff *xdp = wi->au->xsk;
 	struct bpf_prog *prog;
 
@@ -298,6 +300,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 	xsk_buff_set_size(xdp, cqe_bcnt);
 	xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
 	net_prefetch(xdp->data);
+	xdp->drv_priv = &mlctx;
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp)))
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
index 087c943bd8e9..9198f137f48f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.h
@@ -18,6 +18,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
 					      u32 page_idx);
 struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
 					      struct mlx5e_wqe_frag_info *wi,
-					      u32 cqe_bcnt);
+					      struct mlx5_cqe64 *cqe,
+					      u32 cqe_bcnt);
 
 #endif /* __MLX5_EN_XSK_RX_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 14bd86e368d5..015bfe891458 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -4890,6 +4890,10 @@ const struct net_device_ops mlx5e_netdev_ops = {
 	.ndo_tx_timeout          = mlx5e_tx_timeout,
 	.ndo_bpf                 = mlx5e_xdp,
 	.ndo_xdp_xmit            = mlx5e_xdp_xmit,
+	.ndo_xdp_rx_timestamp_supported = mlx5e_xdp_rx_timestamp_supported,
+	.ndo_xdp_rx_timestamp    = mlx5e_xdp_rx_timestamp,
+	.ndo_xdp_rx_hash_supported = mlx5e_xdp_rx_hash_supported,
+	.ndo_xdp_rx_hash         = mlx5e_xdp_rx_hash,
 	.ndo_xsk_wakeup          = mlx5e_xsk_wakeup,
 #ifdef CONFIG_MLX5_EN_ARFS
 	.ndo_rx_flow_steer       = mlx5e_rx_flow_steer,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index b1ea0b995d9c..1d6600441e74 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -76,11 +76,6 @@ const struct mlx5e_rx_handlers mlx5e_rx_handlers_nic = {
 	.handle_rx_cqe_mpwqe_shampo = mlx5e_handle_rx_cqe_mpwrq_shampo,
 };
 
-static inline bool mlx5e_rx_hw_stamp(struct hwtstamp_config *config)
-{
-	return config->rx_filter == HWTSTAMP_FILTER_ALL;
-}
-
 static inline void mlx5e_read_cqe_slot(struct mlx5_cqwq *wq, u32 cqcc,
 				       void *data)
 {
@@ -1573,7 +1568,7 @@ static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			  u32 cqe_bcnt)
+			  struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
 {
 	union mlx5e_alloc_unit *au = wi->au;
 	u16 rx_headroom = rq->buff.headroom;
@@ -1595,7 +1590,8 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 	prog = rcu_dereference(rq->xdp_prog);
 	if (prog) {
-		struct xdp_buff xdp;
+		struct mlx5_xdp_ctx mlctx = { .cqe = cqe, .rq = rq };
+		struct xdp_buff xdp = { .drv_priv = &mlctx };
 
 		net_prefetchw(va); /* xdp_frame data area */
 		mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
@@ -1619,16 +1615,17 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
 
 static struct sk_buff *
 mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
-			     u32 cqe_bcnt)
+			     struct mlx5_cqe64 *cqe, u32 cqe_bcnt)
 {
 	struct mlx5e_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
+	struct mlx5_xdp_ctx mlctx = { .cqe = cqe, .rq = rq };
+	struct xdp_buff xdp = { .drv_priv = &mlctx };
 	struct mlx5e_wqe_frag_info *head_wi = wi;
 	union mlx5e_alloc_unit *au = wi->au;
 	u16 rx_headroom = rq->buff.headroom;
 	struct skb_shared_info *sinfo;
 	u32 frag_consumed_bytes;
 	struct bpf_prog *prog;
-	struct xdp_buff xdp;
 	struct sk_buff *skb;
 	dma_addr_t addr;
 	u32 truesize;
@@ -1766,7 +1763,7 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 			      mlx5e_skb_from_cqe_linear,
 			      mlx5e_skb_from_cqe_nonlinear,
 			      mlx5e_xsk_skb_from_cqe_linear,
-			      rq, wi, cqe_bcnt);
+			      rq, wi, cqe, cqe_bcnt);
 	if (!skb) {
 		/* probably for XDP */
 		if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
@@ -2575,7 +2572,7 @@ static void mlx5e_trap_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
 		goto free_wqe;
 	}
 
-	skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe_bcnt);
+	skb = mlx5e_skb_from_cqe_nonlinear(rq, wi, cqe, cqe_bcnt);
 	if (!skb)
 		goto free_wqe;
 
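
For reference, a rough sketch of the BPF side that consumes these hooks
(what the xdp_hw_metadata utility does in a more elaborate form). This is
not part of the patch; the kfunc declarations assume the names and
signatures from Stanislav's series that this set builds on, which were
still under discussion at the time.

// SPDX-License-Identifier: GPL-2.0
/* Sketch of an XDP program reading RX metadata via kfuncs; the kfunc
 * names/signatures follow the RFC series this builds on and may differ
 * from what was eventually merged.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern bool bpf_xdp_metadata_rx_timestamp_supported(const struct xdp_md *ctx) __ksym;
extern __u64 bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx) __ksym;
extern bool bpf_xdp_metadata_rx_hash_supported(const struct xdp_md *ctx) __ksym;
extern __u32 bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx) __ksym;

SEC("xdp")
int rx_meta(struct xdp_md *ctx)
{
	__u64 ts = 0;
	__u32 hash = 0;

	/* Each value is gated on a _supported() check so the program also
	 * loads and runs on devices/drivers without the corresponding hook.
	 */
	if (bpf_xdp_metadata_rx_timestamp_supported(ctx))
		ts = bpf_xdp_metadata_rx_timestamp(ctx);
	if (bpf_xdp_metadata_rx_hash_supported(ctx))
		hash = bpf_xdp_metadata_rx_hash(ctx);

	bpf_printk("rx ts=%llu hash=%u", ts, hash);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";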