From patchwork Tue Nov 23 16:39:41 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12634761
From: Alexander Lobakin
To: "David S. Miller", Jakub Kicinski
Cc: Alexander Lobakin, Jesse Brandeburg, Michal Swiatkowski,
    Maciej Fijalkowski, Jonathan Corbet, Shay Agroskin,
    Arthur Kiyanovski, David Arinzon, Noam Dagan, Saeed Bishara,
    Ioana Ciornei, Claudiu Manoil, Tony Nguyen, Thomas Petazzoni,
    Marcin Wojtas, Russell King, Saeed Mahameed, Leon Romanovsky,
    Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
    Toke Høiland-Jørgensen, John Fastabend, Edward Cree,
    Martin Habets, "Michael S. Tsirkin", Jason Wang,
    Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
    KP Singh, Lorenzo Bianconi, Yajun Deng, Sergey Ryazanov,
    David Ahern, Andrei Vagin, Johannes Berg, Vladimir Oltean,
    Cong Wang, netdev@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    bpf@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH v2 net-next 12/26] veth: don't mix XDP_DROP counter with Rx
 XDP errors
Date: Tue, 23 Nov 2021 17:39:41 +0100
Message-Id: <20211123163955.154512-13-alexandr.lobakin@intel.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211123163955.154512-1-alexandr.lobakin@intel.com>
References: <20211123163955.154512-1-alexandr.lobakin@intel.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Similarly to mlx5, count XDP_ABORTED and other Rx XDP errors separately
from XDP_DROP to better align with generic XDP stats.
Signed-off-by: Alexander Lobakin
Reviewed-by: Jesse Brandeburg
---
 drivers/net/veth.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

-- 
2.33.1

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 5ca0a899101d..0e6c030576f4 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -43,6 +43,7 @@ struct veth_stats {
 	u64	xdp_packets;
 	u64	xdp_bytes;
 	u64	xdp_redirect;
+	u64	xdp_errors;
 	u64	xdp_drops;
 	u64	xdp_tx;
 	u64	xdp_tx_err;
@@ -96,6 +97,7 @@ static const struct veth_q_stat_desc veth_rq_stats_desc[] = {
 	{ "xdp_bytes",		VETH_RQ_STAT(xdp_bytes) },
 	{ "drops",		VETH_RQ_STAT(rx_drops) },
 	{ "xdp_redirect",	VETH_RQ_STAT(xdp_redirect) },
+	{ "xdp_errors",		VETH_RQ_STAT(xdp_errors) },
 	{ "xdp_drops",		VETH_RQ_STAT(xdp_drops) },
 	{ "xdp_tx",		VETH_RQ_STAT(xdp_tx) },
 	{ "xdp_tx_errors",	VETH_RQ_STAT(xdp_tx_err) },
@@ -655,16 +657,18 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq,
 			fallthrough;
 		case XDP_ABORTED:
 			trace_xdp_exception(rq->dev, xdp_prog, act);
-			fallthrough;
+			goto err_xdp;
 		case XDP_DROP:
 			stats->xdp_drops++;
-			goto err_xdp;
+			goto xdp_drop;
 		}
 	}
 	rcu_read_unlock();

 	return frame;
 err_xdp:
+	stats->xdp_errors++;
+xdp_drop:
 	rcu_read_unlock();
 	xdp_return_frame(frame);
 xdp_xmit:
@@ -805,7 +809,8 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 		fallthrough;
 	case XDP_ABORTED:
 		trace_xdp_exception(rq->dev, xdp_prog, act);
-		fallthrough;
+		stats->xdp_errors++;
+		goto xdp_drop;
 	case XDP_DROP:
 		stats->xdp_drops++;
 		goto xdp_drop;
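
For readers less familiar with the XDP verdict flow, the accounting rule the
patch applies boils down to the driver-agnostic pattern below. This is a
minimal sketch, not the veth code itself: all identifiers except the XDP_*
verdicts and u64 are made up for illustration.

/* Sketch of the Rx accounting split: XDP_DROP is a deliberate program
 * decision and goes to the drop counter, while XDP_ABORTED and any
 * unknown/invalid verdict are program errors and go to the error counter,
 * mirroring the generic XDP stats. Struct/function names are illustrative.
 */
#include <linux/bpf.h>		/* enum xdp_action: XDP_DROP, XDP_ABORTED, ... */
#include <linux/types.h>	/* u64 */

struct rx_xdp_counters {
	u64	xdp_drops;	/* program returned XDP_DROP */
	u64	xdp_errors;	/* XDP_ABORTED or invalid verdict */
};

static void rx_count_xdp_verdict(struct rx_xdp_counters *cnt, u32 act)
{
	switch (act) {
	case XDP_DROP:
		cnt->xdp_drops++;
		break;
	case XDP_ABORTED:
	default:
		cnt->xdp_errors++;
		break;
	}
}

With the patch applied, the split should also be visible in the per-queue
ethtool stats, i.e. an "xdp_errors" entry next to "xdp_drops" for each Rx
queue in `ethtool -S` output.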