From patchwork Wed Mar 3 15:29:03 2021
X-Patchwork-Submitter: Maciej Fijalkowski
X-Patchwork-Id: 12114485
X-Patchwork-Delegate: bpf@iogearbox.net
From: Maciej Fijalkowski
To: 
makita.toshiaki@lab.ntt.co.jp, daniel@iogearbox.net, ast@kernel.org,
	bpf@vger.kernel.org, netdev@vger.kernel.org
Cc: bjorn.topel@intel.com, magnus.karlsson@intel.com,
	Maciej Fijalkowski
Subject: [PATCH bpf] veth: store queue_mapping independently of XDP prog presence
Date: Wed, 3 Mar 2021 16:29:03 +0100
Message-Id: <20210303152903.11172-1-maciej.fijalkowski@intel.com>
List-ID: <bpf.vger.kernel.org>

Currently, veth_xmit() calls skb_record_rx_queue() only when an XDP
program is loaded on the peer interface in native mode. If the peer has
an XDP program attached in generic mode instead, then
netif_receive_generic_xdp() calls netif_get_rxqueue(skb), so for a
multi-queue veth it will not be possible to grab the correct rxq.

To fix that, store the queue_mapping independently of XDP program
presence on the peer interface.

Fixes: 638264dc9022 ("veth: Support per queue XDP ring")
Signed-off-by: Maciej Fijalkowski
Acked-by: Toshiaki Makita
---
 drivers/net/veth.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index aa1a66ad2ce5..34e49c75db42 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -302,8 +302,7 @@ static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (rxq < rcv->real_num_rx_queues) {
 		rq = &rcv_priv->rq[rxq];
 		rcv_xdp = rcu_access_pointer(rq->xdp_prog);
-		if (rcv_xdp)
-			skb_record_rx_queue(skb, rxq);
+		skb_record_rx_queue(skb, rxq);
 	}
 
 	skb_tx_timestamp(skb);