From patchwork Mon Sep 5 19:21:36 2022
From: Toke Høiland-Jørgensen
To: Jamal Hadi Salim, Cong Wang, Jiri Pirko, "David S. Miller",
    Toke Høiland-Jørgensen
Cc: Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org
Subject: [PATCH net] sch_sfb: Also store skb len before calling child enqueue
Date: Mon, 5 Sep 2022 21:21:36 +0200
Message-Id: <20220905192137.965549-1-toke@toke.dk>
X-Mailing-List: netdev@vger.kernel.org

Cong Wang noticed that the previous fix for sch_sfb accessing the queued
skb after enqueueing it to a child qdisc was incomplete: the SFB enqueue
function was also calling qdisc_qstats_backlog_inc() after enqueue, which
reads the pkt len from the skb cb field.

Fix this by also storing the skb len, and using the stored value to
increment the backlog after enqueueing.

Fixes: 9efd23297cca ("sch_sfb: Don't assume the skb is still around after enqueueing to child")
Signed-off-by: Toke Høiland-Jørgensen
Acked-by: Cong Wang
---
 net/sched/sch_sfb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
index 0d761f454ae8..2829455211f8 100644
--- a/net/sched/sch_sfb.c
+++ b/net/sched/sch_sfb.c
@@ -281,6 +281,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 {
 
 	struct sfb_sched_data *q = qdisc_priv(sch);
+	unsigned int len = qdisc_pkt_len(skb);
 	struct Qdisc *child = q->qdisc;
 	struct tcf_proto *fl;
 	struct sfb_skb_cb cb;
@@ -403,7 +404,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	memcpy(&cb, sfb_skb_cb(skb), sizeof(cb));
 	ret = qdisc_enqueue(skb, child, to_free);
 	if (likely(ret == NET_XMIT_SUCCESS)) {
-		qdisc_qstats_backlog_inc(sch, skb);
+		sch->qstats.backlog += len;
 		sch->q.qlen++;
 		increment_qlen(&cb, q);
 	} else if (net_xmit_drop_count(ret)) {
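
For context (not part of the patch): the rule being applied is that a
successful qdisc_enqueue() hands the skb over to the child qdisc, so any
per-packet value the parent still needs, such as qdisc_pkt_len(), has to
be read before the call. A minimal sketch of that pattern follows; the
function name and the explicit child parameter are made up for
illustration, and the real sfb_enqueue() additionally carries the SFB
classification and marking logic.

/* Illustrative only: never touch the skb after qdisc_enqueue() succeeds. */
static int example_child_enqueue(struct sk_buff *skb, struct Qdisc *sch,
				 struct Qdisc *child,
				 struct sk_buff **to_free)
{
	unsigned int len = qdisc_pkt_len(skb);	/* cache before enqueue */
	int ret;

	ret = qdisc_enqueue(skb, child, to_free);
	if (likely(ret == NET_XMIT_SUCCESS)) {
		/* The skb may already be freed or requeued by the child, so
		 * use the cached length rather than
		 * qdisc_qstats_backlog_inc(sch, skb).
		 */
		sch->qstats.backlog += len;
		sch->q.qlen++;
	} else if (net_xmit_drop_count(ret)) {
		qdisc_qstats_drop(sch);
	}
	return ret;
}

qdisc_qstats_backlog_inc(sch, skb) is essentially
sch->qstats.backlog += qdisc_pkt_len(skb), so the open-coded form with a
cached length is equivalent, minus the read of the skb after enqueue.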