From patchwork Tue Mar 25 14:28:34 2025
X-Patchwork-Submitter: "Chia-Yu Chang (Nokia)"
X-Patchwork-Id: 14029104
X-Patchwork-Delegate: kuba@kernel.org
From: chia-yu.chang@nokia-bell-labs.com
To: netdev@vger.kernel.org, dave.taht@gmail.com, pabeni@redhat.com, jhs@mojatatu.com, kuba@kernel.org, stephen@networkplumber.org, xiyou.wangcong@gmail.com, jiri@resnulli.us, davem@davemloft.net, edumazet@google.com, horms@kernel.org, andrew+netdev@lunn.ch, donald.hunter@gmail.com, ast@fiberby.net, liuhangbin@gmail.com, shuah@kernel.org, linux-kselftest@vger.kernel.org, ij@kernel.org, ncardwell@google.com, koen.de_schepper@nokia-bell-labs.com, g.white@cablelabs.com, ingemar.s.johansson@ericsson.com, mirja.kuehlewind@ericsson.com, cheshire@apple.com, rs.ietf@gmx.at, Jason_Livingood@comcast.com, vidhi_goel@apple.com
Cc: Chia-Yu Chang <chia-yu.chang@nokia-bell-labs.com>
Subject: [PATCH v10 net-next 3/5] sched: Struct definition and parsing of dualpi2 qdisc
Date: Tue, 25 Mar 2025 15:28:34 +0100
Message-Id: <20250325142836.47048-4-chia-yu.chang@nokia-bell-labs.com>
In-Reply-To: <20250325142836.47048-1-chia-yu.chang@nokia-bell-labs.com>
References: <20250325142836.47048-1-chia-yu.chang@nokia-bell-labs.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
From: Chia-Yu Chang <chia-yu.chang@nokia-bell-labs.com>

DualPI2 is the reference implementation of the IETF RFC 9332 DualQ Coupled AQM (https://datatracker.ietf.org/doc/html/rfc9332). It provides two queues: a low-latency queue (L-queue) and a classic queue (C-queue). By default, it directs non-ECN and ECT(0) traffic into the C-queue, and ECT(1) and CE traffic into the L-queue, as per the IETF spec.

This patch defines the dualpi2 qdisc structure and its netlink parsing; the following two patches add dumping and enqueue/dequeue for DualPI2.
Signed-off-by: Chia-Yu Chang <chia-yu.chang@nokia-bell-labs.com> --- include/uapi/linux/pkt_sched.h | 24 ++ net/sched/sch_dualpi2.c | 553 +++++++++++++++++++++++++++++++++ 2 files changed, 577 insertions(+) create mode 100644 net/sched/sch_dualpi2.c diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h index 25a9a47001cd..fd5bec118cdc 100644 --- a/include/uapi/linux/pkt_sched.h +++ b/include/uapi/linux/pkt_sched.h @@ -1210,4 +1210,28 @@ enum { #define TCA_ETS_MAX (__TCA_ETS_MAX - 1) +/* DUALPI2 */ +enum { + TCA_DUALPI2_UNSPEC, + TCA_DUALPI2_LIMIT, /* Packets */ + TCA_DUALPI2_MEMORY_LIMIT, /* Bytes */ + TCA_DUALPI2_TARGET, /* us */ + TCA_DUALPI2_TUPDATE, /* us */ + TCA_DUALPI2_ALPHA, /* Hz scaled up by 256 */ + TCA_DUALPI2_BETA, /* Hz scaled up by 256 */ + TCA_DUALPI2_STEP_THRESH, /* Packets or us */ + TCA_DUALPI2_STEP_PACKETS, /* Whether STEP_THRESH is in packets */ + TCA_DUALPI2_MIN_QLEN_STEP, /* Minimum qlen to apply STEP_THRESH */ + TCA_DUALPI2_COUPLING, /* Coupling factor between queues */ + TCA_DUALPI2_DROP_OVERLOAD, /* Whether to drop on overload */ + TCA_DUALPI2_DROP_EARLY, /* Whether to drop on enqueue */ + TCA_DUALPI2_C_PROTECTION, /* Percentage */ + TCA_DUALPI2_ECN_MASK, /* L4S queue classification mask */ + TCA_DUALPI2_SPLIT_GSO, /* Split GSO packets at enqueue */ + TCA_DUALPI2_PAD, + __TCA_DUALPI2_MAX +}; + +#define TCA_DUALPI2_MAX (__TCA_DUALPI2_MAX - 1) + #endif diff --git a/net/sched/sch_dualpi2.c b/net/sched/sch_dualpi2.c new file mode 100644 index 000000000000..3bf9db6d3bb3 --- /dev/null +++ b/net/sched/sch_dualpi2.c @@ -0,0 +1,553 @@ +// SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause +/* Copyright (C) 2024 Nokia + * + * Author: Koen De Schepper + * Author: Olga Albisser + * Author: Henrik Steen + * Author: Olivier Tilmans + * Author: Chia-Yu Chang + * + * DualPI Improved with a Square (dualpi2): + * - Supports congestion controls that comply with the Prague requirements + * in RFC9331 (e.g.
TCP-Prague) + * - Supports coupled dual-queue with PI2 as defined in RFC9332 + * - Supports ECN L4S-identifier (IP.ECN==0b*1) + * + * note: Although DCTCP and BBRv3 can use shallow-threshold ECN marks, + * they do not meet the 'Prague L4S Requirements' listed in RFC 9331 + * Section 4, so they can only be used with DualPI2 in a datacenter + * context. + * + * References: + * - RFC9332: https://datatracker.ietf.org/doc/html/rfc9332 + * - De Schepper, Koen, et al. "PI 2: A linearized AQM for both classic and + * scalable TCP." in proc. ACM CoNEXT'16, 2016. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +/* Using 32b probabilities enables supporting flows with windows up to + * ~8.6 * 1e9 packets, i.e., twice the maximal snd_cwnd. + * MAX_PROB must be consistent with the RNG in dualpi2_roll(). + */ +#define MAX_PROB U32_MAX + +/* alpha/beta values exchanged over netlink are in units of 256ns */ +#define ALPHA_BETA_SHIFT 8 + +/* Scaled values of alpha/beta must fit in 32b to avoid overflow in later + * computations. Consequently (see dualpi2_scale_alpha_beta()), their + * netlink-provided values can use at most 31b, i.e. be at most (2^23)-1 + * (~4MHz) as those are given in 1/256th. This enables tuning alpha/beta to + * control flows whose maximal RTTs can range from usec up to a few secs. + */ +#define ALPHA_BETA_MAX ((1U << 31) - 1) + +/* Internal alpha/beta are in units of 64ns. + * This enables using all alpha/beta values in the allowed range without loss + * of precision due to rounding when scaling them internally, e.g., + * scale_alpha_beta(1) will not round down to 0.
+ */ +#define ALPHA_BETA_GRANULARITY 6 + +#define ALPHA_BETA_SCALING (ALPHA_BETA_SHIFT - ALPHA_BETA_GRANULARITY) + +/* We express the weights (wc, wl) in %, i.e., wc + wl = 100 */ +#define MAX_WC 100 + +struct dualpi2_sched_data { + struct Qdisc *l_queue; /* The L4S Low latency queue (L-queue) */ + struct Qdisc *sch; /* The Classic queue (C-queue) */ + + /* Registered tc filters */ + struct tcf_proto __rcu *tcf_filters; + struct tcf_block *tcf_block; + + struct { /* PI2 parameters */ + u64 target; /* Target delay in nanoseconds */ + u32 tupdate;/* Timer frequency in nanoseconds */ + u32 prob; /* Base PI probability */ + u32 alpha; /* Gain factor for the integral rate response */ + u32 beta; /* Gain factor for the proportional response */ + struct hrtimer timer; /* prob update timer */ + } pi2; + + struct { /* Step AQM (L-queue only) parameters */ + u32 thresh; /* Step threshold */ + bool in_packets; /* Whether the step is in packets or time */ + } step; + + struct { /* C-queue starvation protection */ + s32 credit; /* Credit (sign indicates which queue) */ + s32 init; /* Reset value of the credit */ + u8 wc; /* C-queue weight (between 0 and MAX_WC) */ + u8 wl; /* L-queue weight (MAX_WC - wc) */ + } c_protection; + + /* General dualQ parameters */ + u32 memory_limit; /* Memory limit of both queues */ + u8 coupling_factor;/* Coupling factor (k) between both queues */ + u8 ecn_mask; /* Mask to match packets into L-queue */ + u32 min_qlen_step; /* Minimum queue length to apply step thresh */ + bool drop_early; /* Drop at enqueue instead of dequeue if true */ + bool drop_overload; /* Drop (1) on overload, or overflow (0) */ + bool split_gso; /* Split aggregated skb (1) or leave as is */ + + /* Statistics */ + u64 c_head_ts; /* Enqueue timestamp of the C-queue head */ + u64 l_head_ts; /* Enqueue timestamp of the L-queue head */ + u64 last_qdelay; /* Q delay val at the last probability update */ + u32 packets_in_c; /* Enqueue packet counter of the C-queue */ + u32 
packets_in_l; /* Enqueue packet counter of the L-queue */ + u32 maxq; /* Maximum queue size of the C-queue */ + u32 ecn_mark; /* ECN mark pkt counter due to PI probability */ + u32 step_marks; /* ECN mark pkt counter due to step AQM */ + u32 memory_used; /* Memory used of both queues */ + u32 max_memory_used;/* Maximum used memory */ +}; + +static u32 dualpi2_scale_alpha_beta(u32 param) +{ + u64 tmp = ((u64)param * MAX_PROB >> ALPHA_BETA_SCALING); + + do_div(tmp, NSEC_PER_SEC); + return tmp; +} + +static ktime_t next_pi2_timeout(struct dualpi2_sched_data *q) +{ + return ktime_add_ns(ktime_get_ns(), q->pi2.tupdate); +} + +static void dualpi2_reset_c_protection(struct dualpi2_sched_data *q) +{ + q->c_protection.credit = q->c_protection.init; +} + +/* This computes the initial credit value and WRR weight for the L queue (wl) + * from the weight of the C queue (wc). + * If wl > wc, the scheduler will start with the L queue when reset. + */ +static void dualpi2_calculate_c_protection(struct Qdisc *sch, + struct dualpi2_sched_data *q, u32 wc) +{ + q->c_protection.wc = wc; + q->c_protection.wl = MAX_WC - wc; + q->c_protection.init = (s32)psched_mtu(qdisc_dev(sch)) * + ((int)q->c_protection.wc - (int)q->c_protection.wl); + dualpi2_reset_c_protection(q); +} + +static s64 __scale_delta(u64 diff) +{ + do_div(diff, 1 << ALPHA_BETA_GRANULARITY); + return diff; +} + +static void get_queue_delays(struct dualpi2_sched_data *q, u64 *qdelay_c, + u64 *qdelay_l) +{ + u64 now, qc, ql; + + now = ktime_get_ns(); + qc = READ_ONCE(q->c_head_ts); + ql = READ_ONCE(q->l_head_ts); + + *qdelay_c = qc ? now - qc : 0; + *qdelay_l = ql ? 
now - ql : 0; +} + +static u32 calculate_probability(struct Qdisc *sch) +{ + struct dualpi2_sched_data *q = qdisc_priv(sch); + u32 new_prob; + u64 qdelay_c; + u64 qdelay_l; + u64 qdelay; + s64 delta; + + get_queue_delays(q, &qdelay_c, &qdelay_l); + qdelay = max(qdelay_l, qdelay_c); + /* Alpha and beta take at most 32b, i.e., the delay difference would + * overflow for queuing delay differences > ~4.2sec. + */ + delta = ((s64)qdelay - q->pi2.target) * q->pi2.alpha; + delta += ((s64)qdelay - q->last_qdelay) * q->pi2.beta; + if (delta > 0) { + new_prob = __scale_delta(delta) + q->pi2.prob; + if (new_prob < q->pi2.prob) + new_prob = MAX_PROB; + } else { + new_prob = q->pi2.prob - __scale_delta(~delta + 1); + if (new_prob > q->pi2.prob) + new_prob = 0; + } + q->last_qdelay = qdelay; + /* If we do not drop on overload, ensure we cap the L4S probability to + * 100% to keep window fairness when overflowing. + */ + if (!q->drop_overload) + return min_t(u32, new_prob, MAX_PROB / q->coupling_factor); + return new_prob; +} + +static enum hrtimer_restart dualpi2_timer(struct hrtimer *timer) +{ + struct dualpi2_sched_data *q = from_timer(q, timer, pi2.timer); + + WRITE_ONCE(q->pi2.prob, calculate_probability(q->sch)); + + hrtimer_set_expires(&q->pi2.timer, next_pi2_timeout(q)); + return HRTIMER_RESTART; +} + +static struct netlink_range_validation dualpi2_alpha_beta_range = { + .min = 1, + .max = ALPHA_BETA_MAX, +}; + +static struct netlink_range_validation dualpi2_wc_range = { + .min = 0, + .max = MAX_WC, +}; + +static const struct nla_policy dualpi2_policy[TCA_DUALPI2_MAX + 1] = { + [TCA_DUALPI2_LIMIT] = NLA_POLICY_MIN(NLA_U32, 1), + [TCA_DUALPI2_MEMORY_LIMIT] = NLA_POLICY_MIN(NLA_U32, 1), + [TCA_DUALPI2_TARGET] = {.type = NLA_U32}, + [TCA_DUALPI2_TUPDATE] = NLA_POLICY_MIN(NLA_U32, 1), + [TCA_DUALPI2_ALPHA] = + NLA_POLICY_FULL_RANGE(NLA_U32, &dualpi2_alpha_beta_range), + [TCA_DUALPI2_BETA] = + NLA_POLICY_FULL_RANGE(NLA_U32, &dualpi2_alpha_beta_range), +
[TCA_DUALPI2_STEP_THRESH] = {.type = NLA_U32}, + [TCA_DUALPI2_STEP_PACKETS] = {.type = NLA_U8}, + [TCA_DUALPI2_MIN_QLEN_STEP] = {.type = NLA_U32}, + [TCA_DUALPI2_COUPLING] = NLA_POLICY_MIN(NLA_U8, 1), + [TCA_DUALPI2_DROP_OVERLOAD] = {.type = NLA_U8}, + [TCA_DUALPI2_DROP_EARLY] = {.type = NLA_U8}, + [TCA_DUALPI2_C_PROTECTION] = + NLA_POLICY_FULL_RANGE(NLA_U8, &dualpi2_wc_range), + [TCA_DUALPI2_ECN_MASK] = {.type = NLA_U8}, + [TCA_DUALPI2_SPLIT_GSO] = {.type = NLA_U8}, +}; + +static int dualpi2_change(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) +{ + struct nlattr *tb[TCA_DUALPI2_MAX + 1]; + struct dualpi2_sched_data *q; + int old_backlog; + int old_qlen; + int err; + + if (!opt) + return -EINVAL; + err = nla_parse_nested(tb, TCA_DUALPI2_MAX, opt, dualpi2_policy, + extack); + if (err < 0) + return err; + + q = qdisc_priv(sch); + sch_tree_lock(sch); + + if (tb[TCA_DUALPI2_LIMIT]) { + u32 limit = nla_get_u32(tb[TCA_DUALPI2_LIMIT]); + + WRITE_ONCE(sch->limit, limit); + WRITE_ONCE(q->memory_limit, limit * psched_mtu(qdisc_dev(sch))); + } + + if (tb[TCA_DUALPI2_MEMORY_LIMIT]) + WRITE_ONCE(q->memory_limit, + nla_get_u32(tb[TCA_DUALPI2_MEMORY_LIMIT])); + + if (tb[TCA_DUALPI2_TARGET]) { + u64 target = nla_get_u32(tb[TCA_DUALPI2_TARGET]); + + WRITE_ONCE(q->pi2.target, target * NSEC_PER_USEC); + } + + if (tb[TCA_DUALPI2_TUPDATE]) { + u64 tupdate = nla_get_u32(tb[TCA_DUALPI2_TUPDATE]); + + WRITE_ONCE(q->pi2.tupdate, tupdate * NSEC_PER_USEC); + } + + if (tb[TCA_DUALPI2_ALPHA]) { + u32 alpha = nla_get_u32(tb[TCA_DUALPI2_ALPHA]); + + WRITE_ONCE(q->pi2.alpha, dualpi2_scale_alpha_beta(alpha)); + } + + if (tb[TCA_DUALPI2_BETA]) { + u32 beta = nla_get_u32(tb[TCA_DUALPI2_BETA]); + + WRITE_ONCE(q->pi2.beta, dualpi2_scale_alpha_beta(beta)); + } + + if (tb[TCA_DUALPI2_STEP_PACKETS]) { + bool step_pkt = !!nla_get_u8(tb[TCA_DUALPI2_STEP_PACKETS]); + u32 step_th = READ_ONCE(q->step.thresh); + + WRITE_ONCE(q->step.in_packets, step_pkt); + WRITE_ONCE(q->step.thresh, 
+ step_pkt ? step_th : (step_th * NSEC_PER_USEC)); + } + + if (tb[TCA_DUALPI2_STEP_THRESH]) { + u32 step_th = nla_get_u32(tb[TCA_DUALPI2_STEP_THRESH]); + bool step_pkt = READ_ONCE(q->step.in_packets); + + WRITE_ONCE(q->step.thresh, + step_pkt ? step_th : (step_th * NSEC_PER_USEC)); + } + + if (tb[TCA_DUALPI2_MIN_QLEN_STEP]) + WRITE_ONCE(q->min_qlen_step, + nla_get_u32(tb[TCA_DUALPI2_MIN_QLEN_STEP])); + + if (tb[TCA_DUALPI2_COUPLING]) { + u8 coupling = nla_get_u8(tb[TCA_DUALPI2_COUPLING]); + + WRITE_ONCE(q->coupling_factor, coupling); + } + + if (tb[TCA_DUALPI2_DROP_OVERLOAD]) + WRITE_ONCE(q->drop_overload, + !!nla_get_u8(tb[TCA_DUALPI2_DROP_OVERLOAD])); + + if (tb[TCA_DUALPI2_DROP_EARLY]) + WRITE_ONCE(q->drop_early, + !!nla_get_u8(tb[TCA_DUALPI2_DROP_EARLY])); + + if (tb[TCA_DUALPI2_C_PROTECTION]) { + u8 wc = nla_get_u8(tb[TCA_DUALPI2_C_PROTECTION]); + + dualpi2_calculate_c_protection(sch, q, wc); + } + + if (tb[TCA_DUALPI2_ECN_MASK]) + WRITE_ONCE(q->ecn_mask, + nla_get_u8(tb[TCA_DUALPI2_ECN_MASK])); + + if (tb[TCA_DUALPI2_SPLIT_GSO]) + WRITE_ONCE(q->split_gso, + !!nla_get_u8(tb[TCA_DUALPI2_SPLIT_GSO])); + + old_qlen = qdisc_qlen(sch); + old_backlog = sch->qstats.backlog; + while (qdisc_qlen(sch) > sch->limit || + q->memory_used > q->memory_limit) { + struct sk_buff *skb = __qdisc_dequeue_head(&sch->q); + + q->memory_used -= skb->truesize; + qdisc_qstats_backlog_dec(sch, skb); + rtnl_qdisc_drop(skb, sch); + } + qdisc_tree_reduce_backlog(sch, old_qlen - qdisc_qlen(sch), + old_backlog - sch->qstats.backlog); + + sch_tree_unlock(sch); + return 0; +} + +/* Default alpha/beta values give a 10dB stability margin with max_rtt=100ms. 
*/ +static void dualpi2_reset_default(struct Qdisc *sch) +{ + struct dualpi2_sched_data *q = qdisc_priv(sch); + + q->sch->limit = 10000; /* Max 125ms at 1Gbps */ + q->memory_limit = q->sch->limit * psched_mtu(qdisc_dev(sch)); + + q->pi2.target = 15 * NSEC_PER_MSEC; + q->pi2.tupdate = 16 * NSEC_PER_MSEC; + q->pi2.alpha = dualpi2_scale_alpha_beta(41); /* ~0.16 Hz * 256 */ + q->pi2.beta = dualpi2_scale_alpha_beta(819); /* ~3.20 Hz * 256 */ + + q->step.thresh = 1 * NSEC_PER_MSEC; + q->step.in_packets = false; + + dualpi2_calculate_c_protection(q->sch, q, 10); /* wc=10%, wl=90% */ + + q->ecn_mask = INET_ECN_ECT_1; + q->min_qlen_step = 0; + q->coupling_factor = 2; /* window fairness for equal RTTs */ + q->drop_overload = true; /* Preserve latency by dropping */ + q->drop_early = false; /* PI2 drops on dequeue */ + q->split_gso = true; +} + +static int dualpi2_init(struct Qdisc *sch, struct nlattr *opt, + struct netlink_ext_ack *extack) +{ + struct dualpi2_sched_data *q = qdisc_priv(sch); + int err; + + q->l_queue = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, + TC_H_MAKE(sch->handle, 1), extack); + if (!q->l_queue) + return -ENOMEM; + + err = tcf_block_get(&q->tcf_block, &q->tcf_filters, sch, extack); + if (err) + return err; + + q->sch = sch; + dualpi2_reset_default(sch); + hrtimer_init(&q->pi2.timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED); + q->pi2.timer.function = dualpi2_timer; + + if (opt) { + err = dualpi2_change(sch, opt, extack); + + if (err) + return err; + } + + hrtimer_start(&q->pi2.timer, next_pi2_timeout(q), + HRTIMER_MODE_ABS_PINNED); + return 0; +} + +/* Reset both L-queue and C-queue, internal packet counters, PI probability, + * C-queue protection credit, and timestamps, while preserving current + * configuration of DUALPI2. 
+ */ +static void dualpi2_reset(struct Qdisc *sch) +{ + struct dualpi2_sched_data *q = qdisc_priv(sch); + + qdisc_reset_queue(sch); + qdisc_reset_queue(q->l_queue); + q->c_head_ts = 0; + q->l_head_ts = 0; + q->pi2.prob = 0; + q->packets_in_c = 0; + q->packets_in_l = 0; + q->maxq = 0; + q->ecn_mark = 0; + q->step_marks = 0; + q->memory_used = 0; + q->max_memory_used = 0; + dualpi2_reset_c_protection(q); +} + +static void dualpi2_destroy(struct Qdisc *sch) +{ + struct dualpi2_sched_data *q = qdisc_priv(sch); + + q->pi2.tupdate = 0; + hrtimer_cancel(&q->pi2.timer); + if (q->l_queue) + qdisc_put(q->l_queue); + tcf_block_put(q->tcf_block); +} + +static struct Qdisc *dualpi2_leaf(struct Qdisc *sch, unsigned long arg) +{ + return NULL; +} + +static unsigned long dualpi2_find(struct Qdisc *sch, u32 classid) +{ + return 0; +} + +static unsigned long dualpi2_bind(struct Qdisc *sch, unsigned long parent, + u32 classid) +{ + return 0; +} + +static void dualpi2_unbind(struct Qdisc *q, unsigned long cl) +{ +} + +static struct tcf_block *dualpi2_tcf_block(struct Qdisc *sch, unsigned long cl, + struct netlink_ext_ack *extack) +{ + struct dualpi2_sched_data *q = qdisc_priv(sch); + + if (cl) + return NULL; + return q->tcf_block; +} + +static void dualpi2_walk(struct Qdisc *sch, struct qdisc_walker *arg) +{ + unsigned int i; + + if (arg->stop) + return; + + /* We statically define only 2 queues */ + for (i = 0; i < 2; i++) { + if (arg->count < arg->skip) { + arg->count++; + continue; + } + if (arg->fn(sch, i + 1, arg) < 0) { + arg->stop = 1; + break; + } + arg->count++; + } +} + +/* Minimal class support to handle tc filters */ +static const struct Qdisc_class_ops dualpi2_class_ops = { + .leaf = dualpi2_leaf, + .find = dualpi2_find, + .tcf_block = dualpi2_tcf_block, + .bind_tcf = dualpi2_bind, + .unbind_tcf = dualpi2_unbind, + .walk = dualpi2_walk, +}; + +static struct Qdisc_ops dualpi2_qdisc_ops __read_mostly = { + .id = "dualpi2", + .cl_ops = &dualpi2_class_ops, + .priv_size =
sizeof(struct dualpi2_sched_data), + .peek = qdisc_peek_dequeued, + .init = dualpi2_init, + .destroy = dualpi2_destroy, + .reset = dualpi2_reset, + .change = dualpi2_change, + .owner = THIS_MODULE, +}; + +static int __init dualpi2_module_init(void) +{ + return register_qdisc(&dualpi2_qdisc_ops); +} + +static void __exit dualpi2_module_exit(void) +{ + unregister_qdisc(&dualpi2_qdisc_ops); +} + +module_init(dualpi2_module_init); +module_exit(dualpi2_module_exit); + +MODULE_DESCRIPTION("Dual Queue with Proportional Integral controller Improved with a Square (dualpi2) scheduler"); +MODULE_AUTHOR("Koen De Schepper "); +MODULE_AUTHOR("Olga Albisser "); +MODULE_AUTHOR("Henrik Steen "); +MODULE_AUTHOR("Olivier Tilmans "); +MODULE_AUTHOR("Chia-Yu Chang "); + +MODULE_LICENSE("Dual BSD/GPL"); +MODULE_VERSION("1.0");