| Message ID | 20250106133837.18609-1-toke@redhat.com (mailing list archive) |
|---|---|
| State | Superseded |
| Delegated to: | Netdev Maintainers |
| Series | [net] sched: sch_cake: add bounds checks to host bulk flow fairness counts |
Hi Toke,

kernel test robot noticed the following build warnings:

[auto build test WARNING on net/main]

url:    https://github.com/intel-lab-lkp/linux/commits/Toke-H-iland-J-rgensen/sched-sch_cake-add-bounds-checks-to-host-bulk-flow-fairness-counts/20250106-214156
base:   net/main
patch link:    https://lore.kernel.org/r/20250106133837.18609-1-toke%40redhat.com
patch subject: [PATCH net] sched: sch_cake: add bounds checks to host bulk flow fairness counts
config: i386-buildonly-randconfig-004-20250107 (https://download.01.org/0day-ci/archive/20250107/202501071052.ZOECqwS9-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250107/202501071052.ZOECqwS9-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501071052.ZOECqwS9-lkp@intel.com/

All warnings (new ones prefixed by >>):

   net/sched/sch_cake.c: In function 'cake_dequeue':
>> net/sched/sch_cake.c:1975:37: warning: variable 'dsthost' set but not used [-Wunused-but-set-variable]
    1975 |         struct cake_host *srchost, *dsthost;
         |                                     ^~~~~~~
>> net/sched/sch_cake.c:1975:27: warning: variable 'srchost' set but not used [-Wunused-but-set-variable]
    1975 |         struct cake_host *srchost, *dsthost;
         |                           ^~~~~~~

vim +/dsthost +1975 net/sched/sch_cake.c

  1970
  1971	static struct sk_buff *cake_dequeue(struct Qdisc *sch)
  1972	{
  1973		struct cake_sched_data *q = qdisc_priv(sch);
  1974		struct cake_tin_data *b = &q->tins[q->cur_tin];
> 1975		struct cake_host *srchost, *dsthost;
  1976		ktime_t now = ktime_get();
  1977		struct cake_flow *flow;
  1978		struct list_head *head;
  1979		bool first_flow = true;
  1980		struct sk_buff *skb;
  1981		u64 delay;
  1982		u32 len;
  1983
  1984	begin:
  1985		if (!sch->q.qlen)
  1986			return NULL;
  1987
  1988		/* global hard shaper */
  1989		if (ktime_after(q->time_next_packet, now) &&
  1990		    ktime_after(q->failsafe_next_packet, now)) {
  1991			u64 next = min(ktime_to_ns(q->time_next_packet),
  1992				       ktime_to_ns(q->failsafe_next_packet));
  1993
  1994			sch->qstats.overlimits++;
  1995			qdisc_watchdog_schedule_ns(&q->watchdog, next);
  1996			return NULL;
  1997		}
  1998
  1999		/* Choose a class to work on. */
  2000		if (!q->rate_ns) {
  2001			/* In unlimited mode, can't rely on shaper timings, just balance
  2002			 * with DRR
  2003			 */
  2004			bool wrapped = false, empty = true;
  2005
  2006			while (b->tin_deficit < 0 ||
  2007			       !(b->sparse_flow_count + b->bulk_flow_count)) {
  2008				if (b->tin_deficit <= 0)
  2009					b->tin_deficit += b->tin_quantum;
  2010				if (b->sparse_flow_count + b->bulk_flow_count)
  2011					empty = false;
  2012
  2013				q->cur_tin++;
  2014				b++;
  2015				if (q->cur_tin >= q->tin_cnt) {
  2016					q->cur_tin = 0;
  2017					b = q->tins;
  2018
  2019					if (wrapped) {
  2020						/* It's possible for q->qlen to be
  2021						 * nonzero when we actually have no
  2022						 * packets anywhere.
  2023						 */
  2024						if (empty)
  2025							return NULL;
  2026					} else {
  2027						wrapped = true;
  2028					}
  2029				}
  2030			}
  2031		} else {
  2032			/* In shaped mode, choose:
  2033			 * - Highest-priority tin with queue and meeting schedule, or
  2034			 * - The earliest-scheduled tin with queue.
  2035			 */
  2036			ktime_t best_time = KTIME_MAX;
  2037			int tin, best_tin = 0;
  2038
  2039			for (tin = 0; tin < q->tin_cnt; tin++) {
  2040			        b = q->tins + tin;
  2041			        if ((b->sparse_flow_count + b->bulk_flow_count) > 0) {
  2042			                ktime_t time_to_pkt = \
  2043			                        ktime_sub(b->time_next_packet, now);
  2044
  2045			                if (ktime_to_ns(time_to_pkt) <= 0 ||
  2046			                    ktime_compare(time_to_pkt,
  2047			                                  best_time) <= 0) {
  2048			                        best_time = time_to_pkt;
  2049			                        best_tin = tin;
  2050			                }
  2051			        }
  2052			}
  2053
  2054		        q->cur_tin = best_tin;
  2055		        b = q->tins + best_tin;
  2056
  2057		        /* No point in going further if no packets to deliver. */
  2058		        if (unlikely(!(b->sparse_flow_count + b->bulk_flow_count)))
  2059		                return NULL;
  2060		}
  2061
  2062	retry:
  2063		/* service this class */
  2064		head = &b->decaying_flows;
  2065		if (!first_flow || list_empty(head)) {
  2066			head = &b->new_flows;
  2067			if (list_empty(head)) {
  2068				head = &b->old_flows;
  2069				if (unlikely(list_empty(head))) {
  2070					head = &b->decaying_flows;
  2071					if (unlikely(list_empty(head)))
  2072						goto begin;
  2073				}
  2074			}
  2075		}
  2076		flow = list_first_entry(head, struct cake_flow, flowchain);
  2077		q->cur_flow = flow - b->flows;
  2078		first_flow = false;
  2079
  2080		/* triple isolation (modified DRR++) */
  2081		srchost = &b->hosts[flow->srchost];
  2082		dsthost = &b->hosts[flow->dsthost];
  2083
  2084		/* flow isolation (DRR++) */
  2085		if (flow->deficit <= 0) {
  2086			/* Keep all flows with deficits out of the sparse and decaying
  2087			 * rotations. No non-empty flow can go into the decaying
  2088			 * rotation, so they can't get deficits
  2089			 */
  2090			if (flow->set == CAKE_SET_SPARSE) {
  2091				if (flow->head) {
  2092					b->sparse_flow_count--;
  2093					b->bulk_flow_count++;
  2094
  2095					cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
  2096					cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
  2097
  2098					flow->set = CAKE_SET_BULK;
  2099				} else {
  2100					/* we've moved it to the bulk rotation for
  2101					 * correct deficit accounting but we still want
  2102					 * to count it as a sparse flow, not a bulk one.
  2103					 */
  2104					flow->set = CAKE_SET_SPARSE_WAIT;
  2105				}
  2106			}
  2107
  2108			flow->deficit += cake_get_flow_quantum(b, flow, q->flow_mode);
  2109			list_move_tail(&flow->flowchain, &b->old_flows);
  2110
  2111			goto retry;
  2112		}
  2113
  2114		/* Retrieve a packet via the AQM */
  2115		while (1) {
  2116			skb = cake_dequeue_one(sch);
  2117			if (!skb) {
  2118				/* this queue was actually empty */
  2119				if (cobalt_queue_empty(&flow->cvars, &b->cparams, now))
  2120					b->unresponsive_flow_count--;
  2121
  2122				if (flow->cvars.p_drop || flow->cvars.count ||
  2123				    ktime_before(now, flow->cvars.drop_next)) {
  2124					/* keep in the flowchain until the state has
  2125					 * decayed to rest
  2126					 */
  2127					list_move_tail(&flow->flowchain,
  2128						       &b->decaying_flows);
  2129					if (flow->set == CAKE_SET_BULK) {
  2130						b->bulk_flow_count--;
  2131
  2132						cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
  2133						cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
  2134
  2135						b->decaying_flow_count++;
  2136					} else if (flow->set == CAKE_SET_SPARSE ||
  2137						   flow->set == CAKE_SET_SPARSE_WAIT) {
  2138						b->sparse_flow_count--;
  2139						b->decaying_flow_count++;
  2140					}
  2141					flow->set = CAKE_SET_DECAYING;
  2142				} else {
  2143					/* remove empty queue from the flowchain */
  2144					list_del_init(&flow->flowchain);
  2145					if (flow->set == CAKE_SET_SPARSE ||
  2146					    flow->set == CAKE_SET_SPARSE_WAIT)
  2147						b->sparse_flow_count--;
  2148					else if (flow->set == CAKE_SET_BULK) {
  2149						b->bulk_flow_count--;
  2150
  2151						cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
  2152						cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
  2153					} else
  2154						b->decaying_flow_count--;
  2155
  2156					flow->set = CAKE_SET_NONE;
  2157				}
  2158				goto begin;
  2159			}
  2160
  2161			/* Last packet in queue may be marked, shouldn't be dropped */
  2162			if (!cobalt_should_drop(&flow->cvars, &b->cparams, now, skb,
  2163						(b->bulk_flow_count *
  2164						 !!(q->rate_flags &
  2165						    CAKE_FLAG_INGRESS))) ||
  2166			    !flow->head)
  2167				break;
  2168
  2169			/* drop this packet, get another one */
  2170			if (q->rate_flags & CAKE_FLAG_INGRESS) {
  2171				len = cake_advance_shaper(q, b, skb,
  2172							  now, true);
  2173				flow->deficit -= len;
  2174				b->tin_deficit -= len;
  2175			}
  2176			flow->dropped++;
  2177			b->tin_dropped++;
  2178			qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(skb));
  2179			qdisc_qstats_drop(sch);
  2180			kfree_skb(skb);
  2181			if (q->rate_flags & CAKE_FLAG_INGRESS)
  2182				goto retry;
  2183		}
  2184
  2185		b->tin_ecn_mark += !!flow->cvars.ecn_marked;
  2186		qdisc_bstats_update(sch, skb);
  2187
  2188		/* collect delay stats */
  2189		delay = ktime_to_ns(ktime_sub(now, cobalt_get_enqueue_time(skb)));
  2190		b->avge_delay = cake_ewma(b->avge_delay, delay, 8);
  2191		b->peak_delay = cake_ewma(b->peak_delay, delay,
  2192					  delay > b->peak_delay ? 2 : 8);
  2193		b->base_delay = cake_ewma(b->base_delay, delay,
  2194					  delay < b->base_delay ? 2 : 8);
  2195
  2196		len = cake_advance_shaper(q, b, skb, now, false);
  2197		flow->deficit -= len;
  2198		b->tin_deficit -= len;
  2199
  2200		if (ktime_after(q->time_next_packet, now) && sch->q.qlen) {
  2201			u64 next = min(ktime_to_ns(q->time_next_packet),
  2202				       ktime_to_ns(q->failsafe_next_packet));
  2203
  2204			qdisc_watchdog_schedule_ns(&q->watchdog, next);
  2205		} else if (!sch->q.qlen) {
  2206			int i;
  2207
  2208			for (i = 0; i < q->tin_cnt; i++) {
  2209				if (q->tins[i].decaying_flow_count) {
  2210					ktime_t next = \
  2211						ktime_add_ns(now,
  2212							     q->tins[i].cparams.target);
  2213
  2214					qdisc_watchdog_schedule_ns(&q->watchdog,
  2215								   ktime_to_ns(next));
  2216					break;
  2217				}
  2218			}
  2219		}
  2220
  2221		if (q->overflow_timeout)
  2222			q->overflow_timeout--;
  2223
  2224		return skb;
  2225	}
  2226
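The two warnings are straightforward fallout from the refactor: after the per-host counter updates move into the new helpers, the srchost/dsthost locals in cake_dequeue() are still assigned but never read. A minimal cleanup would drop the declaration and the two assignments; the sketch below is illustrative only (approximate context, not the actual follow-up version of the patch):

```diff
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 	struct cake_sched_data *q = qdisc_priv(sch);
 	struct cake_tin_data *b = &q->tins[q->cur_tin];
-	struct cake_host *srchost, *dsthost;
 	ktime_t now = ktime_get();
@@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 	q->cur_flow = flow - b->flows;
 	first_flow = false;
 
-	/* triple isolation (modified DRR++) */
-	srchost = &b->hosts[flow->srchost];
-	dsthost = &b->hosts[flow->dsthost];
-
 	/* flow isolation (DRR++) */
 	if (flow->deficit <= 0) {
```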
diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
index 8d8b2db4653c..8f61ecb78139 100644
--- a/net/sched/sch_cake.c
+++ b/net/sched/sch_cake.c
@@ -627,6 +627,63 @@ static bool cake_ddst(int flow_mode)
 	return (flow_mode & CAKE_FLOW_DUAL_DST) == CAKE_FLOW_DUAL_DST;
 }
 
+static void cake_dec_srchost_bulk_flow_count(struct cake_tin_data *q,
+					     struct cake_flow *flow,
+					     int flow_mode)
+{
+	if (likely(cake_dsrc(flow_mode) &&
+		   q->hosts[flow->srchost].srchost_bulk_flow_count))
+		q->hosts[flow->srchost].srchost_bulk_flow_count--;
+}
+
+static void cake_inc_srchost_bulk_flow_count(struct cake_tin_data *q,
+					     struct cake_flow *flow,
+					     int flow_mode)
+{
+	if (likely(cake_dsrc(flow_mode) &&
+		   q->hosts[flow->srchost].srchost_bulk_flow_count < CAKE_QUEUES))
+		q->hosts[flow->srchost].srchost_bulk_flow_count++;
+}
+
+static void cake_dec_dsthost_bulk_flow_count(struct cake_tin_data *q,
+					     struct cake_flow *flow,
+					     int flow_mode)
+{
+	if (likely(cake_ddst(flow_mode) &&
+		   q->hosts[flow->dsthost].dsthost_bulk_flow_count))
+		q->hosts[flow->dsthost].dsthost_bulk_flow_count--;
+}
+
+static void cake_inc_dsthost_bulk_flow_count(struct cake_tin_data *q,
+					     struct cake_flow *flow,
+					     int flow_mode)
+{
+	if (likely(cake_ddst(flow_mode) &&
+		   q->hosts[flow->dsthost].dsthost_bulk_flow_count < CAKE_QUEUES))
+		q->hosts[flow->dsthost].dsthost_bulk_flow_count++;
+}
+
+static u16 cake_get_flow_quantum(struct cake_tin_data *q,
+				 struct cake_flow *flow,
+				 int flow_mode)
+{
+	u16 host_load = 1;
+
+	if (cake_dsrc(flow_mode))
+		host_load = max(host_load,
+				q->hosts[flow->srchost].srchost_bulk_flow_count);
+
+	if (cake_ddst(flow_mode))
+		host_load = max(host_load,
+				q->hosts[flow->dsthost].dsthost_bulk_flow_count);
+
+	/* The get_random_u16() is a way to apply dithering to avoid
+	 * accumulating roundoff errors
+	 */
+	return (q->flow_quantum * quantum_div[host_load] +
+		get_random_u16()) >> 16;
+}
+
 static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 		     int flow_mode, u16 flow_override, u16 host_override)
 {
@@ -773,10 +830,8 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 		allocate_dst = cake_ddst(flow_mode);
 
 		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
-			if (allocate_src)
-				q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
-			if (allocate_dst)
-				q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
+			cake_dec_srchost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
+			cake_dec_dsthost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
 		}
 found:
 		/* reserve queue for future packets in same flow */
@@ -801,9 +856,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 			q->hosts[outer_hash + k].srchost_tag = srchost_hash;
 found_src:
 			srchost_idx = outer_hash + k;
-			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
-				q->hosts[srchost_idx].srchost_bulk_flow_count++;
 			q->flows[reduced_hash].srchost = srchost_idx;
+
+			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+				cake_inc_srchost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
 		}
 
 		if (allocate_dst) {
@@ -824,9 +880,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
 			q->hosts[outer_hash + k].dsthost_tag = dsthost_hash;
 found_dst:
 			dsthost_idx = outer_hash + k;
-			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
-				q->hosts[dsthost_idx].dsthost_bulk_flow_count++;
 			q->flows[reduced_hash].dsthost = dsthost_idx;
+
+			if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+				cake_inc_dsthost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
 		}
 	}
 
@@ -1839,10 +1896,6 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 
 	/* flowchain */
 	if (!flow->set || flow->set == CAKE_SET_DECAYING) {
-		struct cake_host *srchost = &b->hosts[flow->srchost];
-		struct cake_host *dsthost = &b->hosts[flow->dsthost];
-		u16 host_load = 1;
-
 		if (!flow->set) {
 			list_add_tail(&flow->flowchain, &b->new_flows);
 		} else {
@@ -1852,18 +1905,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		flow->set = CAKE_SET_SPARSE;
 		b->sparse_flow_count++;
 
-		if (cake_dsrc(q->flow_mode))
-			host_load = max(host_load, srchost->srchost_bulk_flow_count);
-
-		if (cake_ddst(q->flow_mode))
-			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
-
-		flow->deficit = (b->flow_quantum *
-				 quantum_div[host_load]) >> 16;
+		flow->deficit = cake_get_flow_quantum(b, flow, q->flow_mode);
 	} else if (flow->set == CAKE_SET_SPARSE_WAIT) {
-		struct cake_host *srchost = &b->hosts[flow->srchost];
-		struct cake_host *dsthost = &b->hosts[flow->dsthost];
-
 		/* this flow was empty, accounted as a sparse flow, but actually
 		 * in the bulk rotation.
 		 */
@@ -1871,12 +1914,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		b->sparse_flow_count--;
 		b->bulk_flow_count++;
 
-		if (cake_dsrc(q->flow_mode))
-			srchost->srchost_bulk_flow_count++;
-
-		if (cake_ddst(q->flow_mode))
-			dsthost->dsthost_bulk_flow_count++;
-
+		cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
+		cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
 	}
 
 	if (q->buffer_used > q->buffer_max_used)
@@ -1939,7 +1978,6 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 	struct list_head *head;
 	bool first_flow = true;
 	struct sk_buff *skb;
-	u16 host_load;
 	u64 delay;
 	u32 len;
 
@@ -2042,7 +2080,6 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 	/* triple isolation (modified DRR++) */
 	srchost = &b->hosts[flow->srchost];
 	dsthost = &b->hosts[flow->dsthost];
-	host_load = 1;
 
 	/* flow isolation (DRR++) */
 	if (flow->deficit <= 0) {
@@ -2055,11 +2092,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 				b->sparse_flow_count--;
 				b->bulk_flow_count++;
 
-				if (cake_dsrc(q->flow_mode))
-					srchost->srchost_bulk_flow_count++;
-
-				if (cake_ddst(q->flow_mode))
-					dsthost->dsthost_bulk_flow_count++;
+				cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
+				cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
 
 				flow->set = CAKE_SET_BULK;
 			} else {
@@ -2071,19 +2105,7 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 			}
 		}
 
-		if (cake_dsrc(q->flow_mode))
-			host_load = max(host_load, srchost->srchost_bulk_flow_count);
-
-		if (cake_ddst(q->flow_mode))
-			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
-
-		WARN_ON(host_load > CAKE_QUEUES);
-
-		/* The get_random_u16() is a way to apply dithering to avoid
-		 * accumulating roundoff errors
-		 */
-		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-				  get_random_u16()) >> 16;
+		flow->deficit += cake_get_flow_quantum(b, flow, q->flow_mode);
 
 		list_move_tail(&flow->flowchain, &b->old_flows);
 
 		goto retry;
@@ -2107,11 +2129,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 				if (flow->set == CAKE_SET_BULK) {
 					b->bulk_flow_count--;
 
-					if (cake_dsrc(q->flow_mode))
-						srchost->srchost_bulk_flow_count--;
-
-					if (cake_ddst(q->flow_mode))
-						dsthost->dsthost_bulk_flow_count--;
+					cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
+					cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
 
 					b->decaying_flow_count++;
 				} else if (flow->set == CAKE_SET_SPARSE ||
@@ -2129,12 +2148,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
 				else if (flow->set == CAKE_SET_BULK) {
 					b->bulk_flow_count--;
 
-					if (cake_dsrc(q->flow_mode))
-						srchost->srchost_bulk_flow_count--;
-
-					if (cake_ddst(q->flow_mode))
-						dsthost->dsthost_bulk_flow_count--;
-
+					cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
+					cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
 				} else
 					b->decaying_flow_count--;
Even though we fixed a logic error in the commit cited below, syzbot
still managed to trigger an underflow of the per-host bulk flow
counters, leading to an out of bounds memory access.

To avoid any such logic errors causing out of bounds memory accesses,
this commit factors out all accesses to the per-host bulk flow counters
to a series of helpers that perform bounds-checking before any
increments and decrements. This also has the benefit of improving
readability by moving the conditional checks for the flow mode into
these helpers, instead of having them spread out throughout the code
(which was the cause of the original logic error).

Fixes: 546ea84d07e3 ("sched: sch_cake: fix bulk flow accounting logic for host fairness")
Reported-by: syzbot+f63600d288bfb7057424@syzkaller.appspotmail.com
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
---
 net/sched/sch_cake.c | 135 ++++++++++++++++++++++++-------------------
 1 file changed, 75 insertions(+), 60 deletions(-)