From patchwork Fri Feb 9 14:13:26 2024 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551367 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 01/10] thunderbolt: Use DP_LOCAL_CAP for maximum bandwidth calculation Date: Fri, 9 Feb 2024 16:13:26 +0200 Message-ID: <20240209141335.2286786-2-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To:
<20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> Precedence: bulk X-Mailing-List: linux-usb@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The DisplayPort IN adapter DP_LOCAL_CAP holds the aggregated capabilities and gets updated after graphics side does the DPRX capabilities read so we should use this to figure out the maximum possible bandwidth for the DisplayPort tunnel. While there make the variable name to match better what it is used for and add kernel-doc comment to the function. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tunnel.c | 57 ++++++++++++++++-------------------- 1 file changed, 25 insertions(+), 32 deletions(-) diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c index 6fffb2c82d3d..a766ab297064 100644 --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c @@ -926,12 +926,18 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active) return 0; } -/* max_bw is rounded up to next granularity */ +/** + * tb_dp_bandwidth_mode_maximum_bandwidth() - Maximum possible bandwidth + * @tunnel: DP tunnel to check + * @max_bw_rounded: Maximum bandwidth in Mb/s rounded up to the next granularity + * + * Returns maximum possible bandwidth for this tunnel in Mb/s. + */ static int tb_dp_bandwidth_mode_maximum_bandwidth(struct tb_tunnel *tunnel, - int *max_bw) + int *max_bw_rounded) { struct tb_port *in = tunnel->src_port; - int ret, rate, lanes, nrd_bw; + int ret, rate, lanes, max_bw; u32 cap; /* @@ -947,32 +953,18 @@ static int tb_dp_bandwidth_mode_maximum_bandwidth(struct tb_tunnel *tunnel, return ret; rate = tb_dp_cap_get_rate_ext(cap); - if (tb_dp_is_uhbr_rate(rate)) { - /* - * When UHBR is used there is no reduction in lanes so - * we can use this directly. - */ - lanes = tb_dp_cap_get_lanes(cap); - } else { - /* - * If there is no UHBR supported then check the - * non-reduced rate and lanes. - */ - ret = usb4_dp_port_nrd(in, &rate, &lanes); - if (ret) - return ret; - } + lanes = tb_dp_cap_get_lanes(cap); - nrd_bw = tb_dp_bandwidth(rate, lanes); + max_bw = tb_dp_bandwidth(rate, lanes); - if (max_bw) { + if (max_bw_rounded) { ret = usb4_dp_port_granularity(in); if (ret < 0) return ret; - *max_bw = roundup(nrd_bw, ret); + *max_bw_rounded = roundup(max_bw, ret); } - return nrd_bw; + return max_bw; } static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel, @@ -981,7 +973,7 @@ static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel, { struct tb_port *out = tunnel->dst_port; struct tb_port *in = tunnel->src_port; - int ret, allocated_bw, max_bw; + int ret, allocated_bw, max_bw_rounded; if (!usb4_dp_port_bandwidth_mode_enabled(in)) return -EOPNOTSUPP; @@ -995,10 +987,10 @@ static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel, return ret; allocated_bw = ret; - ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw); + ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw_rounded); if (ret < 0) return ret; - if (allocated_bw == max_bw) + if (allocated_bw == max_bw_rounded) allocated_bw = ret; if (tb_port_path_direction_downstream(in, out)) { @@ -1023,17 +1015,18 @@ static int tb_dp_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up * Otherwise we read it from the DPRX. 
*/ if (usb4_dp_port_bandwidth_mode_enabled(in) && tunnel->bw_mode) { - int ret, allocated_bw, max_bw; + int ret, allocated_bw, max_bw_rounded; ret = usb4_dp_port_allocated_bandwidth(in); if (ret < 0) return ret; allocated_bw = ret; - ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw); + ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, + &max_bw_rounded); if (ret < 0) return ret; - if (allocated_bw == max_bw) + if (allocated_bw == max_bw_rounded) allocated_bw = ret; if (tb_port_path_direction_downstream(in, out)) { @@ -1055,24 +1048,24 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up, { struct tb_port *out = tunnel->dst_port; struct tb_port *in = tunnel->src_port; - int max_bw, ret, tmp; + int max_bw_rounded, ret, tmp; if (!usb4_dp_port_bandwidth_mode_enabled(in)) return -EOPNOTSUPP; - ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw); + ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, &max_bw_rounded); if (ret < 0) return ret; if (tb_port_path_direction_downstream(in, out)) { - tmp = min(*alloc_down, max_bw); + tmp = min(*alloc_down, max_bw_rounded); ret = usb4_dp_port_allocate_bandwidth(in, tmp); if (ret) return ret; *alloc_down = tmp; *alloc_up = 0; } else { - tmp = min(*alloc_up, max_bw); + tmp = min(*alloc_up, max_bw_rounded); ret = usb4_dp_port_allocate_bandwidth(in, tmp); if (ret) return ret; From patchwork Fri Feb 9 14:13:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551364 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D1E656A019 for ; Fri, 9 Feb 2024 14:13:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488023; cv=none; b=eIb0s0ACwrQWdmlMPcguFBz7I0HUXAWFhJUtBz4L0qyjSgu9Y4l0rHtIDXNNvk49FXXn2xRQDfxVXRkp+e0GLeIEUOnHbJmtoaHtgYlIQCIhh4bcWdMj+0y34a70RjG9zHguGQJ1n5HyHyaYlj91X4eB5q6bBHcOGyTUTTLtdEU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488023; c=relaxed/simple; bh=bCFcj4DCd1TbIJtgdCpuPO9BQeigS+tJGh3AniGVmPk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=tn22/cEL7IJZHuOvNoRe/GaPmH5VLRkrnDMtoLnj6zT587JOoEpkY8FwJCnnl1yG15i9Bhf37waI/JruxRucJPBpsflkn5sT+wW6CTa22nvpfTnN3ZyAhHgt9DaxbVSuawGNHvjJluzxyxaJbzxBJPgVABgXACf+eMymGjcbEoE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=mARjmoDB; arc=none smtp.client-ip=192.198.163.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="mARjmoDB" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1707488021; x=1739024021; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bCFcj4DCd1TbIJtgdCpuPO9BQeigS+tJGh3AniGVmPk=; 
From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 02/10] thunderbolt: Re-calculate estimated bandwidth when allocation mode is enabled Date: Fri, 9 Feb 2024 16:13:27 +0200 Message-ID: <20240209141335.2286786-3-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> When we program the initial bandwidth estimation, the DPTX (graphics driver) has not yet read the capabilities of the monitor, so the values used are the highest possible for the involved DisplayPort IN and OUT adapters, not the actual monitor capabilities. To give the graphics driver a more accurate bandwidth estimation, re-calculate it once we receive the notification that bandwidth allocation mode has been enabled. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 64dd22e1f5b2..5b0434c140f9 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -2413,10 +2413,19 @@ static void tb_handle_dp_bandwidth_request(struct work_struct *work) ret = usb4_dp_port_requested_bandwidth(in); if (ret < 0) { - if (ret == -ENODATA) - tb_port_dbg(in, "no bandwidth request active\n"); - else + if (ret == -ENODATA) { + /* + * There is no request active so this means the + * BW allocation mode was enabled from graphics + * side. At this point we know that the graphics + * driver has read the DPRX capabilities so we + * can offer a better bandwidth estimation.
+ */ + tb_port_dbg(in, "DPTX enabled bandwidth allocation mode, updating estimated bandwidth\n"); + tb_recalc_estimated_bandwidth(tb); + } else { tb_port_warn(in, "failed to read requested bandwidth\n"); + } goto put_sw; } requested_bw = ret;
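A note on the rounding used above: tb_dp_bandwidth_mode_maximum_bandwidth() (patch 1) rounds the maximum bandwidth up to the DP IN adapter granularity with the kernel's roundup() helper before it is compared against the allocated value. The short userspace sketch below shows only that arithmetic; the bandwidth and granularity figures are invented examples, in the driver they come from DP_LOCAL_CAP and usb4_dp_port_granularity().

#include <stdio.h>

/* Same arithmetic as the kernel's roundup() macro for positive integers. */
static unsigned int roundup_to(unsigned int value, unsigned int granularity)
{
	return ((value + granularity - 1) / granularity) * granularity;
}

int main(void)
{
	/* Example values only: 17280 Mb/s maximum, 250 Mb/s granularity. */
	unsigned int max_bw = 17280, granularity = 250;

	printf("max_bw %u Mb/s rounds up to %u Mb/s\n",
	       max_bw, roundup_to(max_bw, granularity));
	return 0;
}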
From patchwork Fri Feb 9 14:13:28 2024 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551363 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 03/10] thunderbolt: Handle bandwidth allocation mode disable request Date: Fri, 9 Feb 2024 16:13:28 +0200 Message-ID: <20240209141335.2286786-4-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> Graphics can disable DisplayPort bandwidth allocation mode as well, so if this happens make sure to reset the tunnel state accordingly. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 22 ++++++++++++++++------ 1 file changed, 16 insertions(+), 6 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 5b0434c140f9..abd86fd8d71f 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -2406,8 +2406,23 @@ static void tb_handle_dp_bandwidth_request(struct work_struct *work) tb_port_dbg(in, "handling bandwidth allocation request\n"); + tunnel = tb_find_tunnel(tb, TB_TUNNEL_DP, in, NULL); + if (!tunnel) { + tb_port_warn(in, "failed to find tunnel\n"); + goto put_sw; + } + if (!usb4_dp_port_bandwidth_mode_enabled(in)) { - tb_port_warn(in, "bandwidth allocation mode not enabled\n"); + if (tunnel->bw_mode) { + /* + * Reset the tunnel back to use the legacy + * allocation. + */ + tunnel->bw_mode = false; + tb_port_dbg(in, "DPTX disabled bandwidth allocation mode\n"); + } else { + tb_port_warn(in, "bandwidth allocation mode not enabled\n"); + } goto put_sw; } @@ -2432,11 +2447,6 @@ static void tb_handle_dp_bandwidth_request(struct work_struct *work) tb_port_dbg(in, "requested bandwidth %d Mb/s\n", requested_bw); - tunnel = tb_find_tunnel(tb, TB_TUNNEL_DP, in, NULL); - if (!tunnel) { - tb_port_warn(in, "failed to find tunnel\n"); - goto put_sw; - } out = tunnel->dst_port;
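Taken together, patches 2 and 3 leave tb_handle_dp_bandwidth_request() with three distinct outcomes before any allocation is attempted: the DPTX just enabled the mode (no request pending, re-estimate), the DPTX disabled the mode (drop back to legacy allocation), or there is a real bandwidth request to service. The standalone C model below only illustrates that decision flow; the struct and its fields are invented stand-ins for the adapter registers and struct tb_tunnel state the driver actually consults.

#include <stdbool.h>
#include <stdio.h>

/*
 * Standalone model of the decision flow in tb_handle_dp_bandwidth_request()
 * after patches 2 and 3. All of the state below is made up for illustration;
 * in the driver it comes from the DP IN adapter registers and struct tb_tunnel.
 */
struct model {
	bool mode_enabled;	/* usb4_dp_port_bandwidth_mode_enabled() */
	bool tunnel_bw_mode;	/* tunnel->bw_mode */
	int requested_bw;	/* usb4_dp_port_requested_bandwidth(), < 0 means no request */
};

static void handle_request(struct model *m)
{
	if (!m->mode_enabled) {
		if (m->tunnel_bw_mode) {
			m->tunnel_bw_mode = false;
			printf("DPTX disabled allocation mode, back to legacy allocation\n");
		} else {
			printf("allocation mode not enabled, nothing to do\n");
		}
		return;
	}
	if (m->requested_bw < 0) {
		printf("mode just enabled, re-calculating estimated bandwidth\n");
		return;
	}
	printf("servicing request for %d Mb/s\n", m->requested_bw);
}

int main(void)
{
	struct model enabled  = { .mode_enabled = true, .requested_bw = -1 };
	struct model request  = { .mode_enabled = true, .requested_bw = 4500 };
	struct model disabled = { .mode_enabled = false, .tunnel_bw_mode = true };

	handle_request(&enabled);
	handle_request(&request);
	handle_request(&disabled);
	return 0;
}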
From patchwork Fri Feb 9 14:13:29 2024 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551365 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 04/10] thunderbolt: Log an error if DPTX request is not cleared Date: Fri, 9 Feb 2024 16:13:29 +0200 Message-ID: <20240209141335.2286786-5-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> This helps debugging issues around DisplayPort bandwidth allocation mode.
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/usb4.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c index 4b35898aa216..f4fba144105d 100644 --- a/drivers/thunderbolt/usb4.c +++ b/drivers/thunderbolt/usb4.c @@ -2858,8 +2858,10 @@ static int usb4_dp_port_wait_and_clear_cm_ack(struct tb_port *port, usleep_range(50, 100); } while (ktime_before(ktime_get(), end)); - if (val & ADP_DP_CS_8_DR) + if (val & ADP_DP_CS_8_DR) { + tb_port_warn(port, "timeout waiting for DPTX request to clear\n"); return -ETIMEDOUT; + } ret = tb_port_read(port, &val, TB_CFG_PORT, port->cap_adap + ADP_DP_CS_2, 1); From patchwork Fri Feb 9 14:13:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551366 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 70AFD6A32F for ; Fri, 9 Feb 2024 14:13:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488025; cv=none; b=O8XenNVI4lzYQDnTtrVqTk4J1SwMVJiidujYF9GQKtRFfUMV95o1FxEoaPvXqT1XmFvkPvaIX7v/NLzGHr3B+h9hiLANfsfojiccK++U2kOGBNzaTsqmqkoG5sDNXSX+LUwbj5XVN1fysoXN/k3YvNdt//FmgqgkaLo/v5UXrDk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488025; c=relaxed/simple; bh=p4PErb2ZlcKYuDfKlHWV4bD1hOGwfKHFwWkTpyXORmA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=J23QuqtruIbGmgB/OuaZucBYJH+k7z+xjXkf8SabFheySHMU9UifDiVqNj7D3m3HUzas8aGuovXF3ugKpYoc8NM7kPYXRFyV0LVpcv8rkTQQtJq4DOhKOpjRT+FgAT3W4R2u6f+UyCoWod3BPYWE9/ag+3aJdEwwfuHKY2MDBms= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=FTWvXawN; arc=none smtp.client-ip=192.198.163.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="FTWvXawN" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1707488023; x=1739024023; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=p4PErb2ZlcKYuDfKlHWV4bD1hOGwfKHFwWkTpyXORmA=; b=FTWvXawN071eRBKsDREwlMgmh1JShfCSMWpJO5g27LOAZ+7n8zwBTbbk lrh0HGsJxXMfG5LL/kqFpH6sLSapJ+nO6egDPcxjgmhn6CXn75EfQNhlR J0+ZHgYRobtJv8p4zjtyDePuPl+3ZCa3qP0D+84PNpj4pWEMcaYDws7bH 8rMnaclJ/50zrAHGV+UQQzI/lpJJZeo6dP2XSsJhaLq3VXomLWlxDfCRT gS3e54DqoI3AsL7Qt8oDuf7PuTurE6yHkZ/lQw9FYeLj1cmTwPGHlhDzQ CUCi/RwcdthVKwjG2LK6FEaiq8QeAfG4Ca3T24Gyz/VB8HCK9Ok2BItxy g==; X-IronPort-AV: E=McAfee;i="6600,9927,10978"; a="12082126" X-IronPort-AV: E=Sophos;i="6.05,257,1701158400"; d="scan'208";a="12082126" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmvoesa105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Feb 2024 06:13:41 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10978"; a="934434444" X-IronPort-AV: 
E=Sophos;i="6.05,257,1701158400"; d="scan'208";a="934434444" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga001.fm.intel.com with ESMTP; 09 Feb 2024 06:13:40 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id D070632F; Fri, 9 Feb 2024 16:13:35 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 05/10] thunderbolt: Fail the failed bandwidth request properly Date: Fri, 9 Feb 2024 16:13:30 +0200 Message-ID: <20240209141335.2286786-6-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> Precedence: bulk X-Mailing-List: linux-usb@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The USB4 spec says that if the Connection Manager writes Allocated_BW that is smaller than Requested_BW, the DisplayPort IN adapter signals this failure back to the DPTX (graphics driver). Implement this by rewriting the same allocated bandwidth values back. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index abd86fd8d71f..9dbdf2770f0b 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -2270,11 +2270,11 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, */ ret = tb_tunnel_maximum_bandwidth(tunnel, &max_up, &max_down); if (ret) - return ret; + goto fail; ret = usb4_dp_port_granularity(in); if (ret < 0) - return ret; + goto fail; granularity = ret; max_up_rounded = roundup(max_up, granularity); @@ -2304,7 +2304,8 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, "bandwidth request too high (%d/%d Mb/s > %d/%d Mb/s)\n", requested_up_corrected, requested_down_corrected, max_up_rounded, max_down_rounded); - return -ENOBUFS; + ret = -ENOBUFS; + goto fail; } if ((*requested_up >= 0 && requested_up_corrected <= allocated_up) || @@ -2332,7 +2333,7 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, */ ret = tb_release_unused_usb3_bandwidth(tb, in, out); if (ret) - return ret; + goto fail; /* * Then go over all tunnels that cross the same USB4 ports (they @@ -2357,7 +2358,7 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, *requested_down); if (ret) { tb_configure_sym(tb, in, out, 0, 0, true); - return ret; + goto fail; } ret = tb_tunnel_alloc_bandwidth(tunnel, requested_up, @@ -2372,6 +2373,18 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, reclaim: tb_reclaim_usb3_bandwidth(tb, in, out); +fail: + if (ret && ret != -ENODEV) { + /* + * Write back the same allocated (so no change), this + * makes the DPTX request fail on graphics side. 
+ */ + tb_tunnel_dbg(tunnel, "failing the request by rewriting allocated %d/%d Mb/s\n", allocated_up, allocated_down); + tb_tunnel_alloc_bandwidth(tunnel, &allocated_up, &allocated_down); + } + return ret; } From patchwork Fri Feb 9 14:13:31 2024 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551368 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 06/10] thunderbolt: Re-order
bandwidth group functions Date: Fri, 9 Feb 2024 16:13:31 +0200 Message-ID: <20240209141335.2286786-7-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> Precedence: bulk X-Mailing-List: linux-usb@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 This is needed by the following patches so that we do not have to add forward declaratations for any of these. Separating the move and the actual changes also makes it easier to review the code. No functional changes. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 454 +++++++++++++++++++-------------------- 1 file changed, 225 insertions(+), 229 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 9dbdf2770f0b..d23a80339a8d 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -75,112 +75,6 @@ struct tb_hotplug_event { bool unplug; }; -static void tb_init_bandwidth_groups(struct tb_cm *tcm) -{ - int i; - - for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) { - struct tb_bandwidth_group *group = &tcm->groups[i]; - - group->tb = tcm_to_tb(tcm); - group->index = i + 1; - INIT_LIST_HEAD(&group->ports); - } -} - -static void tb_bandwidth_group_attach_port(struct tb_bandwidth_group *group, - struct tb_port *in) -{ - if (!group || WARN_ON(in->group)) - return; - - in->group = group; - list_add_tail(&in->group_list, &group->ports); - - tb_port_dbg(in, "attached to bandwidth group %d\n", group->index); -} - -static struct tb_bandwidth_group *tb_find_free_bandwidth_group(struct tb_cm *tcm) -{ - int i; - - for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) { - struct tb_bandwidth_group *group = &tcm->groups[i]; - - if (list_empty(&group->ports)) - return group; - } - - return NULL; -} - -static struct tb_bandwidth_group * -tb_attach_bandwidth_group(struct tb_cm *tcm, struct tb_port *in, - struct tb_port *out) -{ - struct tb_bandwidth_group *group; - struct tb_tunnel *tunnel; - - /* - * Find all DP tunnels that go through all the same USB4 links - * as this one. Because we always setup tunnels the same way we - * can just check for the routers at both ends of the tunnels - * and if they are the same we have a match. 
- */ - list_for_each_entry(tunnel, &tcm->tunnel_list, list) { - if (!tb_tunnel_is_dp(tunnel)) - continue; - - if (tunnel->src_port->sw == in->sw && - tunnel->dst_port->sw == out->sw) { - group = tunnel->src_port->group; - if (group) { - tb_bandwidth_group_attach_port(group, in); - return group; - } - } - } - - /* Pick up next available group then */ - group = tb_find_free_bandwidth_group(tcm); - if (group) - tb_bandwidth_group_attach_port(group, in); - else - tb_port_warn(in, "no available bandwidth groups\n"); - - return group; -} - -static void tb_discover_bandwidth_group(struct tb_cm *tcm, struct tb_port *in, - struct tb_port *out) -{ - if (usb4_dp_port_bandwidth_mode_enabled(in)) { - int index, i; - - index = usb4_dp_port_group_id(in); - for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) { - if (tcm->groups[i].index == index) { - tb_bandwidth_group_attach_port(&tcm->groups[i], in); - return; - } - } - } - - tb_attach_bandwidth_group(tcm, in, out); -} - -static void tb_detach_bandwidth_group(struct tb_port *in) -{ - struct tb_bandwidth_group *group = in->group; - - if (group) { - in->group = NULL; - list_del_init(&in->group_list); - - tb_port_dbg(in, "detached from bandwidth group %d\n", group->index); - } -} - static void tb_handle_hotplug(struct work_struct *work); static void tb_queue_hotplug(struct tb *tb, u64 route, u8 port, bool unplug) @@ -472,34 +366,6 @@ static void tb_switch_discover_tunnels(struct tb_switch *sw, } } -static void tb_discover_tunnels(struct tb *tb) -{ - struct tb_cm *tcm = tb_priv(tb); - struct tb_tunnel *tunnel; - - tb_switch_discover_tunnels(tb->root_switch, &tcm->tunnel_list, true); - - list_for_each_entry(tunnel, &tcm->tunnel_list, list) { - if (tb_tunnel_is_pci(tunnel)) { - struct tb_switch *parent = tunnel->dst_port->sw; - - while (parent != tunnel->src_port->sw) { - parent->boot = true; - parent = tb_switch_parent(parent); - } - } else if (tb_tunnel_is_dp(tunnel)) { - struct tb_port *in = tunnel->src_port; - struct tb_port *out = tunnel->dst_port; - - /* Keep the domain from powering down */ - pm_runtime_get_sync(&in->sw->dev); - pm_runtime_get_sync(&out->sw->dev); - - tb_discover_bandwidth_group(tcm, in, out); - } - } -} - static int tb_port_configure_xdomain(struct tb_port *port, struct tb_xdomain *xd) { if (tb_switch_is_usb4(port->sw)) @@ -1464,6 +1330,231 @@ static void tb_scan_port(struct tb_port *port) } } +static void +tb_recalc_estimated_bandwidth_for_group(struct tb_bandwidth_group *group) +{ + struct tb_tunnel *first_tunnel; + struct tb *tb = group->tb; + struct tb_port *in; + int ret; + + tb_dbg(tb, "re-calculating bandwidth estimation for group %u\n", + group->index); + + first_tunnel = NULL; + list_for_each_entry(in, &group->ports, group_list) { + int estimated_bw, estimated_up, estimated_down; + struct tb_tunnel *tunnel; + struct tb_port *out; + + if (!usb4_dp_port_bandwidth_mode_enabled(in)) + continue; + + tunnel = tb_find_tunnel(tb, TB_TUNNEL_DP, in, NULL); + if (WARN_ON(!tunnel)) + break; + + if (!first_tunnel) { + /* + * Since USB3 bandwidth is shared by all DP + * tunnels under the host router USB4 port, even + * if they do not begin from the host router, we + * can release USB3 bandwidth just once and not + * for each tunnel separately. 
+ */ + first_tunnel = tunnel; + ret = tb_release_unused_usb3_bandwidth(tb, + first_tunnel->src_port, first_tunnel->dst_port); + if (ret) { + tb_tunnel_warn(tunnel, + "failed to release unused bandwidth\n"); + break; + } + } + + out = tunnel->dst_port; + ret = tb_available_bandwidth(tb, in, out, &estimated_up, + &estimated_down, true); + if (ret) { + tb_tunnel_warn(tunnel, + "failed to re-calculate estimated bandwidth\n"); + break; + } + + /* + * Estimated bandwidth includes: + * - already allocated bandwidth for the DP tunnel + * - available bandwidth along the path + * - bandwidth allocated for USB 3.x but not used. + */ + if (tb_port_path_direction_downstream(in, out)) + estimated_bw = estimated_down; + else + estimated_bw = estimated_up; + + if (usb4_dp_port_set_estimated_bandwidth(in, estimated_bw)) + tb_tunnel_warn(tunnel, + "failed to update estimated bandwidth\n"); + } + + if (first_tunnel) + tb_reclaim_usb3_bandwidth(tb, first_tunnel->src_port, + first_tunnel->dst_port); + + tb_dbg(tb, "bandwidth estimation for group %u done\n", group->index); +} + +static void tb_recalc_estimated_bandwidth(struct tb *tb) +{ + struct tb_cm *tcm = tb_priv(tb); + int i; + + tb_dbg(tb, "bandwidth consumption changed, re-calculating estimated bandwidth\n"); + + for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) { + struct tb_bandwidth_group *group = &tcm->groups[i]; + + if (!list_empty(&group->ports)) + tb_recalc_estimated_bandwidth_for_group(group); + } + + tb_dbg(tb, "bandwidth re-calculation done\n"); +} + +static void tb_init_bandwidth_groups(struct tb_cm *tcm) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) { + struct tb_bandwidth_group *group = &tcm->groups[i]; + + group->tb = tcm_to_tb(tcm); + group->index = i + 1; + INIT_LIST_HEAD(&group->ports); + } +} + +static void tb_bandwidth_group_attach_port(struct tb_bandwidth_group *group, + struct tb_port *in) +{ + if (!group || WARN_ON(in->group)) + return; + + in->group = group; + list_add_tail(&in->group_list, &group->ports); + + tb_port_dbg(in, "attached to bandwidth group %d\n", group->index); +} + +static struct tb_bandwidth_group *tb_find_free_bandwidth_group(struct tb_cm *tcm) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) { + struct tb_bandwidth_group *group = &tcm->groups[i]; + + if (list_empty(&group->ports)) + return group; + } + + return NULL; +} + +static struct tb_bandwidth_group * +tb_attach_bandwidth_group(struct tb_cm *tcm, struct tb_port *in, + struct tb_port *out) +{ + struct tb_bandwidth_group *group; + struct tb_tunnel *tunnel; + + /* + * Find all DP tunnels that go through all the same USB4 links + * as this one. Because we always setup tunnels the same way we + * can just check for the routers at both ends of the tunnels + * and if they are the same we have a match. 
+ */ + list_for_each_entry(tunnel, &tcm->tunnel_list, list) { + if (!tb_tunnel_is_dp(tunnel)) + continue; + + if (tunnel->src_port->sw == in->sw && + tunnel->dst_port->sw == out->sw) { + group = tunnel->src_port->group; + if (group) { + tb_bandwidth_group_attach_port(group, in); + return group; + } + } + } + + /* Pick up next available group then */ + group = tb_find_free_bandwidth_group(tcm); + if (group) + tb_bandwidth_group_attach_port(group, in); + else + tb_port_warn(in, "no available bandwidth groups\n"); + + return group; +} + +static void tb_discover_bandwidth_group(struct tb_cm *tcm, struct tb_port *in, + struct tb_port *out) +{ + if (usb4_dp_port_bandwidth_mode_enabled(in)) { + int index, i; + + index = usb4_dp_port_group_id(in); + for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) { + if (tcm->groups[i].index == index) { + tb_bandwidth_group_attach_port(&tcm->groups[i], in); + return; + } + } + } + + tb_attach_bandwidth_group(tcm, in, out); +} + +static void tb_detach_bandwidth_group(struct tb_port *in) +{ + struct tb_bandwidth_group *group = in->group; + + if (group) { + in->group = NULL; + list_del_init(&in->group_list); + + tb_port_dbg(in, "detached from bandwidth group %d\n", group->index); + } +} + +static void tb_discover_tunnels(struct tb *tb) +{ + struct tb_cm *tcm = tb_priv(tb); + struct tb_tunnel *tunnel; + + tb_switch_discover_tunnels(tb->root_switch, &tcm->tunnel_list, true); + + list_for_each_entry(tunnel, &tcm->tunnel_list, list) { + if (tb_tunnel_is_pci(tunnel)) { + struct tb_switch *parent = tunnel->dst_port->sw; + + while (parent != tunnel->src_port->sw) { + parent->boot = true; + parent = tb_switch_parent(parent); + } + } else if (tb_tunnel_is_dp(tunnel)) { + struct tb_port *in = tunnel->src_port; + struct tb_port *out = tunnel->dst_port; + + /* Keep the domain from powering down */ + pm_runtime_get_sync(&in->sw->dev); + pm_runtime_get_sync(&out->sw->dev); + + tb_discover_bandwidth_group(tcm, in, out); + } + } +} + static void tb_deactivate_and_free_tunnel(struct tb_tunnel *tunnel) { struct tb_port *src_port, *dst_port; @@ -1605,101 +1696,6 @@ static struct tb_port *tb_find_pcie_down(struct tb_switch *sw, return tb_find_unused_port(sw, TB_TYPE_PCIE_DOWN); } -static void -tb_recalc_estimated_bandwidth_for_group(struct tb_bandwidth_group *group) -{ - struct tb_tunnel *first_tunnel; - struct tb *tb = group->tb; - struct tb_port *in; - int ret; - - tb_dbg(tb, "re-calculating bandwidth estimation for group %u\n", - group->index); - - first_tunnel = NULL; - list_for_each_entry(in, &group->ports, group_list) { - int estimated_bw, estimated_up, estimated_down; - struct tb_tunnel *tunnel; - struct tb_port *out; - - if (!usb4_dp_port_bandwidth_mode_enabled(in)) - continue; - - tunnel = tb_find_tunnel(tb, TB_TUNNEL_DP, in, NULL); - if (WARN_ON(!tunnel)) - break; - - if (!first_tunnel) { - /* - * Since USB3 bandwidth is shared by all DP - * tunnels under the host router USB4 port, even - * if they do not begin from the host router, we - * can release USB3 bandwidth just once and not - * for each tunnel separately. 
- */ - first_tunnel = tunnel; - ret = tb_release_unused_usb3_bandwidth(tb, - first_tunnel->src_port, first_tunnel->dst_port); - if (ret) { - tb_tunnel_warn(tunnel, - "failed to release unused bandwidth\n"); - break; - } - } - - out = tunnel->dst_port; - ret = tb_available_bandwidth(tb, in, out, &estimated_up, - &estimated_down, true); - if (ret) { - tb_tunnel_warn(tunnel, - "failed to re-calculate estimated bandwidth\n"); - break; - } - - /* - * Estimated bandwidth includes: - * - already allocated bandwidth for the DP tunnel - * - available bandwidth along the path - * - bandwidth allocated for USB 3.x but not used. - */ - tb_tunnel_dbg(tunnel, - "re-calculated estimated bandwidth %u/%u Mb/s\n", - estimated_up, estimated_down); - - if (tb_port_path_direction_downstream(in, out)) - estimated_bw = estimated_down; - else - estimated_bw = estimated_up; - - if (usb4_dp_port_set_estimated_bandwidth(in, estimated_bw)) - tb_tunnel_warn(tunnel, - "failed to update estimated bandwidth\n"); - } - - if (first_tunnel) - tb_reclaim_usb3_bandwidth(tb, first_tunnel->src_port, - first_tunnel->dst_port); - - tb_dbg(tb, "bandwidth estimation for group %u done\n", group->index); -} - -static void tb_recalc_estimated_bandwidth(struct tb *tb) -{ - struct tb_cm *tcm = tb_priv(tb); - int i; - - tb_dbg(tb, "bandwidth consumption changed, re-calculating estimated bandwidth\n"); - - for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) { - struct tb_bandwidth_group *group = &tcm->groups[i]; - - if (!list_empty(&group->ports)) - tb_recalc_estimated_bandwidth_for_group(group); - } - - tb_dbg(tb, "bandwidth re-calculation done\n"); -} - static struct tb_port *tb_find_dp_out(struct tb *tb, struct tb_port *in) { struct tb_port *host_port, *port; From patchwork Fri Feb 9 14:13:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551370 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 579156EB4A for ; Fri, 9 Feb 2024 14:13:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488027; cv=none; b=Pm4/PaZ0QUkgDdbDT16XJqVe5EWaYqUl4IS70FkSuSRmoYdsjAEj1hu7tz5LpyDAK+pLJfHunUAGUZaD7dQS4S5zQimCibmtycpwKGoNeQ8h57otcfLunAHTGpilQMDebgPUp0dAcI1NReVe2cgMoLL/PM57N3slQL/M8CrmFgg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488027; c=relaxed/simple; bh=TecJGJwUkcKh9NNygULsxNjAJYDtDB6GX73SZGMJyPg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=pE53Z6IBLWPTsDLLeP/7xRjZ9TGqXP5rhXhKGJdxidCB9Qlt4jO7TAnbIt7uqy4CWmamtc/L1Qa7AILDGaBjIfHtY/aJteEWOoRwf5UjH6Mgd0lj8pQAD19fZwp9I6x12/WT2GgdYeHNno0bQJpzOSX0MSMDsfP1RCjztjl/kM4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=ILWUk9QZ; arc=none smtp.client-ip=192.198.163.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com 
header.i=@intel.com header.b="ILWUk9QZ" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1707488025; x=1739024025; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=TecJGJwUkcKh9NNygULsxNjAJYDtDB6GX73SZGMJyPg=; b=ILWUk9QZ9rffApWRjWwFM99kwwDE//XSkF97rxGTs/AM5Uf9nfJQm+dy 0pO3SJPhRjTwLY+xGTpfv4VZQXkuBO2R/qvM0w/FsfDUtv1HAYLmT2L9J DMfqszyY7LDVRIC/XKvUX4+UgILb4tnSFokrtCInyyJN/wDv620aRj2pv xfMzwbKBXUt4WLRCB1WhUQoBGUTXtuH2k19njeCShdgr/F5X0/D+JbH9A oNoybQxsha8KRBqJuh7zJnQDlUWmmaAF4kiMzXTjZguKC+UNH83kqertR OUCJzI8CXVKpKwdcY8k0xznQi7fVaSYgYViHXGq4yHoesm7dYi4BOtDP9 w==; X-IronPort-AV: E=McAfee;i="6600,9927,10978"; a="12082139" X-IronPort-AV: E=Sophos;i="6.05,257,1701158400"; d="scan'208";a="12082139" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmvoesa105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Feb 2024 06:13:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10978"; a="934434452" X-IronPort-AV: E=Sophos;i="6.05,257,1701158400"; d="scan'208";a="934434452" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga001.fm.intel.com with ESMTP; 09 Feb 2024 06:13:40 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id E55564C4; Fri, 9 Feb 2024 16:13:35 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 07/10] thunderbolt: Introduce tb_tunnel_direction_downstream() Date: Fri, 9 Feb 2024 16:13:32 +0200 Message-ID: <20240209141335.2286786-8-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> Precedence: bulk X-Mailing-List: linux-usb@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 This helper takes tunnel as parameter. Convert existing code to call this where possible. No functional changes. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 9 +++------ drivers/thunderbolt/tunnel.c | 23 +++++++++-------------- drivers/thunderbolt/tunnel.h | 6 ++++++ 3 files changed, 18 insertions(+), 20 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index d23a80339a8d..e664045ad41c 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -1387,7 +1387,7 @@ tb_recalc_estimated_bandwidth_for_group(struct tb_bandwidth_group *group) * - available bandwidth along the path * - bandwidth allocated for USB 3.x but not used. 
*/ - if (tb_port_path_direction_downstream(in, out)) + if (tb_tunnel_direction_downstream(tunnel)) estimated_bw = estimated_down; else estimated_bw = estimated_up; @@ -2388,11 +2388,11 @@ static void tb_handle_dp_bandwidth_request(struct work_struct *work) { struct tb_hotplug_event *ev = container_of(work, typeof(*ev), work); int requested_bw, requested_up, requested_down, ret; - struct tb_port *in, *out; struct tb_tunnel *tunnel; struct tb *tb = ev->tb; struct tb_cm *tcm = tb_priv(tb); struct tb_switch *sw; + struct tb_port *in; pm_runtime_get_sync(&tb->dev); @@ -2456,10 +2456,7 @@ static void tb_handle_dp_bandwidth_request(struct work_struct *work) tb_port_dbg(in, "requested bandwidth %d Mb/s\n", requested_bw); - - out = tunnel->dst_port; - - if (tb_port_path_direction_downstream(in, out)) { + if (tb_tunnel_direction_downstream(tunnel)) { requested_up = -1; requested_down = requested_bw; } else { diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c index a766ab297064..e02b34654d29 100644 --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c @@ -706,7 +706,7 @@ static int tb_dp_xchg_caps(struct tb_tunnel *tunnel) "DP OUT maximum supported bandwidth %u Mb/s x%u = %u Mb/s\n", out_rate, out_lanes, bw); - if (tb_port_path_direction_downstream(in, out)) + if (tb_tunnel_direction_downstream(tunnel)) max_bw = tunnel->max_down; else max_bw = tunnel->max_up; @@ -831,7 +831,7 @@ static int tb_dp_bandwidth_alloc_mode_enable(struct tb_tunnel *tunnel) * max_up/down fields. For discovery we just read what the * estimation was set to. */ - if (tb_port_path_direction_downstream(in, out)) + if (tb_tunnel_direction_downstream(tunnel)) estimated_bw = tunnel->max_down; else estimated_bw = tunnel->max_up; @@ -971,7 +971,6 @@ static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up, int *consumed_down) { - struct tb_port *out = tunnel->dst_port; struct tb_port *in = tunnel->src_port; int ret, allocated_bw, max_bw_rounded; @@ -993,7 +992,7 @@ static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel, if (allocated_bw == max_bw_rounded) allocated_bw = ret; - if (tb_port_path_direction_downstream(in, out)) { + if (tb_tunnel_direction_downstream(tunnel)) { *consumed_up = 0; *consumed_down = allocated_bw; } else { @@ -1007,7 +1006,6 @@ static int tb_dp_bandwidth_mode_consumed_bandwidth(struct tb_tunnel *tunnel, static int tb_dp_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up, int *allocated_down) { - struct tb_port *out = tunnel->dst_port; struct tb_port *in = tunnel->src_port; /* @@ -1029,7 +1027,7 @@ static int tb_dp_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up if (allocated_bw == max_bw_rounded) allocated_bw = ret; - if (tb_port_path_direction_downstream(in, out)) { + if (tb_tunnel_direction_downstream(tunnel)) { *allocated_up = 0; *allocated_down = allocated_bw; } else { @@ -1046,7 +1044,6 @@ static int tb_dp_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up, int *alloc_down) { - struct tb_port *out = tunnel->dst_port; struct tb_port *in = tunnel->src_port; int max_bw_rounded, ret, tmp; @@ -1057,7 +1054,7 @@ static int tb_dp_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up, if (ret < 0) return ret; - if (tb_port_path_direction_downstream(in, out)) { + if (tb_tunnel_direction_downstream(tunnel)) { tmp = min(*alloc_down, max_bw_rounded); ret = usb4_dp_port_allocate_bandwidth(in, tmp); if (ret) @@ 
-1143,17 +1140,16 @@ static int tb_dp_read_cap(struct tb_tunnel *tunnel, unsigned int cap, u32 *rate, static int tb_dp_maximum_bandwidth(struct tb_tunnel *tunnel, int *max_up, int *max_down) { - struct tb_port *in = tunnel->src_port; int ret; - if (!usb4_dp_port_bandwidth_mode_enabled(in)) + if (!usb4_dp_port_bandwidth_mode_enabled(tunnel->src_port)) return -EOPNOTSUPP; ret = tb_dp_bandwidth_mode_maximum_bandwidth(tunnel, NULL); if (ret < 0) return ret; - if (tb_port_path_direction_downstream(in, tunnel->dst_port)) { + if (tb_tunnel_direction_downstream(tunnel)) { *max_up = 0; *max_down = ret; } else { @@ -1167,8 +1163,7 @@ static int tb_dp_maximum_bandwidth(struct tb_tunnel *tunnel, int *max_up, static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up, int *consumed_down) { - struct tb_port *in = tunnel->src_port; - const struct tb_switch *sw = in->sw; + const struct tb_switch *sw = tunnel->src_port->sw; u32 rate = 0, lanes = 0; int ret; @@ -1214,7 +1209,7 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up, return 0; } - if (tb_port_path_direction_downstream(in, tunnel->dst_port)) { + if (tb_tunnel_direction_downstream(tunnel)) { *consumed_up = 0; *consumed_down = tb_dp_bandwidth(rate, lanes); } else { diff --git a/drivers/thunderbolt/tunnel.h b/drivers/thunderbolt/tunnel.h index b4cff5482112..1a27ccd08b86 100644 --- a/drivers/thunderbolt/tunnel.h +++ b/drivers/thunderbolt/tunnel.h @@ -139,6 +139,12 @@ static inline bool tb_tunnel_is_usb3(const struct tb_tunnel *tunnel) return tunnel->type == TB_TUNNEL_USB3; } +static inline bool tb_tunnel_direction_downstream(const struct tb_tunnel *tunnel) +{ + return tb_port_path_direction_downstream(tunnel->src_port, + tunnel->dst_port); +} + const char *tb_tunnel_type_name(const struct tb_tunnel *tunnel); #define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) 
\ From patchwork Fri Feb 9 14:13:33 2024 X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551372 From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 08/10] thunderbolt: Reserve released DisplayPort bandwidth for a group for 10 seconds Date: Fri, 9 Feb 2024 16:13:33 +0200 Message-ID: <20240209141335.2286786-9-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To:
<20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> Precedence: bulk X-Mailing-List: linux-usb@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The USB4 spec says that the Connection Manager should reserve the bandwidth that is released in the same group for 10 seconds before it can be shared with other groups. Add support for this. We also delay the symmetric transition by that same 10 seconds to avoid any unnecessary transitions (i.e if the released bandwidth is used by another DisplayPort tunnel in the same group the link can stay asymmetric the whole time). Signed-off-by: Mika Westerberg --- drivers/thunderbolt/domain.c | 4 + drivers/thunderbolt/tb.c | 199 ++++++++++++++++++++++++++++++----- drivers/thunderbolt/tb.h | 10 ++ 3 files changed, 184 insertions(+), 29 deletions(-) diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c index ee8a894bd70d..d7abb8c445aa 100644 --- a/drivers/thunderbolt/domain.c +++ b/drivers/thunderbolt/domain.c @@ -506,6 +506,10 @@ void tb_domain_remove(struct tb *tb) mutex_unlock(&tb->lock); flush_workqueue(tb->wq); + + if (tb->cm_ops->deinit) + tb->cm_ops->deinit(tb); + device_unregister(&tb->dev); } diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index e664045ad41c..eda53567fa4a 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -17,6 +17,7 @@ #include "tunnel.h" #define TB_TIMEOUT 100 /* ms */ +#define TB_RELEASE_BW_TIMEOUT 10000 /* ms */ /* * Minimum bandwidth (in Mb/s) that is needed in the single transmitter/receiver @@ -547,6 +548,10 @@ static int tb_consumed_usb3_pcie_bandwidth(struct tb *tb, * Calculates consumed DP bandwidth at @port between path from @src_port * to @dst_port. Does not take tunnel starting from @src_port and ending * from @src_port into account. + * + * If there is bandwidth reserved for any of the groups between + * @src_port and @dst_port (but not yet used) that is also taken into + * account in the returned consumed bandwidth. */ static int tb_consumed_dp_bandwidth(struct tb *tb, struct tb_port *src_port, @@ -555,9 +560,11 @@ static int tb_consumed_dp_bandwidth(struct tb *tb, int *consumed_up, int *consumed_down) { + int group_reserved[MAX_GROUPS] = {}; struct tb_cm *tcm = tb_priv(tb); struct tb_tunnel *tunnel; - int ret; + bool downstream; + int i, ret; *consumed_up = *consumed_down = 0; @@ -566,6 +573,7 @@ static int tb_consumed_dp_bandwidth(struct tb *tb, * their consumed bandwidth from the available. */ list_for_each_entry(tunnel, &tcm->tunnel_list, list) { + const struct tb_bandwidth_group *group; int dp_consumed_up, dp_consumed_down; if (tb_tunnel_is_invalid(tunnel)) @@ -577,6 +585,15 @@ static int tb_consumed_dp_bandwidth(struct tb *tb, if (!tb_tunnel_port_on_path(tunnel, port)) continue; + /* + * Calculate what is reserved for groups crossing the + * same ports only once (as that is reserved for all the + * tunnels in the group). 
+ */ + group = tunnel->src_port->group; + if (group && group->reserved && !group_reserved[group->index]) + group_reserved[group->index] = group->reserved; + /* * Ignore the DP tunnel between src_port and dst_port * because it is the same tunnel and we may be @@ -595,6 +612,14 @@ static int tb_consumed_dp_bandwidth(struct tb *tb, *consumed_down += dp_consumed_down; } + downstream = tb_port_path_direction_downstream(src_port, dst_port); + for (i = 0; i < ARRAY_SIZE(group_reserved); i++) { + if (downstream) + *consumed_down += group_reserved[i]; + else + *consumed_up += group_reserved[i]; + } + return 0; } @@ -1047,8 +1072,6 @@ static int tb_configure_asym(struct tb *tb, struct tb_port *src_port, * @tb: Domain structure * @src_port: Source adapter to start the transition * @dst_port: Destination adapter - * @requested_up: New lower bandwidth request upstream (Mb/s) - * @requested_down: New lower bandwidth request downstream (Mb/s) * @keep_asym: Keep asymmetric link if preferred * * Goes over each link from @src_port to @dst_port and tries to @@ -1056,8 +1079,7 @@ static int tb_configure_asym(struct tb *tb, struct tb_port *src_port, * allows and link asymmetric preference is ignored (if @keep_asym is %false). */ static int tb_configure_sym(struct tb *tb, struct tb_port *src_port, - struct tb_port *dst_port, int requested_up, - int requested_down, bool keep_asym) + struct tb_port *dst_port, bool keep_asym) { bool clx = false, clx_disabled = false, downstream; struct tb_switch *sw; @@ -1096,10 +1118,10 @@ static int tb_configure_sym(struct tb *tb, struct tb_port *src_port, * guard band 10%) as the link was configured asymmetric * already. */ - if (consumed_down + requested_down >= asym_threshold) + if (consumed_down >= asym_threshold) continue; } else { - if (consumed_up + requested_up >= asym_threshold) + if (consumed_up >= asym_threshold) continue; } @@ -1172,7 +1194,7 @@ static void tb_configure_link(struct tb_port *down, struct tb_port *up, struct tb_port *host_port; host_port = tb_port_at(tb_route(sw), tb->root_switch); - tb_configure_sym(tb, host_port, up, 0, 0, false); + tb_configure_sym(tb, host_port, up, false); } /* Set the link configured */ @@ -1392,7 +1414,17 @@ tb_recalc_estimated_bandwidth_for_group(struct tb_bandwidth_group *group) else estimated_bw = estimated_up; - if (usb4_dp_port_set_estimated_bandwidth(in, estimated_bw)) + /* + * If there is reserved bandwidth for the group that is + * not yet released we report that too. 
+ */ + tb_tunnel_dbg(tunnel, + "re-calculated estimated bandwidth %u (+ %u reserved) = %u Mb/s\n", + estimated_bw, group->reserved, + estimated_bw + group->reserved); + + if (usb4_dp_port_set_estimated_bandwidth(in, + estimated_bw + group->reserved)) tb_tunnel_warn(tunnel, "failed to update estimated bandwidth\n"); } @@ -1421,6 +1453,54 @@ static void tb_recalc_estimated_bandwidth(struct tb *tb) tb_dbg(tb, "bandwidth re-calculation done\n"); } +static bool __release_group_bandwidth(struct tb_bandwidth_group *group) +{ + if (group->reserved) { + tb_dbg(group->tb, "group %d released total %d Mb/s\n", group->index, + group->reserved); + group->reserved = 0; + return true; + } + return false; +} + +static void __configure_group_sym(struct tb_bandwidth_group *group) +{ + struct tb_tunnel *tunnel; + struct tb_port *in; + + if (list_empty(&group->ports)) + return; + + /* + * All the tunnels in the group go through the same USB4 links + * so we find the first one here and pass the IN and OUT + * adapters to tb_configure_sym() which now transitions the + * links back to symmetric if bandwidth requirement < asym_threshold. + * + * We do this here to avoid unnecessary transitions (for example + * if the graphics released bandwidth for other tunnel in the + * same group). + */ + in = list_first_entry(&group->ports, struct tb_port, group_list); + tunnel = tb_find_tunnel(group->tb, TB_TUNNEL_DP, in, NULL); + if (tunnel) + tb_configure_sym(group->tb, in, tunnel->dst_port, true); +} + +static void tb_bandwidth_group_release_work(struct work_struct *work) +{ + struct tb_bandwidth_group *group = + container_of(work, typeof(*group), release_work.work); + struct tb *tb = group->tb; + + mutex_lock(&tb->lock); + if (__release_group_bandwidth(group)) + tb_recalc_estimated_bandwidth(tb); + __configure_group_sym(group); + mutex_unlock(&tb->lock); +} + static void tb_init_bandwidth_groups(struct tb_cm *tcm) { int i; @@ -1431,6 +1511,8 @@ static void tb_init_bandwidth_groups(struct tb_cm *tcm) group->tb = tcm_to_tb(tcm); group->index = i + 1; INIT_LIST_HEAD(&group->ports); + INIT_DELAYED_WORK(&group->release_work, + tb_bandwidth_group_release_work); } } @@ -1524,6 +1606,12 @@ static void tb_detach_bandwidth_group(struct tb_port *in) list_del_init(&in->group_list); tb_port_dbg(in, "detached from bandwidth group %d\n", group->index); + + /* No more tunnels so release the reserved bandwidth if any */ + if (list_empty(&group->ports)) { + cancel_delayed_work(&group->release_work); + __release_group_bandwidth(group); + } } } @@ -1582,7 +1670,7 @@ static void tb_deactivate_and_free_tunnel(struct tb_tunnel *tunnel) * If bandwidth on a link is < asym_threshold * transition the link to symmetric. 
*/ - tb_configure_sym(tb, src_port, dst_port, 0, 0, true); + tb_configure_sym(tb, src_port, dst_port, true); /* Now we can allow the domain to runtime suspend again */ pm_runtime_mark_last_busy(&dst_port->sw->dev); pm_runtime_put_autosuspend(&dst_port->sw->dev); @@ -2239,8 +2327,10 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, int allocated_up, allocated_down, available_up, available_down, ret; int requested_up_corrected, requested_down_corrected, granularity; int max_up, max_down, max_up_rounded, max_down_rounded; + struct tb_bandwidth_group *group; struct tb *tb = tunnel->tb; struct tb_port *in, *out; + bool downstream; ret = tb_tunnel_allocated_bandwidth(tunnel, &allocated_up, &allocated_down); if (ret) @@ -2304,21 +2394,44 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, goto fail; } + downstream = tb_tunnel_direction_downstream(tunnel); + group = in->group; + if ((*requested_up >= 0 && requested_up_corrected <= allocated_up) || (*requested_down >= 0 && requested_down_corrected <= allocated_down)) { - /* - * If bandwidth on a link is < asym_threshold transition - * the link to symmetric. - */ - tb_configure_sym(tb, in, out, *requested_up, *requested_down, true); - /* - * If requested bandwidth is less or equal than what is - * currently allocated to that tunnel we simply change - * the reservation of the tunnel. Since all the tunnels - * going out from the same USB4 port are in the same - * group the released bandwidth will be taken into - * account for the other tunnels automatically below. - */ + if (tunnel->bw_mode) { + int reserved; + /* + * If requested bandwidth is less or equal than + * what is currently allocated to that tunnel we + * simply change the reservation of the tunnel + * and add the released bandwidth for the group + * for the next 10s. Then we release it for + * others to use. + */ + if (downstream) + reserved = allocated_down - *requested_down; + else + reserved = allocated_up - *requested_up; + + if (reserved > 0) { + group->reserved += reserved; + tb_dbg(tb, "group %d reserved %d total %d Mb/s\n", + group->index, reserved, group->reserved); + + /* + * If it was not already pending, + * schedule release now. If it is then + * postpone it for the next 10s (unless + * it is already running in which case + * the 10s already expired and we should + * give the reserved back to others). + */ + mod_delayed_work(system_wq, &group->release_work, + msecs_to_jiffies(TB_RELEASE_BW_TIMEOUT)); + } + } + return tb_tunnel_alloc_bandwidth(tunnel, requested_up, requested_down); } @@ -2341,11 +2454,15 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, if (ret) goto reclaim; - tb_tunnel_dbg(tunnel, "bandwidth available for allocation %d/%d Mb/s\n", - available_up, available_down); + tb_tunnel_dbg(tunnel, "bandwidth available for allocation %d/%d (+ %u reserved) Mb/s\n", + available_up, available_down, group->reserved); + + if ((*requested_up >= 0 && + available_up + group->reserved >= requested_up_corrected) || + (*requested_down >= 0 && + available_down + group->reserved >= requested_down_corrected)) { + int released = 0; - if ((*requested_up >= 0 && available_up >= requested_up_corrected) || - (*requested_down >= 0 && available_down >= requested_down_corrected)) { /* * If bandwidth on a link is >= asym_threshold * transition the link to asymmetric. 
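
As a reading aid for the hunks above, here is a minimal user-space C model of the reservation policy they implement: when a tunnel in a bandwidth group asks for less than it currently holds, the difference is parked in the group's reserved pool and a 10 second timer is (re)armed; anything not claimed by a sibling tunnel within that window (TB_RELEASE_BW_TIMEOUT in the patch) is given back to the rest of the domain. The struct and function names below are invented for illustration, and a monotonic timestamp stands in for the kernel's delayed work item; this is a sketch of the policy, not the driver code.

#include <stdio.h>
#include <time.h>

#define RELEASE_BW_TIMEOUT_MS	10000	/* mirrors TB_RELEASE_BW_TIMEOUT */

/* Illustrative stand-in for a bandwidth group; not the driver's struct. */
struct bw_group {
	int reserved;		/* Mb/s parked for the group */
	long long expires_ms;	/* when the reservation lapses */
};

static long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

/* A tunnel shrank its allocation: park the delta and (re)arm the timer. */
static void park_released_bw(struct bw_group *group, int released)
{
	if (released <= 0)
		return;
	group->reserved += released;
	group->expires_ms = now_ms() + RELEASE_BW_TIMEOUT_MS;
	printf("parked %d Mb/s, total reserved %d Mb/s\n", released,
	       group->reserved);
}

/* Another tunnel in the same group grows: it may dip into the pool first. */
static int claim_reserved_bw(struct bw_group *group, int shortfall)
{
	int claimed = shortfall < group->reserved ? shortfall : group->reserved;

	group->reserved -= claimed;
	return claimed;
}

/* Models the delayed worker: give anything unclaimed back after 10 s. */
static void maybe_expire(struct bw_group *group)
{
	if (group->reserved && now_ms() >= group->expires_ms) {
		printf("released %d Mb/s back to the domain\n", group->reserved);
		group->reserved = 0;
	}
}

int main(void)
{
	struct bw_group group = { 0, 0 };

	/* A tunnel dropped from 4000 to 2000 Mb/s: park the 2000 Mb/s delta. */
	park_released_bw(&group, 2000);

	/* A sibling tunnel grows and reuses 1500 Mb/s of the parked amount. */
	printf("claimed %d Mb/s\n", claim_reserved_bw(&group, 1500));

	/*
	 * Nothing expires yet; in the driver the delayed worker would hand
	 * the remaining 500 Mb/s back to other groups after 10 seconds.
	 */
	maybe_expire(&group);
	return 0;
}
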
@@ -2353,7 +2470,7 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, ret = tb_configure_asym(tb, in, out, *requested_up, *requested_down); if (ret) { - tb_configure_sym(tb, in, out, 0, 0, true); + tb_configure_sym(tb, in, out, true); goto fail; } @@ -2361,7 +2478,20 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tunnel, int *requested_up, requested_down); if (ret) { tb_tunnel_warn(tunnel, "failed to allocate bandwidth\n"); - tb_configure_sym(tb, in, out, 0, 0, true); + tb_configure_sym(tb, in, out, true); + } + + if (downstream) { + if (*requested_down > available_down) + released = *requested_down - available_down; + } else { + if (*requested_up > available_up) + released = *requested_up - available_up; + } + if (released) { + group->reserved -= released; + tb_dbg(tb, "group %d released %d total %d Mb/s\n", + group->index, released, group->reserved); } } else { ret = -ENOBUFS; @@ -2585,6 +2715,16 @@ static void tb_stop(struct tb *tb) tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */ } +static void tb_deinit(struct tb *tb) +{ + struct tb_cm *tcm = tb_priv(tb); + int i; + + /* Cancel all the release bandwidth workers */ + for (i = 0; i < ARRAY_SIZE(tcm->groups); i++) + cancel_delayed_work_sync(&tcm->groups[i].release_work); +} + static int tb_scan_finalize_switch(struct device *dev, void *data) { if (tb_is_switch(dev)) { @@ -2893,6 +3033,7 @@ static int tb_runtime_resume(struct tb *tb) static const struct tb_cm_ops tb_cm_ops = { .start = tb_start, .stop = tb_stop, + .deinit = tb_deinit, .suspend_noirq = tb_suspend_noirq, .resume_noirq = tb_resume_noirq, .freeze_noirq = tb_freeze_noirq, diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index d0dfbf040356..1bbbeb034e0e 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -217,6 +217,11 @@ struct tb_switch { * @tb: Pointer to the domain the group belongs to * @index: Index of the group (aka Group_ID). Valid values %1-%7 * @ports: DP IN adapters belonging to this group are linked here + * @reserved: Bandwidth released by one tunnel in the group, available + * to others. This is reported as part of estimated_bw for + * the group. + * @release_work: Worker to release the @reserved if it is not used by + * any of the tunnels. * * Any tunnel that requires isochronous bandwidth (that's DP for now) is * attached to a bandwidth group. All tunnels going through the same @@ -227,6 +232,8 @@ struct tb_bandwidth_group { struct tb *tb; int index; struct list_head ports; + int reserved; + struct delayed_work release_work; }; /** @@ -452,6 +459,8 @@ struct tb_path { * ICM to send driver ready message to the firmware. * @start: Starts the domain * @stop: Stops the domain + * @deinit: Perform any cleanup after the domain is stopped but before + * it is unregistered. Called without @tb->lock taken. Optional. 
* @suspend_noirq: Connection manager specific suspend_noirq * @resume_noirq: Connection manager specific resume_noirq * @suspend: Connection manager specific suspend @@ -485,6 +494,7 @@ struct tb_cm_ops { int (*driver_ready)(struct tb *tb); int (*start)(struct tb *tb, bool reset); void (*stop)(struct tb *tb); + void (*deinit)(struct tb *tb); int (*suspend_noirq)(struct tb *tb); int (*resume_noirq)(struct tb *tb); int (*suspend)(struct tb *tb); From patchwork Fri Feb 9 14:13:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551371 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 773046A8A2 for ; Fri, 9 Feb 2024 14:13:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488028; cv=none; b=b3uR2+EpoeRbmlaUDlmNskOg7SU1K6mTju7y3f7zfY1a1n1i+OMltZzzMTf/gT1OcXYxedpUy+s9gVNsUna1AA77eO8bulR5p8rZkhgWtQYN7juQZ1iAS6ZNlvHw0ODOlPI8ip1RFx3impenRyRSxIVxNzRzc+H8e6Htj8agBYE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488028; c=relaxed/simple; bh=antzI5cxyctRSp9XaM5d1CCzMssgryozW9f3L9pwDCc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Ev+PjVx8wbdHPeJ0PNgREbVsAvr4BKvy8HUL1ePtszncpQeIdIDhZ3JNb1LYw6l1oX/2i8+3/ku05FYDNnv8YrmV7KpvUwp0iIYIDM3/e8oEpSWyHEnY7mIX1JOSerRC2o/nqWDBhfdpQllZAjJqZCa3tC25Us2EVs+Sj5pya0E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=c676u/ex; arc=none smtp.client-ip=192.198.163.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="c676u/ex" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1707488026; x=1739024026; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=antzI5cxyctRSp9XaM5d1CCzMssgryozW9f3L9pwDCc=; b=c676u/exPmA9/fQHEYs41tNCtDEI+PFjRhD2/OeeYdYqZGAkDbFyjIVP mfV0L6pTESHGCRXCBeDTQs9JSX6v8C7Hs4ZIyago/PLX69/Iu/bTTgrm/ et6Aphkjph3Xbj2NqNsziJB57DxqtUV0NdQPE1Ywr81r0VEkMyni6AIub KB8b/5eHSV6dJm2TnmVxCz1ShjKrx0bsR4TVkWxva/V4hkz7sJrTIhQ6T /9tNnAgyej6OI+J+Lf5kF/5ySj/GzuPU/FNOzqpqdyZv1NIxetEAIoxCo baarLuag5HX+Dpgnfo+PbXgTU2fsmCelo/fHQJePIKuuMdBChUXY0pk5g w==; X-IronPort-AV: E=McAfee;i="6600,9927,10978"; a="12082135" X-IronPort-AV: E=Sophos;i="6.05,257,1701158400"; d="scan'208";a="12082135" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmvoesa105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Feb 2024 06:13:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10978"; a="934434450" X-IronPort-AV: E=Sophos;i="6.05,257,1701158400"; d="scan'208";a="934434450" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga001.fm.intel.com with ESMTP; 09 Feb 2024 06:13:40 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id 
06FC7B84; Fri, 9 Feb 2024 16:13:36 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 09/10] thunderbolt: Calculate DisplayPort tunnel bandwidth after DPRX capabilities read Date: Fri, 9 Feb 2024 16:13:34 +0200 Message-ID: <20240209141335.2286786-10-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> Precedence: bulk X-Mailing-List: linux-usb@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Gil Fine According to USB4 Connection Manager guide, after DisplayPort tunnel was setup, the DPRX capabilities read is performed by the DPTX. According to VESA spec, this shall be completed within 5 seconds after the DisplayPort tunnel was setup. Hence, if the bit: DPRX Capabilities Read Done, was not set to '1' by this time, we timeout and fail calculating DisplayPort tunnel consumed bandwidth. Signed-off-by: Gil Fine Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tunnel.c | 16 ++++++---------- 1 file changed, 6 insertions(+), 10 deletions(-) diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c index e02b34654d29..cb6609a56a03 100644 --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c @@ -1184,17 +1184,13 @@ static int tb_dp_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up, /* * Then see if the DPRX negotiation is ready and if yes * return that bandwidth (it may be smaller than the - * reduced one). Otherwise return the remote (possibly - * reduced) caps. + * reduced one). According to VESA spec, the DPRX + * negotiation shall compete in 5 seconds after tunnel + * established. We give it 100ms extra just in case. 
*/ - ret = tb_dp_wait_dprx(tunnel, 150); - if (ret) { - if (ret == -ETIMEDOUT) - ret = tb_dp_read_cap(tunnel, DP_REMOTE_CAP, - &rate, &lanes); - if (ret) - return ret; - } + ret = tb_dp_wait_dprx(tunnel, 5100); + if (ret) + return ret; ret = tb_dp_read_cap(tunnel, DP_COMMON_CAP, &rate, &lanes); if (ret) return ret; From patchwork Fri Feb 9 14:13:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13551369 Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 229796A8AB for ; Fri, 9 Feb 2024 14:13:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.11 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488026; cv=none; b=d6oKWLZnuCEWDkDC/1bNE3Q/aWWKNB2xyUmMcglQhj91TzNYiWpBdT2wy9pQW5K9pq5v7guEPEcw+k3mjTvKn7Us+xc3MygHteroj7+gfKPI85VWEx4M60Dcxmzfas+FNp2K848Jf0WAYwKMIkj1K6odSFVkCJq6lMDSZjJpzcU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1707488026; c=relaxed/simple; bh=iTDKHIBodDKuw0g9lRLNElQMT2IsQWp6pRpcPWav0AE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=sWy0LbigMTWYdBQNUDkTpIBxc7lhQvmYDLqiMh0Xzslj0sinlIsWM0/JZe5wc2uJj5TRGM3dZe3u361IMEz4EZs75qEq6L5nZFHvetZkofhTRMDvcQqvtlRyZKQQu3xm43KMxjhvG/X4XwOOMmTYd6PMYFy6kjsp0Bz7o9WAu3s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com; spf=none smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=FKs/TPh1; arc=none smtp.client-ip=192.198.163.11 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=none smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="FKs/TPh1" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1707488025; x=1739024025; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=iTDKHIBodDKuw0g9lRLNElQMT2IsQWp6pRpcPWav0AE=; b=FKs/TPh19UvJQuJDj75TRKuRht8X9rM01xqzW+jVyGLJjVA6cKu1aH/i s35BHs6GOuSFaQUEgePLaxPVyjkVaGj8xNSSl66giRDnDSpQJ3/UK6gA9 /IYQSPA02u6vYKIoZuhA03JjDhlJhQNIePn6cwfdpUiYROBcqnQF/G9td 3PRc2qa303uzWSxnaNQH+tKhG8JAUjelTC13iULVpseHalOtDLrzZb6Yx zj3iosqzk754rleFYnj/fCGKQsohN9YTP4082NDorjTLRGCxyoVB2hQu7 f4zi/bMwx2eaFlEBCjfGZWBhj1hjUqtq1AhSohviJzdbiaGm9xi7Asb5O A==; X-IronPort-AV: E=McAfee;i="6600,9927,10978"; a="12082132" X-IronPort-AV: E=Sophos;i="6.05,257,1701158400"; d="scan'208";a="12082132" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmvoesa105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Feb 2024 06:13:42 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10978"; a="934434446" X-IronPort-AV: E=Sophos;i="6.05,257,1701158400"; d="scan'208";a="934434446" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga001.fm.intel.com with ESMTP; 09 Feb 2024 06:13:40 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id 10FD1F23; Fri, 9 Feb 2024 16:13:36 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , 
Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 10/10] thunderbolt: Improve DisplayPort tunnel setup process to be more robust Date: Fri, 9 Feb 2024 16:13:35 +0200 Message-ID: <20240209141335.2286786-11-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> References: <20240209141335.2286786-1-mika.westerberg@linux.intel.com> Precedence: bulk X-Mailing-List: linux-usb@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Gil Fine After DisplayPort tunnel setup, we add verification that the DPRX capabilities read process completed. Otherwise, we bail out, teardown the tunnel, and try setup another DisplayPort tunnel using next available DP IN adapter. We do so till all DP IN adapters tried. This way, we avoid allocating DP IN adapter and (bandwidth for it) for unusable tunnel. Signed-off-by: Gil Fine Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 84 ++++++++++++++++++++-------------------- 1 file changed, 43 insertions(+), 41 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index eda53567fa4a..306c62c35a05 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -1821,48 +1821,14 @@ static struct tb_port *tb_find_dp_out(struct tb *tb, struct tb_port *in) return NULL; } -static bool tb_tunnel_one_dp(struct tb *tb) +static bool tb_tunnel_one_dp(struct tb *tb, struct tb_port *in, + struct tb_port *out) { int available_up, available_down, ret, link_nr; struct tb_cm *tcm = tb_priv(tb); - struct tb_port *port, *in, *out; int consumed_up, consumed_down; struct tb_tunnel *tunnel; - /* - * Find pair of inactive DP IN and DP OUT adapters and then - * establish a DP tunnel between them. - */ - tb_dbg(tb, "looking for DP IN <-> DP OUT pairs:\n"); - - in = NULL; - out = NULL; - list_for_each_entry(port, &tcm->dp_resources, list) { - if (!tb_port_is_dpin(port)) - continue; - - if (tb_port_is_enabled(port)) { - tb_port_dbg(port, "DP IN in use\n"); - continue; - } - - in = port; - tb_port_dbg(in, "DP IN available\n"); - - out = tb_find_dp_out(tb, port); - if (out) - break; - } - - if (!in) { - tb_dbg(tb, "no suitable DP IN adapter available, not tunneling\n"); - return false; - } - if (!out) { - tb_dbg(tb, "no suitable DP OUT adapter available, not tunneling\n"); - return false; - } - /* * This is only applicable to links that are not bonded (so * when Thunderbolt 1 hardware is involved somewhere in the @@ -1923,15 +1889,19 @@ static bool tb_tunnel_one_dp(struct tb *tb) goto err_free; } + /* If fail reading tunnel's consumed bandwidth, tear it down */ + ret = tb_tunnel_consumed_bandwidth(tunnel, &consumed_up, &consumed_down); + if (ret) + goto err_deactivate; + list_add_tail(&tunnel->list, &tcm->tunnel_list); - tb_reclaim_usb3_bandwidth(tb, in, out); + tb_reclaim_usb3_bandwidth(tb, in, out); /* * Transition the links to asymmetric if the consumption exceeds * the threshold. 
*/ - if (!tb_tunnel_consumed_bandwidth(tunnel, &consumed_up, &consumed_down)) - tb_configure_asym(tb, in, out, consumed_up, consumed_down); + tb_configure_asym(tb, in, out, consumed_up, consumed_down); /* Update the domain with the new bandwidth estimation */ tb_recalc_estimated_bandwidth(tb); @@ -1943,6 +1913,8 @@ static bool tb_tunnel_one_dp(struct tb *tb) tb_increase_tmu_accuracy(tunnel); return true; +err_deactivate: + tb_tunnel_deactivate(tunnel); err_free: tb_tunnel_free(tunnel); err_reclaim_usb: @@ -1962,13 +1934,43 @@ static bool tb_tunnel_one_dp(struct tb *tb) static void tb_tunnel_dp(struct tb *tb) { + struct tb_cm *tcm = tb_priv(tb); + struct tb_port *port, *in, *out; + if (!tb_acpi_may_tunnel_dp()) { tb_dbg(tb, "DP tunneling disabled, not creating tunnel\n"); return; } - while (tb_tunnel_one_dp(tb)) - ; + /* + * Find pair of inactive DP IN and DP OUT adapters and then + * establish a DP tunnel between them. + */ + tb_dbg(tb, "looking for DP IN <-> DP OUT pairs:\n"); + + in = NULL; + out = NULL; + list_for_each_entry(port, &tcm->dp_resources, list) { + if (!tb_port_is_dpin(port)) + continue; + + if (tb_port_is_enabled(port)) { + tb_port_dbg(port, "DP IN in use\n"); + continue; + } + + in = port; + tb_port_dbg(in, "DP IN available\n"); + + out = tb_find_dp_out(tb, port); + if (out) + tb_tunnel_one_dp(tb, in, out); + else + tb_port_dbg(in, "no suitable DP OUT adapter available, not tunneling\n"); + } + + if (!in) + tb_dbg(tb, "no suitable DP IN adapter available, not tunneling\n"); } static void tb_dp_resource_unavailable(struct tb *tb, struct tb_port *port)
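
To summarize the restructuring in this last patch, the DP IN/DP OUT pairing loop moves into tb_tunnel_dp() and tb_tunnel_one_dp() now tears the tunnel down when the consumed-bandwidth query (and therefore the DPRX capabilities read) fails, so the next DP IN adapter can be tried instead. The following is a self-contained user-space model of that control flow only; all names are invented, and the DP OUT pairing, USB3 bandwidth reclaim, and asymmetric link handling are deliberately omitted.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a DP IN adapter; not the driver's struct. */
struct adapter {
	const char *name;
	bool is_dp_in;
	bool in_use;
	bool dprx_ok;	/* whether the sink finished the DPRX read */
};

static bool setup_tunnel(struct adapter *in)
{
	printf("%s: tunnel activated\n", in->name);
	return true;
}

static int consumed_bandwidth(const struct adapter *in)
{
	/* Fails when the DPRX capabilities read never completed in time. */
	return in->dprx_ok ? 0 : -1;
}

static void teardown_tunnel(struct adapter *in)
{
	printf("%s: DPRX read timed out, tearing tunnel down\n", in->name);
}

/* One attempt: set the tunnel up, verify DPRX, tear down on failure. */
static bool tunnel_one_dp(struct adapter *in)
{
	if (!setup_tunnel(in))
		return false;
	if (consumed_bandwidth(in) < 0) {
		teardown_tunnel(in);
		return false;
	}
	in->in_use = true;
	printf("%s: tunnel usable\n", in->name);
	return true;
}

/* Walk every free DP IN adapter instead of stopping at the first one. */
static void tunnel_dp(struct adapter *adapters, int count)
{
	for (int i = 0; i < count; i++) {
		struct adapter *in = &adapters[i];

		if (!in->is_dp_in || in->in_use)
			continue;
		tunnel_one_dp(in);
	}
}

int main(void)
{
	struct adapter adapters[] = {
		{ "DP IN 1", true, false, false },	/* DPRX never completes */
		{ "DP IN 2", true, false, true },	/* healthy sink */
	};

	tunnel_dp(adapters, 2);
	return 0;
}
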