From patchwork Mon May 29 10:04:06 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 01/20] thunderbolt: Introduce tb_switch_downstream_port()
Date: Mon, 29 May 2023 13:04:06 +0300
Message-Id: <20230529100425.6125-2-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

From: Gil Fine

Introduce a tb_switch_downstream_port() helper that returns the
downstream port of the parent switch that is connected to the upstream
port of the specified switch, and use it across the driver where
applicable. While at it, fix whitespace in a comment and rename
'downstream' to 'down' to be consistent with the rest of the driver.
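As a minimal usage sketch (assuming sw is a device router, i.e.
tb_route(sw) is non-zero), the helper replaces the open-coded lookup
used at the call sites:

	/* Open-coded lookup used at call sites before this patch: */
	struct tb_switch *parent = tb_switch_parent(sw);
	struct tb_port *down = tb_port_at(tb_route(sw), parent);

	/*
	 * Equivalent with the new helper. For the host router
	 * (!tb_route(sw)) it WARNs and returns NULL, see the hunk
	 * adding it to tb.h below.
	 */
	down = tb_switch_downstream_port(sw);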
Signed-off-by: Gil Fine
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/acpi.c   |  5 ++---
 drivers/thunderbolt/icm.c    | 24 ++++++++++--------------
 drivers/thunderbolt/switch.c | 19 +++++++------------
 drivers/thunderbolt/tb.c     |  8 +++-----
 drivers/thunderbolt/tb.h     | 14 ++++++++++++++
 drivers/thunderbolt/tmu.c    | 29 +++++++++++++----------------
 drivers/thunderbolt/usb4.c   |  9 ++++-----
 7 files changed, 53 insertions(+), 55 deletions(-)

diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
index 3514bf65b7a4..38fefd0e5268 100644
--- a/drivers/thunderbolt/acpi.c
+++ b/drivers/thunderbolt/acpi.c
@@ -296,16 +296,15 @@ static bool tb_acpi_bus_match(struct device *dev)
 static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
 {
+	struct tb_switch *parent_sw = tb_switch_parent(sw);
 	struct acpi_device *adev = NULL;
-	struct tb_switch *parent_sw;
 
 	/*
 	 * Device routers exists under the downstream facing USB4 port
 	 * of the parent router. Their _ADR is always 0.
 	 */
-	parent_sw = tb_switch_parent(sw);
 	if (parent_sw) {
-		struct tb_port *port = tb_port_at(tb_route(sw), parent_sw);
+		struct tb_port *port = tb_switch_downstream_port(sw);
 		struct acpi_device *port_adev;
 
 		port_adev = acpi_find_child_by_adr(ACPI_COMPANION(&parent_sw->dev),
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index 86521ebb2579..05274caf1466 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -644,13 +644,14 @@ static int add_switch(struct tb_switch *parent_sw, struct tb_switch *sw)
 	return ret;
 }
 
-static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw,
-			  u64 route, u8 connection_id, u8 connection_key,
-			  u8 link, u8 depth, bool boot)
+static void update_switch(struct tb_switch *sw, u64 route, u8 connection_id,
+			  u8 connection_key, u8 link, u8 depth, bool boot)
 {
+	struct tb_switch *parent_sw = tb_switch_parent(sw);
+
 	/* Disconnect from parent */
-	tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
-	/* Re-connect via updated port*/
+	tb_switch_downstream_port(sw)->remote = NULL;
+	/* Re-connect via updated port */
 	tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw);
 
 	/* Update with the new addressing information */
@@ -671,10 +672,7 @@ static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw,
 
 static void remove_switch(struct tb_switch *sw)
 {
-	struct tb_switch *parent_sw;
-
-	parent_sw = tb_to_switch(sw->dev.parent);
-	tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
+	tb_switch_downstream_port(sw)->remote = NULL;
 	tb_switch_remove(sw);
 }
 
@@ -755,7 +753,6 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
 	if (sw) {
 		u8 phy_port, sw_phy_port;
 
-		parent_sw = tb_to_switch(sw->dev.parent);
 		sw_phy_port = tb_phy_port_from_link(sw->link);
 		phy_port = tb_phy_port_from_link(link);
 
@@ -785,7 +782,7 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
 			route = tb_route(sw);
 		}
 
-		update_switch(parent_sw, sw, route, pkg->connection_id,
+		update_switch(sw, route, pkg->connection_id,
 			      pkg->connection_key, link, depth, boot);
 		tb_switch_put(sw);
 		return;
@@ -1236,9 +1233,8 @@ __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr,
 	if (sw) {
 		/* Update the switch if it is still in the same place */
 		if (tb_route(sw) == route && !!sw->authorized == authorized) {
-			parent_sw = tb_to_switch(sw->dev.parent);
-			update_switch(parent_sw, sw, route, pkg->connection_id,
-				      0, 0, 0, boot);
+			update_switch(sw, route, pkg->connection_id, 0, 0, 0,
+				      boot);
 			tb_switch_put(sw);
 			return;
 		}
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 51e86b5171c7..4f3d02c58c9e 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -2754,7 +2754,6 @@ static int tb_switch_update_link_attributes(struct tb_switch *sw)
  */
 int tb_switch_lane_bonding_enable(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_to_switch(sw->dev.parent);
 	struct tb_port *up, *down;
 	u64 route = tb_route(sw);
 	int ret;
@@ -2766,7 +2765,7 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
 		return 0;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(route, parent);
+	down = tb_switch_downstream_port(sw);
 
 	if (!tb_port_is_width_supported(up, 2) ||
 	    !tb_port_is_width_supported(down, 2))
@@ -2808,7 +2807,6 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
  */
 void tb_switch_lane_bonding_disable(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_to_switch(sw->dev.parent);
 	struct tb_port *up, *down;
 
 	if (!tb_route(sw))
@@ -2818,7 +2816,7 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
 	if (!up->bonded)
 		return;
 
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	tb_port_lane_bonding_disable(up);
 	tb_port_lane_bonding_disable(down);
@@ -3476,7 +3474,6 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw,
 
 static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
@@ -3484,7 +3481,7 @@ static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
 		return 0;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	ret = tb_port_pm_secondary_enable(up);
 	if (ret)
 		return ret;
@@ -3494,7 +3491,6 @@ static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
 
 static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	bool up_clx_support, down_clx_support;
 	struct tb_port *up, *down;
 	int ret;
@@ -3510,7 +3506,7 @@ static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 		return 0;
 
 	/* Enable CLx only for first hop router (depth = 1) */
-	if (tb_route(parent))
+	if (tb_route(tb_switch_parent(sw)))
 		return 0;
 
 	ret = tb_switch_pm_secondary_resolve(sw);
@@ -3518,7 +3514,7 @@ static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 		return ret;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 
 	up_clx_support = tb_port_clx_supported(up, clx);
 	down_clx_support = tb_port_clx_supported(down, clx);
@@ -3594,7 +3590,6 @@ int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 
 static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
@@ -3609,11 +3604,11 @@ static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
 		return 0;
 
 	/* Disable CLx only for first hop router (depth = 1) */
-	if (tb_route(parent))
+	if (tb_route(tb_switch_parent(sw)))
 		return 0;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	ret = tb_port_clx_disable(up, clx);
 	if (ret)
 		return ret;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index c1af712ca728..1ab3aa114a17 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -628,7 +628,7 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
 	 * Look up available down port. Since we are chaining it should
 	 * be found right above this switch.
 	 */
-	port = tb_port_at(tb_route(sw), parent);
+	port = tb_switch_downstream_port(sw);
 	down = tb_find_usb3_down(parent, port);
 	if (!down)
 		return 0;
@@ -1378,7 +1378,6 @@
 static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
 {
 	struct tb_port *up, *down, *port;
 	struct tb_cm *tcm = tb_priv(tb);
-	struct tb_switch *parent_sw;
 	struct tb_tunnel *tunnel;
 
 	up = tb_switch_find_port(sw, TB_TYPE_PCIE_UP);
@@ -1389,9 +1388,8 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
 	 * Look up available down port. Since we are chaining it should
 	 * be found right above this switch.
 	 */
-	parent_sw = tb_to_switch(sw->dev.parent);
-	port = tb_port_at(tb_route(sw), parent_sw);
-	down = tb_find_pcie_down(parent_sw, port);
+	port = tb_switch_downstream_port(sw);
+	down = tb_find_pcie_down(tb_switch_parent(sw), port);
 	if (!down)
 		return 0;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index f7728c7acdda..beaeea679e10 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -857,6 +857,20 @@ static inline struct tb_switch *tb_switch_parent(struct tb_switch *sw)
 	return tb_to_switch(sw->dev.parent);
 }
 
+/**
+ * tb_switch_downstream_port() - Return downstream facing port of parent router
+ * @sw: Device router pointer
+ *
+ * Only call for device routers. Returns the downstream facing port of
+ * the parent router.
+ */
+static inline struct tb_port *tb_switch_downstream_port(struct tb_switch *sw)
+{
+	if (WARN_ON(!tb_route(sw)))
+		return NULL;
+	return tb_port_at(tb_route(sw), tb_switch_parent(sw));
+}
+
 static inline bool tb_switch_is_light_ridge(const struct tb_switch *sw)
 {
 	return sw->config.vendor_id == PCI_VENDOR_ID_INTEL &&
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 626aca3124b1..a46203b33c5f 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -398,11 +398,10 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 	if (tb_route(sw)) {
 		bool unidirectional = sw->tmu.unidirectional;
-		struct tb_switch *parent = tb_switch_parent(sw);
 		struct tb_port *down, *up;
 		int ret;
 
-		down = tb_port_at(tb_route(sw), parent);
+		down = tb_switch_downstream_port(sw);
 		up = tb_upstream_port(sw);
 		/*
 		 * In case of uni-directional time sync, TMU handshake is
@@ -442,10 +441,9 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *down, *up;
 
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	up = tb_upstream_port(sw);
 	/*
 	 * In case of any failure in one of the steps when setting
@@ -457,7 +455,8 @@ static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
 	tb_port_tmu_time_sync_disable(down);
 	tb_port_tmu_time_sync_disable(up);
 	if (unidirectional)
-		tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF);
+		tb_switch_tmu_rate_write(tb_switch_parent(sw),
+					 TB_SWITCH_TMU_RATE_OFF);
 	else
 		tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
@@ -472,12 +471,11 @@
  */
 static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 
 	ret = tb_port_tmu_unidirectional_disable(up);
 	if (ret)
@@ -537,13 +535,13 @@ static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
  */
 static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
-	ret = tb_switch_tmu_rate_write(parent, sw->tmu.rate_request);
+	down = tb_switch_downstream_port(sw);
+	ret = tb_switch_tmu_rate_write(tb_switch_parent(sw),
+				       sw->tmu.rate_request);
 	if (ret)
 		return ret;
@@ -576,10 +574,9 @@ static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 
 static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *down, *up;
 
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	up = tb_upstream_port(sw);
 	/*
 	 * In case of any failure in one of the steps when change mode,
@@ -589,7 +586,7 @@ static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 	 */
 	tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional);
 	if (sw->tmu.unidirectional_request)
-		tb_switch_tmu_rate_write(parent, sw->tmu.rate);
+		tb_switch_tmu_rate_write(tb_switch_parent(sw), sw->tmu.rate);
 	else
 		tb_switch_tmu_rate_write(sw, sw->tmu.rate);
@@ -599,18 +596,18 @@
 static int __tb_switch_tmu_change_mode(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	ret = tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional_request);
 	if (ret)
 		goto out;
 
 	if (sw->tmu.unidirectional_request)
-		ret = tb_switch_tmu_rate_write(parent, sw->tmu.rate_request);
+		ret = tb_switch_tmu_rate_write(tb_switch_parent(sw),
+					       sw->tmu.rate_request);
 	else
 		ret = tb_switch_tmu_rate_write(sw, sw->tmu.rate_request);
 	if (ret)
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index 485b6e430686..9f5a98347bee 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -234,8 +234,8 @@ static bool link_is_usb4(struct tb_port *port)
  */
 int usb4_switch_setup(struct tb_switch *sw)
 {
-	struct tb_port *downstream_port;
-	struct tb_switch *parent;
+	struct tb_switch *parent = tb_switch_parent(sw);
+	struct tb_port *down;
 	bool tbt3, xhci;
 	u32 val = 0;
 	int ret;
@@ -249,9 +249,8 @@ int usb4_switch_setup(struct tb_switch *sw)
 	if (ret)
 		return ret;
 
-	parent = tb_switch_parent(sw);
-	downstream_port = tb_port_at(tb_route(sw), parent);
-	sw->link_usb4 = link_is_usb4(downstream_port);
+	down = tb_switch_downstream_port(sw);
+	sw->link_usb4 = link_is_usb4(down);
 	tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ?
"USB4" : "TBT"); xhci = val & ROUTER_CS_6_HCI; From patchwork Mon May 29 10:04:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258373 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C172C7EE2E for ; Mon, 29 May 2023 10:04:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231919AbjE2KEf (ORCPT ); Mon, 29 May 2023 06:04:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53880 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231912AbjE2KEc (ORCPT ); Mon, 29 May 2023 06:04:32 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 64F0AC7 for ; Mon, 29 May 2023 03:04:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354665; x=1716890665; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=aeEjKOCY7V43+0gX+kNnDgo8aWHhA9v8g1czZssCKbM=; b=RDRJXZ+bot52FH44J5RluulusDBcgHmSzmU4GT4HzezEDvwZTtNK6gts NLK81KBcUFnN2fBRk/FkG14SODT0wy2UeJN2vC0Wg6uwinns3qfAj3e+u 34DD0gbfrH8VGppWdmd+TFAlUglsrOI8bGQRy9LIC3H8aMqDU6T81JKmY IxkD6c8qWVOvsyNKH6ZVFWb4QCdQ9nMvGQJCJkJoHdNLLjvkxOhqJOiHI MsymXt7LmjFgsRjiRV2EAVhRmE+e3RSSXLNhdj+MkkJ+4X/3+RUpid6Kl cyK+OtwHdgwfzawdrdP6zngHfTH04uJsx+RU9yyNMjMhrHosGu6POMifV g==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684412" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684412" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:24 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518284" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518284" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:21 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id D3FD753A; Mon, 29 May 2023 13:04:25 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 02/20] thunderbolt: Introduce tb_xdomain_downstream_port() Date: Mon, 29 May 2023 13:04:07 +0300 Message-Id: <20230529100425.6125-3-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org In the same way we did for the routers add a function that returns the parent routers downstream facing port for XDomain devices. 
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.h      | 11 +++++++++++
 drivers/thunderbolt/xdomain.c | 16 +++++++---------
 2 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index beaeea679e10..797d8bb73bfa 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1197,6 +1197,17 @@ static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
 	return tb_to_switch(xd->dev.parent);
 }
 
+/**
+ * tb_xdomain_downstream_port() - Return downstream facing port of parent router
+ * @xd: Xdomain pointer
+ *
+ * Returns the downstream port the XDomain is connected to.
+ */
+static inline struct tb_port *tb_xdomain_downstream_port(struct tb_xdomain *xd)
+{
+	return tb_port_at(xd->route, tb_xdomain_parent(xd));
+}
+
 int tb_retimer_nvm_read(struct tb_retimer *rt, unsigned int address, void *buf,
 			size_t size);
 int tb_retimer_scan(struct tb_port *port, bool add);
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index e2b54887d331..8389961b2d45 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -537,9 +537,8 @@ static int tb_xdp_link_state_status_request(struct tb_ctl *ctl, u64 route,
 static int tb_xdp_link_state_status_response(struct tb *tb, struct tb_ctl *ctl,
 					     struct tb_xdomain *xd, u8 sequence)
 {
-	struct tb_switch *sw = tb_to_switch(xd->dev.parent);
 	struct tb_xdp_link_state_status_response res;
-	struct tb_port *port = tb_port_at(xd->route, sw);
+	struct tb_port *port = tb_xdomain_downstream_port(xd);
 	u32 val[2];
 	int ret;
@@ -1137,7 +1136,7 @@ static int tb_xdomain_update_link_attributes(struct tb_xdomain *xd)
 	struct tb_port *port;
 	int ret;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 
 	ret = tb_port_get_link_speed(port);
 	if (ret < 0)
@@ -1251,8 +1250,7 @@ static int tb_xdomain_get_link_status(struct tb_xdomain *xd)
 static int tb_xdomain_link_state_change(struct tb_xdomain *xd,
 					unsigned int width)
 {
-	struct tb_switch *sw = tb_to_switch(xd->dev.parent);
-	struct tb_port *port = tb_port_at(xd->route, sw);
+	struct tb_port *port = tb_xdomain_downstream_port(xd);
 	struct tb *tb = xd->tb;
 	u8 tlw, tls;
 	u32 val;
@@ -1309,7 +1307,7 @@ static int tb_xdomain_bond_lanes_uuid_high(struct tb_xdomain *xd)
 		return -ETIMEDOUT;
 	}
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 
 	/*
 	 * We can't use tb_xdomain_lane_bonding_enable() here because it
@@ -1425,7 +1423,7 @@ static int tb_xdomain_get_properties(struct tb_xdomain *xd)
 	if (xd->bonding_possible) {
 		struct tb_port *port;
 
-		port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+		port = tb_xdomain_downstream_port(xd);
 		if (!port->bonded)
 			tb_port_disable(port->dual_link_port);
 	}
@@ -1979,7 +1977,7 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
 	struct tb_port *port;
 	int ret;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 	if (!port->dual_link_port)
 		return -ENODEV;
@@ -2024,7 +2022,7 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
 {
 	struct tb_port *port;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 	if (port->dual_link_port) {
 		tb_port_lane_bonding_disable(port);
 		if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)

From patchwork Mon May 29 10:04:08 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 03/20] thunderbolt: Fix a couple of style issues in TMU code
Date: Mon, 29 May 2023 13:04:08 +0300
Message-Id: <20230529100425.6125-4-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

Drop an extra empty line and get rid of the '__' prefix in function
names. No functional changes.
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tmu.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index a46203b33c5f..8614e154be5f 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -395,7 +395,6 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 	if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF)
 		return 0;
 
-
 	if (tb_route(sw)) {
 		bool unidirectional = sw->tmu.unidirectional;
 		struct tb_port *down, *up;
@@ -439,7 +438,7 @@
 	return 0;
 }
 
-static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
+static void tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
 {
 	struct tb_port *down, *up;
@@ -469,7 +468,7 @@
  * This function is called when the previous TMU mode was
  * TB_SWITCH_TMU_RATE_OFF.
  */
-static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
+static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
 {
 	struct tb_port *up, *down;
 	int ret;
@@ -500,7 +499,7 @@
 	return 0;
 
 out:
-	__tb_switch_tmu_off(sw, false);
+	tb_switch_tmu_off(sw, false);
 	return ret;
 }
@@ -533,7 +532,7 @@ static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
  * This function is called when the previous TMU mode was
  * TB_SWITCH_TMU_RATE_OFF.
  */
-static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
+static int tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 {
 	struct tb_port *up, *down;
 	int ret;
@@ -568,11 +567,11 @@ static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 	return 0;
 
 out:
-	__tb_switch_tmu_off(sw, true);
+	tb_switch_tmu_off(sw, true);
 	return ret;
 }
 
-static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
+static void tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 {
 	struct tb_port *down, *up;
@@ -594,7 +593,7 @@ static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 	tb_port_tmu_set_unidirectional(up, sw->tmu.unidirectional);
 }
 
-static int __tb_switch_tmu_change_mode(struct tb_switch *sw)
+static int tb_switch_tmu_change_mode(struct tb_switch *sw)
 {
 	struct tb_port *up, *down;
 	int ret;
@@ -632,7 +631,7 @@
 	return 0;
 
 out:
-	__tb_switch_tmu_change_mode_prev(sw);
+	tb_switch_tmu_change_mode_prev(sw);
 	return ret;
 }
@@ -695,13 +694,13 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 	 */
 	if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) {
 		if (unidirectional)
-			ret = __tb_switch_tmu_enable_unidirectional(sw);
+			ret = tb_switch_tmu_enable_unidirectional(sw);
 		else
-			ret = __tb_switch_tmu_enable_bidirectional(sw);
+			ret = tb_switch_tmu_enable_bidirectional(sw);
 		if (ret)
 			return ret;
 	} else if (sw->tmu.rate == TB_SWITCH_TMU_RATE_NORMAL) {
-		ret = __tb_switch_tmu_change_mode(sw);
+		ret = tb_switch_tmu_change_mode(sw);
 		if (ret)
 			return ret;
 	}

From patchwork Mon May 29 10:04:09 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 04/20] thunderbolt: Drop useless 'unidirectional' parameter from tb_switch_tmu_is_enabled()
Date: Mon, 29 May 2023 13:04:09 +0300
Message-Id: <20230529100425.6125-5-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

There is no point passing it, as we already have a field for that.
While at it, clean up the kernel-doc of things that do not really
belong in the API documentation (these can be figured out from the spec
itself). No functional changes.
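A minimal before/after sketch of the call sites (derived from the diff
below):

	/* Before: callers passed the requested directionality back in. */
	if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
		return 0;

	/*
	 * After: the helper compares the hardware state against the
	 * request already recorded in sw->tmu, so no parameter is
	 * needed.
	 */
	if (tb_switch_tmu_is_enabled(sw))
		return 0;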
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.c  |  2 +-
 drivers/thunderbolt/tb.h  | 10 ++++------
 drivers/thunderbolt/tmu.c | 11 ++++-------
 3 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 1ab3aa114a17..72041e29e544 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -362,7 +362,7 @@ static int tb_enable_tmu(struct tb_switch *sw)
 	int ret;
 
 	/* If it is already enabled in correct mode, don't touch it */
-	if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
+	if (tb_switch_tmu_is_enabled(sw))
 		return 0;
 
 	ret = tb_switch_tmu_disable(sw);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 797d8bb73bfa..0ac653bfd97e 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -995,16 +995,14 @@ void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
 /**
  * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
  * @sw: Router whose TMU mode to check
- * @unidirectional: If uni-directional (bi-directional otherwise)
  *
- * Return true if hardware TMU configuration matches the one passed in
- * as parameter. That is HiFi/Normal and either uni-directional or bi-directional.
+ * Return true if hardware TMU configuration matches the requested
+ * configuration.
  */
-static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw,
-					    bool unidirectional)
+static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
 {
 	return sw->tmu.rate == sw->tmu.rate_request &&
-	       sw->tmu.unidirectional == unidirectional;
+	       sw->tmu.unidirectional == sw->tmu.unidirectional_request;
 }
 
 static inline const char *tb_switch_clx_name(enum tb_clx clx)
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 8614e154be5f..5d508ea8baa5 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -639,12 +639,9 @@ static int tb_switch_tmu_change_mode(struct tb_switch *sw)
  * tb_switch_tmu_enable() - Enable TMU on a router
  * @sw: Router whose TMU to enable
  *
- * Enables TMU of a router to be in uni-directional Normal/HiFi
- * or bi-directional HiFi mode. Calling tb_switch_tmu_configure() is required
- * before calling this function, to select the mode Normal/HiFi and
- * directionality (uni-directional/bi-directional).
- * In HiFi mode all tunneling should work. In Normal mode, DP tunneling can't
- * work. Uni-directional mode is required for CLx (Link Low-Power) to work.
+ * Enables TMU of a router to be in uni-directional Normal/HiFi or
+ * bi-directional HiFi mode. Calling tb_switch_tmu_configure() is
+ * required before calling this function.
  */
 int tb_switch_tmu_enable(struct tb_switch *sw)
 {
@@ -662,7 +659,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 	if (!tb_switch_is_clx_supported(sw))
 		return 0;
 
-	if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
+	if (tb_switch_tmu_is_enabled(sw))
 		return 0;
 
 	if (tb_switch_is_titan_ridge(sw) && unidirectional) {

From patchwork Mon May 29 10:04:10 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 05/20] thunderbolt: Rework Titan Ridge TMU objection disable function
Date: Mon, 29 May 2023 13:04:10 +0300
Message-Id: <20230529100425.6125-6-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

Currently this is split into two functions, one of which has a
misleading name (tb_switch_tmu_unidirectional_enable()). Make this
easier to read by renaming and consolidating the two functions into one
whose name explains what it actually does. Also use the two constants
that were added but never used, to make it clear which bits are being
set.
No functional changes.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tmu.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 5d508ea8baa5..30f18806abb7 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -503,8 +503,10 @@ static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
 	return ret;
 }
 
-static int tb_switch_tmu_objection_mask(struct tb_switch *sw)
+/* Only needed for Titan Ridge */
+static int tb_switch_tmu_disable_objections(struct tb_switch *sw)
 {
+	struct tb_port *up = tb_upstream_port(sw);
 	u32 val;
 	int ret;
@@ -515,17 +517,15 @@
 	val &= ~TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK;
 
-	return tb_sw_write(sw, &val, TB_CFG_SWITCH,
-			   sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
-}
-
-static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
-{
-	struct tb_port *up = tb_upstream_port(sw);
+	ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
+			  sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
+	if (ret)
+		return ret;
 
 	return tb_port_tmu_write(up, TMU_ADP_CS_6,
 				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK,
-				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK);
+				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL1 |
+				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL2);
 }
 
 /*
@@ -670,11 +670,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 		if (!tb_switch_is_clx_enabled(sw, TB_CL1))
 			return -EOPNOTSUPP;
 
-		ret = tb_switch_tmu_objection_mask(sw);
-		if (ret)
-			return ret;
-
-		ret = tb_switch_tmu_unidirectional_enable(sw);
+		ret = tb_switch_tmu_disable_objections(sw);
 		if (ret)
 			return ret;
 	}

From patchwork Mon May 29 10:04:11 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 06/20] thunderbolt: Get rid of tb_switch_enable_tmu_1st_child()
Date: Mon, 29 May 2023 13:04:11 +0300
Message-Id: <20230529100425.6125-7-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

This is better off as part of the software connection manager flows in
tb.c. Also name the new function tb_increase_tmu_accuracy() to match
what it actually does.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.c  | 43 +++++++++++++++++++++++++++++++--------
 drivers/thunderbolt/tb.h  |  2 --
 drivers/thunderbolt/tmu.c | 29 --------------------------
 3 files changed, 34 insertions(+), 40 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 72041e29e544..39ec7094fe17 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -240,6 +240,38 @@ static void tb_discover_dp_resources(struct tb *tb)
 	}
 }
 
+static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data)
+{
+	struct tb_switch *sw;
+
+	sw = tb_to_switch(dev);
+	if (sw) {
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI,
+					tb_switch_is_clx_enabled(sw, TB_CL1));
+		if (tb_switch_tmu_enable(sw))
+			tb_sw_warn(sw, "failed to increase TMU rate\n");
+	}
+
+	return 0;
+}
+
+static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
+{
+	struct tb_switch *sw;
+
+	if (!tunnel)
+		return;
+
+	/*
+	 * Once first DP tunnel is established we change the TMU
+	 * accuracy of first depth child routers (and the host router)
+	 * to the highest. This is needed for the DP tunneling to work
+	 * but also allows CL0s.
+	 */
+	sw = tunnel->tb->root_switch;
+	device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
+}
+
 static void tb_switch_discover_tunnels(struct tb_switch *sw,
 				       struct list_head *list,
 				       bool alloc_hopids)
@@ -253,13 +285,7 @@ static void tb_switch_discover_tunnels(struct tb_switch *sw,
 		switch (port->config.type) {
 		case TB_TYPE_DP_HDMI_IN:
 			tunnel = tb_tunnel_discover_dp(tb, port, alloc_hopids);
-			/*
-			 * In case of DP tunnel exists, change host router's
-			 * 1st children TMU mode to HiFi for CL0s to work.
-			 */
-			if (tunnel)
-				tb_switch_enable_tmu_1st_child(tb->root_switch,
-							       TB_SWITCH_TMU_RATE_HIFI);
+			tb_increase_tmu_accuracy(tunnel);
 			break;
 
 		case TB_TYPE_PCIE_DOWN:
@@ -1263,8 +1289,7 @@ static void tb_tunnel_dp(struct tb *tb)
 	 * In case of DP tunnel exists, change host router's 1st children
 	 * TMU mode to HiFi for CL0s to work.
 	 */
-	tb_switch_enable_tmu_1st_child(tb->root_switch, TB_SWITCH_TMU_RATE_HIFI);
-
+	tb_increase_tmu_accuracy(tunnel);
 	return;
 
 err_free:
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 0ac653bfd97e..8cc64b79f35c 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -990,8 +990,6 @@ int tb_switch_tmu_enable(struct tb_switch *sw);
 void tb_switch_tmu_configure(struct tb_switch *sw,
 			     enum tb_switch_tmu_rate rate,
 			     bool unidirectional);
-void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
-				    enum tb_switch_tmu_rate rate);
 /**
  * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
  * @sw: Router whose TMU mode to check
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 30f18806abb7..84abb783a6d9 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -731,32 +731,3 @@ void tb_switch_tmu_configure(struct tb_switch *sw,
 	sw->tmu.unidirectional_request = unidirectional;
 	sw->tmu.rate_request = rate;
 }
-
-static int tb_switch_tmu_config_enable(struct device *dev, void *rate)
-{
-	if (tb_is_switch(dev)) {
-		struct tb_switch *sw = tb_to_switch(dev);
-
-		tb_switch_tmu_configure(sw, *(enum tb_switch_tmu_rate *)rate,
-					tb_switch_is_clx_enabled(sw, TB_CL1));
-		if (tb_switch_tmu_enable(sw))
-			tb_sw_dbg(sw, "fail switching TMU mode for 1st depth router\n");
-	}
-
-	return 0;
-}
-
-/**
- * tb_switch_enable_tmu_1st_child - Configure and enable TMU for 1st chidren
- * @sw: The router to configure and enable it's children TMU
- * @rate: Rate of the TMU to configure the router's chidren to
- *
- * Configures and enables the TMU mode of 1st depth children of the specified
- * router to the specified rate.
- */
-void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
-				    enum tb_switch_tmu_rate rate)
-{
-	device_for_each_child(&sw->dev, &rate,
-			      tb_switch_tmu_config_enable);
-}

From patchwork Mon May 29 10:04:12 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 07/20] thunderbolt: Move TMU configuration to tb_enable_tmu()
Date: Mon, 29 May 2023 13:04:12 +0300
Message-Id: <20230529100425.6125-8-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

There is no need to duplicate the code that enables TMU. Also update
the comment to better explain why we do this in the first place. No
functional changes.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 39ec7094fe17..0630b877136e 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -387,6 +387,16 @@ static int tb_enable_tmu(struct tb_switch *sw)
 {
 	int ret;
 
+	/*
+	 * If CL1 is enabled then we need to configure the TMU accuracy
+	 * level to normal. Otherwise we keep the TMU running at the
+	 * highest accuracy.
+	 */
+	if (tb_switch_is_clx_enabled(sw, TB_CL1))
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
+	else
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
+
 	/* If it is already enabled in correct mode, don't touch it */
 	if (tb_switch_tmu_is_enabled(sw))
 		return 0;
@@ -873,16 +883,6 @@ static void tb_scan_port(struct tb_port *port)
 			   tb_switch_clx_name(TB_CL1));
 	}
 
-	if (tb_switch_is_clx_enabled(sw, TB_CL1))
-		/*
-		 * To support highest CLx state, we set router's TMU to
-		 * Normal-Uni mode.
-		 */
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
-	else
-		/* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
-
 	if (tb_enable_tmu(sw))
 		tb_sw_warn(sw, "failed to enable TMU\n");
@@ -2035,16 +2035,6 @@ static void tb_restore_children(struct tb_switch *sw)
 		tb_sw_warn(sw, "failed to re-enable %s on upstream port\n",
 			   tb_switch_clx_name(TB_CL1));
 
-	if (tb_switch_is_clx_enabled(sw, TB_CL1))
-		/*
-		 * To support highest CLx state, we set router's TMU to
-		 * Normal-Uni mode.
-		 */
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
-	else
-		/* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
-
 	if (tb_enable_tmu(sw))
 		tb_sw_warn(sw, "failed to restore TMU configuration\n");

From patchwork Mon May 29 10:04:13 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 08/20] thunderbolt: Move tb_enable_tmu() close to other TMU functions
Date: Mon, 29 May 2023 13:04:13 +0300
Message-Id: <20230529100425.6125-9-mika.westerberg@linux.intel.com>
In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com>
References: <20230529100425.6125-1-mika.westerberg@linux.intel.com>

This makes the code easier to follow. No functional changes.
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 58 ++++++++++++++++++++-------------------- 1 file changed, 29 insertions(+), 29 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 0630b877136e..41c353f462e7 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -272,6 +272,35 @@ static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel) device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy); } +static int tb_enable_tmu(struct tb_switch *sw) +{ + int ret; + + /* + * If CL1 is enabled then we need to configure the TMU accuracy + * level to normal. Otherwise we keep the TMU running at the + * highest accuracy. + */ + if (tb_switch_is_clx_enabled(sw, TB_CL1)) + tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); + else + tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); + + /* If it is already enabled in correct mode, don't touch it */ + if (tb_switch_tmu_is_enabled(sw)) + return 0; + + ret = tb_switch_tmu_disable(sw); + if (ret) + return ret; + + ret = tb_switch_tmu_post_time(sw); + if (ret) + return ret; + + return tb_switch_tmu_enable(sw); +} + static void tb_switch_discover_tunnels(struct tb_switch *sw, struct list_head *list, bool alloc_hopids) @@ -383,35 +412,6 @@ static void tb_scan_xdomain(struct tb_port *port) } } -static int tb_enable_tmu(struct tb_switch *sw) -{ - int ret; - - /* - * If CL1 is enabled then we need to configure the TMU accuracy - * level to normal. Otherwise we keep the TMU running at the - * highest accuracy. - */ - if (tb_switch_is_clx_enabled(sw, TB_CL1)) - tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); - else - tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); - - /* If it is already enabled in correct mode, don't touch it */ - if (tb_switch_tmu_is_enabled(sw)) - return 0; - - ret = tb_switch_tmu_disable(sw); - if (ret) - return ret; - - ret = tb_switch_tmu_post_time(sw); - if (ret) - return ret; - - return tb_switch_tmu_enable(sw); -} - /** * tb_find_unused_port() - return the first inactive port on @sw * @sw: Switch to find the port on From patchwork Mon May 29 10:04:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258374 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 799CDC7EE2F for ; Mon, 29 May 2023 10:04:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231920AbjE2KEg (ORCPT ); Mon, 29 May 2023 06:04:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53912 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231913AbjE2KEf (ORCPT ); Mon, 29 May 2023 06:04:35 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 74599B5 for ; Mon, 29 May 2023 03:04:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354674; x=1716890674; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=wF5KjFmzuNFzp2s2TaW5HikN5gf8ZbKj9Zl30x3Pgbs=; b=OOjrLhpjMdFFAR1jhI+h9b83yKA0g01qaBpOqo93vwitqQzSGMr/LFYM EX5gVHsr8P3XnF4xtHwV6cRMoDQYWoM9pgdTdkXo9qkgygOU0lNaupFhm 
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 09/20] thunderbolt: Check valid TMU configuration in tb_switch_tmu_configure()
Date: Mon, 29 May 2023 13:04:14 +0300
Message-Id: <20230529100425.6125-10-mika.westerberg@linux.intel.com>

Instead of checking this at enable time, we can do it already in tb_switch_tmu_configure(). Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 6 ++++-- drivers/thunderbolt/tb.h | 5 ++--- drivers/thunderbolt/tmu.c | 13 ++++++++----- 3 files changed, 14 insertions(+), 10 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 41c353f462e7..91459bf2fd0f 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -282,9 +282,11 @@ static int tb_enable_tmu(struct tb_switch *sw) * highest accuracy.
*/ if (tb_switch_is_clx_enabled(sw, TB_CL1)) - tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); + ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); else - tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); + ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); + if (ret) + return ret; /* If it is already enabled in correct mode, don't touch it */ if (tb_switch_tmu_is_enabled(sw)) diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 8cc64b79f35c..07e4e7b37f13 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -987,9 +987,8 @@ int tb_switch_tmu_init(struct tb_switch *sw); int tb_switch_tmu_post_time(struct tb_switch *sw); int tb_switch_tmu_disable(struct tb_switch *sw); int tb_switch_tmu_enable(struct tb_switch *sw); -void tb_switch_tmu_configure(struct tb_switch *sw, - enum tb_switch_tmu_rate rate, - bool unidirectional); +int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_rate rate, + bool unidirectional); /** * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled * @sw: Router whose TMU mode to check diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index 84abb783a6d9..be310d97ea7b 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -648,9 +648,6 @@ int tb_switch_tmu_enable(struct tb_switch *sw) bool unidirectional = sw->tmu.unidirectional_request; int ret; - if (unidirectional && !sw->tmu.has_ucap) - return -EOPNOTSUPP; - /* * No need to enable TMU on devices that don't support CLx since on * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi @@ -724,10 +721,16 @@ int tb_switch_tmu_enable(struct tb_switch *sw) * * Selects the rate of the TMU and directionality (uni-directional or * bi-directional). Must be called before tb_switch_tmu_enable(). + * + * Returns %0 in success and negative errno otherwise. 
*/ -void tb_switch_tmu_configure(struct tb_switch *sw, - enum tb_switch_tmu_rate rate, bool unidirectional) +int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_rate rate, + bool unidirectional) { + if (unidirectional && !sw->tmu.has_ucap) + return -EINVAL; + sw->tmu.unidirectional_request = unidirectional; sw->tmu.rate_request = rate; + return 0; }

From patchwork Mon May 29 10:04:15 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 10/20] thunderbolt: Move CLx support functions into clx.c
Date: Mon, 29 May 2023 13:04:15 +0300
Message-Id: <20230529100425.6125-11-mika.westerberg@linux.intel.com>

These really don't belong to switch.c so move them into their own file. While doing this, rename the functions to match the conventions used elsewhere in the driver. No functional changes.
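Callers outside clx.c are expected to go through the renamed tb_switch_clx_* entry points only. A rough sketch of the resulting call-site pattern in tb.c (example_enable_low_power() is a made-up wrapper for illustration; the calls inside it are the driver's own):

static void example_enable_low_power(struct tb_switch *sw)
{
	int ret;

	/* CL0s and CL1 are enabled and supported together */
	ret = tb_switch_clx_enable(sw, TB_CL1);
	if (ret && ret != -EOPNOTSUPP)
		tb_sw_warn(sw, "failed to enable %s on upstream port\n",
			   tb_switch_clx_name(TB_CL1));
}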
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/Makefile | 2 +- drivers/thunderbolt/clx.c | 362 ++++++++++++++++++++++++++++++++++ drivers/thunderbolt/debugfs.c | 2 +- drivers/thunderbolt/switch.c | 362 +--------------------------------- drivers/thunderbolt/tb.c | 8 +- drivers/thunderbolt/tb.h | 17 +- drivers/thunderbolt/tmu.c | 6 +- 7 files changed, 381 insertions(+), 378 deletions(-) create mode 100644 drivers/thunderbolt/clx.c diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile index 78fd365893c1..c8b3d7b78098 100644 --- a/drivers/thunderbolt/Makefile +++ b/drivers/thunderbolt/Makefile @@ -2,7 +2,7 @@ obj-${CONFIG_USB4} := thunderbolt.o thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o -thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o +thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o clx.o thunderbolt-${CONFIG_ACPI} += acpi.o thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c new file mode 100644 index 000000000000..d5b46a8e57c9 --- /dev/null +++ b/drivers/thunderbolt/clx.c @@ -0,0 +1,362 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * CLx support + * + * Copyright (C) 2020 - 2023, Intel Corporation + * Authors: Gil Fine + * Mika Westerberg + */ + +#include <linux/module.h> + +#include "tb.h" + +static bool clx_enabled = true; +module_param_named(clx, clx_enabled, bool, 0444); +MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)"); + +static int tb_port_pm_secondary_set(struct tb_port *port, bool secondary) +{ + u32 phy; + int ret; + + ret = tb_port_read(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return ret; + + if (secondary) + phy |= LANE_ADP_CS_1_PMS; + else + phy &= ~LANE_ADP_CS_1_PMS; + + return tb_port_write(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); +} + +static int tb_port_pm_secondary_enable(struct tb_port *port) +{ + return tb_port_pm_secondary_set(port, true); +} + +static int tb_port_pm_secondary_disable(struct tb_port *port) +{ + return tb_port_pm_secondary_set(port, false); +} + +/* Called for USB4 or Titan Ridge routers only */ +static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask) +{ + u32 val, mask = 0; + bool ret; + + /* Don't enable CLx in case of two single-lane links */ + if (!port->bonded && port->dual_link_port) + return false; + + /* Don't enable CLx in case of inter-domain link */ + if (port->xdomain) + return false; + + if (tb_switch_is_usb4(port->sw)) { + if (!usb4_port_clx_supported(port)) + return false; + } else if (!tb_lc_is_clx_supported(port)) { + return false; + } + + if (clx_mask & TB_CL1) { + /* CL0s and CL1 are enabled and supported together */ + mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT; + } + if (clx_mask & TB_CL2) + mask |= LANE_ADP_CS_0_CL2_SUPPORT; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_0, 1); + if (ret) + return false; + + return !!(val & mask); +} + +static int tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable) +{ + u32 phy, mask; + int ret; + + /* CL0s and CL1 are enabled and supported together */ + if (clx == TB_CL1) + mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; + else + /* For now we support only CL0s and CL1.
Not CL2 */ + return -EOPNOTSUPP; + + ret = tb_port_read(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return ret; + + if (enable) + phy |= mask; + else + phy &= ~mask; + + return tb_port_write(port, &phy, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); +} + +static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx) +{ + return tb_port_clx_set(port, clx, false); +} + +static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx) +{ + return tb_port_clx_set(port, clx, true); +} + +/** + * tb_port_clx_is_enabled() - Is given CL state enabled + * @port: USB4 port to check + * @clx_mask: Mask of CL states to check + * + * Returns true if any of the given CL states is enabled for @port. + */ +bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask) +{ + u32 val, mask = 0; + int ret; + + if (!tb_port_clx_supported(port, clx_mask)) + return false; + + if (clx_mask & TB_CL1) + mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; + if (clx_mask & TB_CL2) + mask |= LANE_ADP_CS_1_CL2_ENABLE; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return false; + + return !!(val & mask); +} + +static int tb_switch_pm_secondary_resolve(struct tb_switch *sw) +{ + struct tb_port *up, *down; + int ret; + + if (!tb_route(sw)) + return 0; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + ret = tb_port_pm_secondary_enable(up); + if (ret) + return ret; + + return tb_port_pm_secondary_disable(down); +} + +static int tb_switch_mask_clx_objections(struct tb_switch *sw) +{ + int up_port = sw->config.upstream_port_number; + u32 offset, val[2], mask_obj, unmask_obj; + int ret, i; + + /* Only Titan Ridge of pre-USB4 devices support CLx states */ + if (!tb_switch_is_titan_ridge(sw)) + return 0; + + if (!tb_route(sw)) + return 0; + + /* + * In Titan Ridge there are only 2 dual-lane Thunderbolt ports: + * Port A consists of lane adapters 1,2 and + * Port B consists of lane adapters 3,4 + * If upstream port is A, (lanes are 1,2), we mask objections from + * port B (lanes 3,4) and unmask objections from Port A and vice-versa. + */ + if (up_port == 1) { + mask_obj = TB_LOW_PWR_C0_PORT_B_MASK; + unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK; + offset = TB_LOW_PWR_C1_CL1; + } else { + mask_obj = TB_LOW_PWR_C1_PORT_A_MASK; + unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK; + offset = TB_LOW_PWR_C3_CL1; + } + + ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, + sw->cap_lp + offset, ARRAY_SIZE(val)); + if (ret) + return ret; + + for (i = 0; i < ARRAY_SIZE(val); i++) { + val[i] |= mask_obj; + val[i] &= ~unmask_obj; + } + + return tb_sw_write(sw, &val, TB_CFG_SWITCH, + sw->cap_lp + offset, ARRAY_SIZE(val)); +} + +static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) +{ + bool up_clx_support, down_clx_support; + struct tb_port *up, *down; + int ret; + + if (!tb_switch_clx_is_supported(sw)) + return 0; + + /* + * Enable CLx for host router's downstream port as part of the + * downstream router enabling procedure. + */ + if (!tb_route(sw)) + return 0; + + /* Enable CLx only for first hop router (depth = 1) */ + if (tb_route(tb_switch_parent(sw))) + return 0; + + ret = tb_switch_pm_secondary_resolve(sw); + if (ret) + return ret; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + + up_clx_support = tb_port_clx_supported(up, clx); + down_clx_support = tb_port_clx_supported(down, clx); + + tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx), + up_clx_support ? 
"" : "not "); + tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx), + down_clx_support ? "" : "not "); + + if (!up_clx_support || !down_clx_support) + return -EOPNOTSUPP; + + ret = tb_port_clx_enable(up, clx); + if (ret) + return ret; + + ret = tb_port_clx_enable(down, clx); + if (ret) { + tb_port_clx_disable(up, clx); + return ret; + } + + ret = tb_switch_mask_clx_objections(sw); + if (ret) { + tb_port_clx_disable(up, clx); + tb_port_clx_disable(down, clx); + return ret; + } + + sw->clx = clx; + + tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx)); + return 0; +} + +/** + * tb_switch_clx_enable() - Enable CLx on upstream port of specified router + * @sw: Router to enable CLx for + * @clx: The CLx state to enable + * + * Enable CLx state only for first hop router. That is the most common + * use-case, that is intended for better thermal management, and so helps + * to improve performance. CLx is enabled only if both sides of the link + * support CLx, and if both sides of the link are not configured as two + * single lane links and only if the link is not inter-domain link. The + * complete set of conditions is described in CM Guide 1.0 section 8.1. + * + * Return: Returns 0 on success or an error code on failure. + */ +int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx) +{ + struct tb_switch *root_sw = sw->tb->root_switch; + + if (!clx_enabled) + return 0; + + /* + * CLx is not enabled and validated on Intel USB4 platforms before + * Alder Lake. + */ + if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw)) + return 0; + + switch (clx) { + case TB_CL1: + /* CL0s and CL1 are enabled and supported together */ + return __tb_switch_enable_clx(sw, clx); + + default: + return -EOPNOTSUPP; + } +} + +static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) +{ + struct tb_port *up, *down; + int ret; + + if (!tb_switch_clx_is_supported(sw)) + return 0; + + /* + * Disable CLx for host router's downstream port as part of the + * downstream router enabling procedure. + */ + if (!tb_route(sw)) + return 0; + + /* Disable CLx only for first hop router (depth = 1) */ + if (tb_route(tb_switch_parent(sw))) + return 0; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + ret = tb_port_clx_disable(up, clx); + if (ret) + return ret; + + ret = tb_port_clx_disable(down, clx); + if (ret) + return ret; + + sw->clx = TB_CLX_DISABLE; + + tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx)); + return 0; +} + +/** + * tb_switch_cls_disable() - Disable CLx on upstream port of specified router + * @sw: Router to disable CLx for + * @clx: The CLx state to disable + * + * Return: Returns 0 on success or an error code on failure. + */ +int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx) +{ + if (!clx_enabled) + return 0; + + switch (clx) { + case TB_CL1: + /* CL0s and CL1 are enabled and supported together */ + return __tb_switch_disable_clx(sw, clx); + + default: + return -EOPNOTSUPP; + } +} diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c index f92ad71ef983..e376ad25bf60 100644 --- a/drivers/thunderbolt/debugfs.c +++ b/drivers/thunderbolt/debugfs.c @@ -570,7 +570,7 @@ static int margining_run_write(void *data, u64 val) * CL states may interfere with lane margining so inform the user know * and bail out. 
*/ - if (tb_port_is_clx_enabled(port, TB_CL1 | TB_CL2)) { + if (tb_port_clx_is_enabled(port, TB_CL1 | TB_CL2)) { tb_port_warn(port, "CL states are enabled, Disable them with clx=0 and re-connect\n"); ret = -EINVAL; diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 4f3d02c58c9e..984b5536e143 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -26,10 +26,6 @@ struct nvm_auth_status { u32 status; }; -static bool clx_enabled = true; -module_param_named(clx, clx_enabled, bool, 0444); -MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)"); - /* * Hold NVM authentication failure status per switch This information * needs to stay around even when the switch gets power cycled so we @@ -1183,135 +1179,6 @@ int tb_port_update_credits(struct tb_port *port) return tb_port_do_update_credits(port->dual_link_port); } -static int __tb_port_pm_secondary_set(struct tb_port *port, bool secondary) -{ - u32 phy; - int ret; - - ret = tb_port_read(port, &phy, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); - if (ret) - return ret; - - if (secondary) - phy |= LANE_ADP_CS_1_PMS; - else - phy &= ~LANE_ADP_CS_1_PMS; - - return tb_port_write(port, &phy, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); -} - -static int tb_port_pm_secondary_enable(struct tb_port *port) -{ - return __tb_port_pm_secondary_set(port, true); -} - -static int tb_port_pm_secondary_disable(struct tb_port *port) -{ - return __tb_port_pm_secondary_set(port, false); -} - -/* Called for USB4 or Titan Ridge routers only */ -static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask) -{ - u32 val, mask = 0; - bool ret; - - /* Don't enable CLx in case of two single-lane links */ - if (!port->bonded && port->dual_link_port) - return false; - - /* Don't enable CLx in case of inter-domain link */ - if (port->xdomain) - return false; - - if (tb_switch_is_usb4(port->sw)) { - if (!usb4_port_clx_supported(port)) - return false; - } else if (!tb_lc_is_clx_supported(port)) { - return false; - } - - if (clx_mask & TB_CL1) { - /* CL0s and CL1 are enabled and supported together */ - mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT; - } - if (clx_mask & TB_CL2) - mask |= LANE_ADP_CS_0_CL2_SUPPORT; - - ret = tb_port_read(port, &val, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_0, 1); - if (ret) - return false; - - return !!(val & mask); -} - -static int __tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable) -{ - u32 phy, mask; - int ret; - - /* CL0s and CL1 are enabled and supported together */ - if (clx == TB_CL1) - mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; - else - /* For now we support only CL0s and CL1. Not CL2 */ - return -EOPNOTSUPP; - - ret = tb_port_read(port, &phy, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); - if (ret) - return ret; - - if (enable) - phy |= mask; - else - phy &= ~mask; - - return tb_port_write(port, &phy, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); -} - -static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx) -{ - return __tb_port_clx_set(port, clx, false); -} - -static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx) -{ - return __tb_port_clx_set(port, clx, true); -} - -/** - * tb_port_is_clx_enabled() - Is given CL state enabled - * @port: USB4 port to check - * @clx_mask: Mask of CL states to check - * - * Returns true if any of the given CL states is enabled for @port. 
- */ -bool tb_port_is_clx_enabled(struct tb_port *port, unsigned int clx_mask) -{ - u32 val, mask = 0; - int ret; - - if (!tb_port_clx_supported(port, clx_mask)) - return false; - - if (clx_mask & TB_CL1) - mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; - if (clx_mask & TB_CL2) - mask |= LANE_ADP_CS_1_CL2_ENABLE; - - ret = tb_port_read(port, &val, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); - if (ret) - return false; - - return !!(val & mask); -} - static int tb_port_start_lane_initialization(struct tb_port *port) { int ret; @@ -3246,8 +3113,8 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime) * done for USB4 device too as CLx is re-enabled at resume. * CL0s and CL1 are enabled and supported together. */ - if (tb_switch_is_clx_enabled(sw, TB_CL1)) { - if (tb_switch_disable_clx(sw, TB_CL1)) + if (tb_switch_clx_is_enabled(sw, TB_CL1)) { + if (tb_switch_clx_disable(sw, TB_CL1)) tb_sw_warn(sw, "failed to disable %s on upstream port\n", tb_switch_clx_name(TB_CL1)); } @@ -3472,231 +3339,6 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw, return NULL; } -static int tb_switch_pm_secondary_resolve(struct tb_switch *sw) -{ - struct tb_port *up, *down; - int ret; - - if (!tb_route(sw)) - return 0; - - up = tb_upstream_port(sw); - down = tb_switch_downstream_port(sw); - ret = tb_port_pm_secondary_enable(up); - if (ret) - return ret; - - return tb_port_pm_secondary_disable(down); -} - -static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - bool up_clx_support, down_clx_support; - struct tb_port *up, *down; - int ret; - - if (!tb_switch_is_clx_supported(sw)) - return 0; - - /* - * Enable CLx for host router's downstream port as part of the - * downstream router enabling procedure. - */ - if (!tb_route(sw)) - return 0; - - /* Enable CLx only for first hop router (depth = 1) */ - if (tb_route(tb_switch_parent(sw))) - return 0; - - ret = tb_switch_pm_secondary_resolve(sw); - if (ret) - return ret; - - up = tb_upstream_port(sw); - down = tb_switch_downstream_port(sw); - - up_clx_support = tb_port_clx_supported(up, clx); - down_clx_support = tb_port_clx_supported(down, clx); - - tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx), - up_clx_support ? "" : "not "); - tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx), - down_clx_support ? "" : "not "); - - if (!up_clx_support || !down_clx_support) - return -EOPNOTSUPP; - - ret = tb_port_clx_enable(up, clx); - if (ret) - return ret; - - ret = tb_port_clx_enable(down, clx); - if (ret) { - tb_port_clx_disable(up, clx); - return ret; - } - - ret = tb_switch_mask_clx_objections(sw); - if (ret) { - tb_port_clx_disable(up, clx); - tb_port_clx_disable(down, clx); - return ret; - } - - sw->clx = clx; - - tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx)); - return 0; -} - -/** - * tb_switch_enable_clx() - Enable CLx on upstream port of specified router - * @sw: Router to enable CLx for - * @clx: The CLx state to enable - * - * Enable CLx state only for first hop router. That is the most common - * use-case, that is intended for better thermal management, and so helps - * to improve performance. CLx is enabled only if both sides of the link - * support CLx, and if both sides of the link are not configured as two - * single lane links and only if the link is not inter-domain link. The - * complete set of conditions is described in CM Guide 1.0 section 8.1. - * - * Return: Returns 0 on success or an error code on failure. 
- */ -int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - struct tb_switch *root_sw = sw->tb->root_switch; - - if (!clx_enabled) - return 0; - - /* - * CLx is not enabled and validated on Intel USB4 platforms before - * Alder Lake. - */ - if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw)) - return 0; - - switch (clx) { - case TB_CL1: - /* CL0s and CL1 are enabled and supported together */ - return __tb_switch_enable_clx(sw, clx); - - default: - return -EOPNOTSUPP; - } -} - -static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - struct tb_port *up, *down; - int ret; - - if (!tb_switch_is_clx_supported(sw)) - return 0; - - /* - * Disable CLx for host router's downstream port as part of the - * downstream router enabling procedure. - */ - if (!tb_route(sw)) - return 0; - - /* Disable CLx only for first hop router (depth = 1) */ - if (tb_route(tb_switch_parent(sw))) - return 0; - - up = tb_upstream_port(sw); - down = tb_switch_downstream_port(sw); - ret = tb_port_clx_disable(up, clx); - if (ret) - return ret; - - ret = tb_port_clx_disable(down, clx); - if (ret) - return ret; - - sw->clx = TB_CLX_DISABLE; - - tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx)); - return 0; -} - -/** - * tb_switch_disable_clx() - Disable CLx on upstream port of specified router - * @sw: Router to disable CLx for - * @clx: The CLx state to disable - * - * Return: Returns 0 on success or an error code on failure. - */ -int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - if (!clx_enabled) - return 0; - - switch (clx) { - case TB_CL1: - /* CL0s and CL1 are enabled and supported together */ - return __tb_switch_disable_clx(sw, clx); - - default: - return -EOPNOTSUPP; - } -} - -/** - * tb_switch_mask_clx_objections() - Mask CLx objections for a router - * @sw: Router to mask objections for - * - * Mask the objections coming from the second depth routers in order to - * stop these objections from interfering with the CLx states of the first - * depth link. - */ -int tb_switch_mask_clx_objections(struct tb_switch *sw) -{ - int up_port = sw->config.upstream_port_number; - u32 offset, val[2], mask_obj, unmask_obj; - int ret, i; - - /* Only Titan Ridge of pre-USB4 devices support CLx states */ - if (!tb_switch_is_titan_ridge(sw)) - return 0; - - if (!tb_route(sw)) - return 0; - - /* - * In Titan Ridge there are only 2 dual-lane Thunderbolt ports: - * Port A consists of lane adapters 1,2 and - * Port B consists of lane adapters 3,4 - * If upstream port is A, (lanes are 1,2), we mask objections from - * port B (lanes 3,4) and unmask objections from Port A and vice-versa. - */ - if (up_port == 1) { - mask_obj = TB_LOW_PWR_C0_PORT_B_MASK; - unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK; - offset = TB_LOW_PWR_C1_CL1; - } else { - mask_obj = TB_LOW_PWR_C1_PORT_A_MASK; - unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK; - offset = TB_LOW_PWR_C3_CL1; - } - - ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, - sw->cap_lp + offset, ARRAY_SIZE(val)); - if (ret) - return ret; - - for (i = 0; i < ARRAY_SIZE(val); i++) { - val[i] |= mask_obj; - val[i] &= ~unmask_obj; - } - - return tb_sw_write(sw, &val, TB_CFG_SWITCH, - sw->cap_lp + offset, ARRAY_SIZE(val)); -} - /* * Can be used for read/write a specified PCIe bridge for any Thunderbolt 3 * device. For now used only for Titan Ridge. 
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 91459bf2fd0f..c7cfd740520a 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -247,7 +247,7 @@ static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data) sw = tb_to_switch(dev); if (sw) { tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, - tb_switch_is_clx_enabled(sw, TB_CL1)); + tb_switch_clx_is_enabled(sw, TB_CL1)); if (tb_switch_tmu_enable(sw)) tb_sw_warn(sw, "failed to increase TMU rate\n"); } @@ -281,7 +281,7 @@ static int tb_enable_tmu(struct tb_switch *sw) * level to normal. Otherwise we keep the TMU running at the * highest accuracy. */ - if (tb_switch_is_clx_enabled(sw, TB_CL1)) + if (tb_switch_clx_is_enabled(sw, TB_CL1)) ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true); else ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false); @@ -879,7 +879,7 @@ static void tb_scan_port(struct tb_port *port) if (discovery) { tb_sw_dbg(sw, "discovery, not touching CL states\n"); } else { - ret = tb_switch_enable_clx(sw, TB_CL1); + ret = tb_switch_clx_enable(sw, TB_CL1); if (ret && ret != -EOPNOTSUPP) tb_sw_warn(sw, "failed to enable %s on upstream port\n", tb_switch_clx_name(TB_CL1)); @@ -2032,7 +2032,7 @@ static void tb_restore_children(struct tb_switch *sw) * CL0s and CL1 are enabled and supported together. * Silently ignore CLx re-enabling in case CLx is not supported. */ - ret = tb_switch_enable_clx(sw, TB_CL1); + ret = tb_switch_clx_enable(sw, TB_CL1); if (ret && ret != -EOPNOTSUPP) tb_sw_warn(sw, "failed to re-enable %s on upstream port\n", tb_switch_clx_name(TB_CL1)); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 07e4e7b37f13..d29bc7eab051 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -1002,6 +1002,8 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) sw->tmu.unidirectional == sw->tmu.unidirectional_request; } +bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask); + static inline const char *tb_switch_clx_name(enum tb_clx clx) { switch (clx) { @@ -1013,28 +1015,28 @@ static inline const char *tb_switch_clx_name(enum tb_clx clx) } } -int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx); -int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx); +int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx); +int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx); /** - * tb_switch_is_clx_enabled() - Checks if the CLx is enabled + * tb_switch_clx_is_enabled() - Checks if the CLx is enabled * @sw: Router to check for the CLx * @clx: The CLx state to check for * * Checks if the specified CLx is enabled on the router upstream link. * Not applicable for a host router. 
*/ -static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw, +static inline bool tb_switch_clx_is_enabled(const struct tb_switch *sw, enum tb_clx clx) { return sw->clx == clx; } /** - * tb_switch_is_clx_supported() - Is CLx supported on this type of router + * tb_switch_clx_is_supported() - Is CLx supported on this type of router * @sw: The router to check CLx support for */ -static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw) +static inline bool tb_switch_clx_is_supported(const struct tb_switch *sw) { if (sw->quirks & QUIRK_NO_CLX) return false; @@ -1042,8 +1044,6 @@ static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw) return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw); } -int tb_switch_mask_clx_objections(struct tb_switch *sw); - int tb_switch_pcie_l1_enable(struct tb_switch *sw); int tb_switch_xhci_connect(struct tb_switch *sw); @@ -1089,7 +1089,6 @@ void tb_port_lane_bonding_disable(struct tb_port *port); int tb_port_wait_for_link_width(struct tb_port *port, int width, int timeout_msec); int tb_port_update_credits(struct tb_port *port); -bool tb_port_is_clx_enabled(struct tb_port *port, unsigned int clx); int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec); int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap); diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index be310d97ea7b..6988704c845c 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -388,7 +388,7 @@ int tb_switch_tmu_disable(struct tb_switch *sw) * on these devices e.g. Alpine Ridge and earlier, the TMU mode * HiFi bi-directional is enabled by default and we don't change it. */ - if (!tb_switch_is_clx_supported(sw)) + if (!tb_switch_clx_is_supported(sw)) return 0; /* Already disabled? */ @@ -653,7 +653,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw) * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi * bi-directional is enabled by default. */ - if (!tb_switch_is_clx_supported(sw)) + if (!tb_switch_clx_is_supported(sw)) return 0; if (tb_switch_tmu_is_enabled(sw)) @@ -664,7 +664,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw) * Titan Ridge supports CL0s and CL1 only. CL0s and CL1 are * enabled and supported together. 
*/ - if (!tb_switch_is_clx_enabled(sw, TB_CL1)) + if (!tb_switch_clx_is_enabled(sw, TB_CL1)) return -EOPNOTSUPP; ret = tb_switch_tmu_disable_objections(sw);

From patchwork Mon May 29 10:04:16 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 11/20] thunderbolt: Get rid of __tb_switch_[en|dis]able_clx()
Date: Mon, 29 May 2023 13:04:16 +0300
Message-Id: <20230529100425.6125-12-mika.westerberg@linux.intel.com>

No need to have separate functions for these, so fold them into tb_switch_clx_enable() and tb_switch_clx_disable(). No functional changes.
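The shape of the fold: the argument screening that used to live in the thin exported wrapper now sits at the top of the one remaining function, ahead of the body that used to be the double-underscore helper. A condensed sketch (body elided; the diff below has the real thing):

int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx)
{
	/* Checks formerly done in the exported wrapper */
	if (!clx_enabled)
		return 0;

	switch (clx) {
	case TB_CL1:
		/* CL0s and CL1 are enabled and supported together */
		break;
	default:
		return -EOPNOTSUPP;
	}

	/* ... body formerly in __tb_switch_enable_clx() ... */
	return 0;
}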
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 91 ++++++++++++++++++--------------------- 1 file changed, 42 insertions(+), 49 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index d5b46a8e57c9..d1a502525425 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -205,12 +205,46 @@ static int tb_switch_mask_clx_objections(struct tb_switch *sw) sw->cap_lp + offset, ARRAY_SIZE(val)); } -static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) +/** + * tb_switch_clx_enable() - Enable CLx on upstream port of specified router + * @sw: Router to enable CLx for + * @clx: The CLx state to enable + * + * Enable CLx state only for first hop router. That is the most common + * use-case, that is intended for better thermal management, and so helps + * to improve performance. CLx is enabled only if both sides of the link + * support CLx, and if both sides of the link are not configured as two + * single lane links and only if the link is not inter-domain link. The + * complete set of conditions is described in CM Guide 1.0 section 8.1. + * + * Return: Returns 0 on success or an error code on failure. + */ +int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx) { + struct tb_switch *root_sw = sw->tb->root_switch; bool up_clx_support, down_clx_support; struct tb_port *up, *down; int ret; + if (!clx_enabled) + return 0; + + /* + * CLx is not enabled and validated on Intel USB4 platforms before + * Alder Lake. + */ + if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw)) + return 0; + + switch (clx) { + case TB_CL1: + /* CL0s and CL1 are enabled and supported together */ + break; + + default: + return -EOPNOTSUPP; + } + if (!tb_switch_clx_is_supported(sw)) return 0; @@ -267,47 +301,28 @@ static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx) } /** - * tb_switch_clx_enable() - Enable CLx on upstream port of specified router - * @sw: Router to enable CLx for - * @clx: The CLx state to enable - * - * Enable CLx state only for first hop router. That is the most common - * use-case, that is intended for better thermal management, and so helps - * to improve performance. CLx is enabled only if both sides of the link - * support CLx, and if both sides of the link are not configured as two - * single lane links and only if the link is not inter-domain link. The - * complete set of conditions is described in CM Guide 1.0 section 8.1. + * tb_switch_clx_disable() - Disable CLx on upstream port of specified router + * @sw: Router to disable CLx for + * @clx: The CLx state to disable * * Return: Returns 0 on success or an error code on failure. */ -int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx) +int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx) { - struct tb_switch *root_sw = sw->tb->root_switch; + struct tb_port *up, *down; + int ret; if (!clx_enabled) return 0; - /* - * CLx is not enabled and validated on Intel USB4 platforms before - * Alder Lake. 
- */ - if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw)) - return 0; - switch (clx) { case TB_CL1: /* CL0s and CL1 are enabled and supported together */ - return __tb_switch_enable_clx(sw, clx); + break; default: return -EOPNOTSUPP; } -} - -static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) -{ - struct tb_port *up, *down; - int ret; if (!tb_switch_clx_is_supported(sw)) return 0; @@ -338,25 +353,3 @@ static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx) tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx)); return 0; } - -/** - * tb_switch_cls_disable() - Disable CLx on upstream port of specified router - * @sw: Router to disable CLx for - * @clx: The CLx state to disable - * - * Return: Returns 0 on success or an error code on failure. - */ -int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx) -{ - if (!clx_enabled) - return 0; - - switch (clx) { - case TB_CL1: - /* CL0s and CL1 are enabled and supported together */ - return __tb_switch_disable_clx(sw, clx); - - default: - return -EOPNOTSUPP; - } -}

From patchwork Mon May 29 10:04:17 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 12/20]
thunderbolt: Move CLx enabling into tb_enable_clx()
Date: Mon, 29 May 2023 13:04:17 +0300
Message-Id: <20230529100425.6125-13-mika.westerberg@linux.intel.com>

This avoids some duplication and makes the flow slightly easier to understand. It also follows what we do in tb_enable_tmu(). Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tb.c | 34 +++++++++++++++++----------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index c7cfd740520a..e4f1233eb958 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -240,6 +240,18 @@ static void tb_discover_dp_resources(struct tb *tb) } } +static int tb_enable_clx(struct tb_switch *sw) +{ + int ret; + + /* + * CL0s and CL1 are enabled and supported together. + * Silently ignore CLx enabling in case CLx is not supported. + */ + ret = tb_switch_clx_enable(sw, TB_CL1); + return ret == -EOPNOTSUPP ? 0 : ret; +} + static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data) { struct tb_switch *sw; @@ -777,7 +789,6 @@ static void tb_scan_port(struct tb_port *port) struct tb_port *upstream_port; bool discovery = false; struct tb_switch *sw; - int ret; if (tb_is_upstream_port(port)) return; @@ -876,14 +887,10 @@ static void tb_scan_port(struct tb_port *port) * CL0s and CL1 are enabled and supported together. * Silently ignore CLx enabling in case CLx is not supported. */ - if (discovery) { + if (discovery) tb_sw_dbg(sw, "discovery, not touching CL states\n"); - } else { - ret = tb_switch_clx_enable(sw, TB_CL1); - if (ret && ret != -EOPNOTSUPP) - tb_sw_warn(sw, "failed to enable %s on upstream port\n", - tb_switch_clx_name(TB_CL1)); - } + else if (tb_enable_clx(sw)) + tb_sw_warn(sw, "failed to enable CL states\n"); if (tb_enable_tmu(sw)) tb_sw_warn(sw, "failed to enable TMU\n"); @@ -2022,20 +2029,13 @@ static int tb_suspend_noirq(struct tb *tb) static void tb_restore_children(struct tb_switch *sw) { struct tb_port *port; - int ret; /* No need to restore if the router is already unplugged */ if (sw->is_unplugged) return; - /* - * CL0s and CL1 are enabled and supported together. - * Silently ignore CLx re-enabling in case CLx is not supported.
- */ - ret = tb_switch_clx_enable(sw, TB_CL1); - if (ret && ret != -EOPNOTSUPP) - tb_sw_warn(sw, "failed to re-enable %s on upstream port\n", - tb_switch_clx_name(TB_CL1)); + if (tb_enable_clx(sw)) + tb_sw_warn(sw, "failed to re-enable CL states\n"); if (tb_enable_tmu(sw)) tb_sw_warn(sw, "failed to restore TMU configuration\n");

From patchwork Mon May 29 10:04:18 2023
From: Mika Westerberg
To: linux-usb@vger.kernel.org
Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever, Gil Fine, Mika Westerberg
Subject: [PATCH 13/20] thunderbolt: Switch CL states from enum to a bitmask
Date: Mon, 29 May 2023 13:04:18 +0300
Message-Id: <20230529100425.6125-14-mika.westerberg@linux.intel.com>

This is more natural and follows the hardware register layout better. It also makes it easier to see which CL states we enable (even though they should be enabled together). Rename 'clx_mask' to 'clx' everywhere as this is now always a bitmask.
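One payoff of the bitmask form is that dependencies between CL states reduce to mask arithmetic. A self-contained sketch mirroring the validate_mask() helper added in the diff below (lower states must be enabled before higher ones; BIT() comes from linux/bits.h):

#include <linux/bits.h>

#define TB_CL0S BIT(0)
#define TB_CL1  BIT(1)
#define TB_CL2  BIT(2)

static bool example_validate_mask(unsigned int clx)
{
	/* CL2 requires both CL0s and CL1, and CL1 requires CL0s */
	if (clx & TB_CL2)
		return (clx & (TB_CL0S | TB_CL1)) == (TB_CL0S | TB_CL1);
	if (clx & TB_CL1)
		return (clx & TB_CL0S) == TB_CL0S;
	return true;
}

/*
 * example_validate_mask(TB_CL0S | TB_CL1) is true;
 * example_validate_mask(TB_CL1) is false because CL0s is missing.
 */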
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 169 ++++++++++++++++++++--------------- drivers/thunderbolt/switch.c | 7 +- drivers/thunderbolt/tb.c | 2 +- drivers/thunderbolt/tb.h | 54 ++++------- 4 files changed, 113 insertions(+), 119 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index d1a502525425..4601607f1901 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -45,7 +45,7 @@ static int tb_port_pm_secondary_disable(struct tb_port *port) } /* Called for USB4 or Titan Ridge routers only */ -static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask) +static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx) { u32 val, mask = 0; bool ret; @@ -65,11 +65,11 @@ static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask) return false; } - if (clx_mask & TB_CL1) { - /* CL0s and CL1 are enabled and supported together */ - mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT; - } - if (clx_mask & TB_CL2) + if (clx & TB_CL0S) + mask |= LANE_ADP_CS_0_CL0S_SUPPORT; + if (clx & TB_CL1) + mask |= LANE_ADP_CS_0_CL1_SUPPORT; + if (clx & TB_CL2) mask |= LANE_ADP_CS_0_CL2_SUPPORT; ret = tb_port_read(port, &val, TB_CFG_PORT, @@ -80,16 +80,17 @@ static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask) return !!(val & mask); } -static int tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable) +static int tb_port_clx_set(struct tb_port *port, unsigned int clx, bool enable) { - u32 phy, mask; + u32 phy, mask = 0; int ret; - /* CL0s and CL1 are enabled and supported together */ - if (clx == TB_CL1) - mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; - else - /* For now we support only CL0s and CL1. Not CL2 */ + if (clx & TB_CL0S) + mask |= LANE_ADP_CS_1_CL0S_ENABLE; + if (clx & TB_CL1) + mask |= LANE_ADP_CS_1_CL1_ENABLE; + + if (!mask) return -EOPNOTSUPP; ret = tb_port_read(port, &phy, TB_CFG_PORT, @@ -106,12 +107,12 @@ static int tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable) port->cap_phy + LANE_ADP_CS_1, 1); } -static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx) +static int tb_port_clx_disable(struct tb_port *port, unsigned int clx) { return tb_port_clx_set(port, clx, false); } -static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx) +static int tb_port_clx_enable(struct tb_port *port, unsigned int clx) { return tb_port_clx_set(port, clx, true); } @@ -119,21 +120,23 @@ static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx) /** * tb_port_clx_is_enabled() - Is given CL state enabled * @port: USB4 port to check - * @clx_mask: Mask of CL states to check + * @clx: Mask of CL states to check * * Returns true if any of the given CL states is enabled for @port. 
*/ -bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask) +bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx) { u32 val, mask = 0; int ret; - if (!tb_port_clx_supported(port, clx_mask)) + if (!tb_port_clx_supported(port, clx)) return false; - if (clx_mask & TB_CL1) - mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE; - if (clx_mask & TB_CL2) + if (clx & TB_CL0S) + mask |= LANE_ADP_CS_1_CL0S_ENABLE; + if (clx & TB_CL1) + mask |= LANE_ADP_CS_1_CL1_ENABLE; + if (clx & TB_CL2) mask |= LANE_ADP_CS_1_CL2_ENABLE; ret = tb_port_read(port, &val, TB_CFG_PORT, @@ -205,6 +208,50 @@ static int tb_switch_mask_clx_objections(struct tb_switch *sw) sw->cap_lp + offset, ARRAY_SIZE(val)); } +/** + * tb_switch_clx_is_supported() - Is CLx supported on this type of router + * @sw: The router to check CLx support for + */ +bool tb_switch_clx_is_supported(const struct tb_switch *sw) +{ + if (!clx_enabled) + return false; + + if (sw->quirks & QUIRK_NO_CLX) + return false; + + /* + * CLx is not enabled and validated on Intel USB4 platforms + * before Alder Lake. + */ + if (tb_switch_is_tiger_lake(sw)) + return false; + + return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw); +} + +static bool validate_mask(unsigned int clx) +{ + /* Previous states need to be enabled */ + if (clx & TB_CL2) + return (clx & (TB_CL0S | TB_CL1)) == (TB_CL0S | TB_CL1); + if (clx & TB_CL1) + return (clx & TB_CL0S) == TB_CL0S; + return true; +} + +static const char *clx_name(unsigned int clx) +{ + if (clx & TB_CL2) + return "CL0s/CL1/CL2"; + if (clx & TB_CL1) + return "CL0s/CL1"; + if (clx & TB_CL0S) + return "CL0s"; + + return "unknown"; +} + /** * tb_switch_clx_enable() - Enable CLx on upstream port of specified router * @sw: Router to enable CLx for @@ -219,46 +266,32 @@ static int tb_switch_mask_clx_objections(struct tb_switch *sw) * * Return: Returns 0 on success or an error code on failure. */ -int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx) +int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) { - struct tb_switch *root_sw = sw->tb->root_switch; bool up_clx_support, down_clx_support; + struct tb_switch *parent_sw; struct tb_port *up, *down; int ret; - if (!clx_enabled) - return 0; + if (!validate_mask(clx)) + return -EINVAL; - /* - * CLx is not enabled and validated on Intel USB4 platforms before - * Alder Lake. - */ - if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw)) + parent_sw = tb_switch_parent(sw); + if (!parent_sw) return 0; - switch (clx) { - case TB_CL1: - /* CL0s and CL1 are enabled and supported together */ - break; - - default: - return -EOPNOTSUPP; - } - - if (!tb_switch_clx_is_supported(sw)) - return 0; - - /* - * Enable CLx for host router's downstream port as part of the - * downstream router enabling procedure. - */ - if (!tb_route(sw)) + if (!tb_switch_clx_is_supported(parent_sw) || + !tb_switch_clx_is_supported(sw)) return 0; /* Enable CLx only for first hop router (depth = 1) */ if (tb_route(tb_switch_parent(sw))) return 0; + /* CL2 is not yet supported */ + if (clx & TB_CL2) + return -EOPNOTSUPP; + ret = tb_switch_pm_secondary_resolve(sw); if (ret) return ret; @@ -269,9 +302,9 @@ int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx) up_clx_support = tb_port_clx_supported(up, clx); down_clx_support = tb_port_clx_supported(down, clx); - tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx), + tb_port_dbg(up, "%s %ssupported\n", clx_name(clx), up_clx_support ? 
"" : "not "); - tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx), + tb_port_dbg(down, "%s %ssupported\n", clx_name(clx), down_clx_support ? "" : "not "); if (!up_clx_support || !down_clx_support) @@ -294,52 +327,40 @@ int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx) return ret; } - sw->clx = clx; + sw->clx |= clx; - tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx)); + tb_port_dbg(up, "%s enabled\n", clx_name(clx)); return 0; } /** * tb_switch_clx_disable() - Disable CLx on upstream port of specified router * @sw: Router to disable CLx for - * @clx: The CLx state to disable + * + * Disables all CL states of the given router. Can be called on any + * router and if the states were not enabled already does nothing. * * Return: Returns 0 on success or an error code on failure. */ -int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx) +int tb_switch_clx_disable(struct tb_switch *sw) { + unsigned int clx = sw->clx; struct tb_port *up, *down; int ret; - if (!clx_enabled) - return 0; - - switch (clx) { - case TB_CL1: - /* CL0s and CL1 are enabled and supported together */ - break; - - default: - return -EOPNOTSUPP; - } - if (!tb_switch_clx_is_supported(sw)) return 0; - /* - * Disable CLx for host router's downstream port as part of the - * downstream router enabling procedure. - */ - if (!tb_route(sw)) - return 0; - /* Disable CLx only for first hop router (depth = 1) */ if (tb_route(tb_switch_parent(sw))) return 0; + if (!clx) + return 0; + up = tb_upstream_port(sw); down = tb_switch_downstream_port(sw); + ret = tb_port_clx_disable(up, clx); if (ret) return ret; @@ -348,8 +369,8 @@ int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx) if (ret) return ret; - sw->clx = TB_CLX_DISABLE; + sw->clx = 0; - tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx)); + tb_port_dbg(up, "%s disabled\n", clx_name(clx)); return 0; } diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 984b5536e143..f33a09d92c9b 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -3111,13 +3111,8 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime) /* * Actually only needed for Titan Ridge but for simplicity can be * done for USB4 device too as CLx is re-enabled at resume. - * CL0s and CL1 are enabled and supported together. */ - if (tb_switch_clx_is_enabled(sw, TB_CL1)) { - if (tb_switch_clx_disable(sw, TB_CL1)) - tb_sw_warn(sw, "failed to disable %s on upstream port\n", - tb_switch_clx_name(TB_CL1)); - } + tb_switch_clx_disable(sw); err = tb_plug_events_active(sw, false); if (err) diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index e4f1233eb958..2d360508aeeb 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -248,7 +248,7 @@ static int tb_enable_clx(struct tb_switch *sw) * CL0s and CL1 are enabled and supported together. * Silently ignore CLx enabling in case CLx is not supported. */ - ret = tb_switch_clx_enable(sw, TB_CL1); + ret = tb_switch_clx_enable(sw, TB_CL0S | TB_CL1); return ret == -EOPNOTSUPP ? 
0 : ret; } diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index d29bc7eab051..72e245639eb8 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -117,13 +117,6 @@ struct tb_switch_tmu { enum tb_switch_tmu_rate rate_request; }; -enum tb_clx { - TB_CLX_DISABLE, - /* CL0s and CL1 are enabled and supported together */ - TB_CL1 = BIT(0), - TB_CL2 = BIT(1), -}; - /** * struct tb_switch - a thunderbolt switch * @dev: Device for the switch @@ -174,7 +167,7 @@ enum tb_clx { * @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN * @max_pcie_credits: Router preferred number of buffers for PCIe * @max_dma_credits: Router preferred number of buffers for DMA/P2P - * @clx: CLx state on the upstream link of the router + * @clx: CLx states on the upstream link of the router * * When the switch is being added or removed to the domain (other * switches) you need to have domain lock held. @@ -225,7 +218,7 @@ struct tb_switch { unsigned int min_dp_main_credits; unsigned int max_pcie_credits; unsigned int max_dma_credits; - enum tb_clx clx; + unsigned int clx; }; /** @@ -455,6 +448,11 @@ struct tb_path { #define TB_WAKE_ON_PCIE BIT(4) #define TB_WAKE_ON_DP BIT(5) +/* CL states */ +#define TB_CL0S BIT(0) +#define TB_CL1 BIT(1) +#define TB_CL2 BIT(2) + /** * struct tb_cm_ops - Connection manager specific operations vector * @driver_ready: Called right after control channel is started. Used by @@ -1002,46 +1000,26 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) sw->tmu.unidirectional == sw->tmu.unidirectional_request; } -bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask); - -static inline const char *tb_switch_clx_name(enum tb_clx clx) -{ - switch (clx) { - /* CL0s and CL1 are enabled and supported together */ - case TB_CL1: - return "CL0s/CL1"; - default: - return "unknown"; - } -} +bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx); -int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx); -int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx); +bool tb_switch_clx_is_supported(const struct tb_switch *sw); +int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx); +int tb_switch_clx_disable(struct tb_switch *sw); /** * tb_switch_clx_is_enabled() - Checks if the CLx is enabled * @sw: Router to check for the CLx - * @clx: The CLx state to check for + * @clx: The CLx states to check for * * Checks if the specified CLx is enabled on the router upstream link. + * Returns true if any of the given states is enabled. + * * Not applicable for a host router. 
*/ static inline bool tb_switch_clx_is_enabled(const struct tb_switch *sw, - enum tb_clx clx) + unsigned int clx) { - return sw->clx == clx; -} - -/** - * tb_switch_clx_is_supported() - Is CLx supported on this type of router - * @sw: The router to check CLx support for - */ -static inline bool tb_switch_clx_is_supported(const struct tb_switch *sw) -{ - if (sw->quirks & QUIRK_NO_CLX) - return false; - - return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw); + return sw->clx & clx; } int tb_switch_pcie_l1_enable(struct tb_switch *sw); From patchwork Mon May 29 10:04:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258383 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AE932C77B7E for ; Mon, 29 May 2023 10:04:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231958AbjE2KEo (ORCPT ); Mon, 29 May 2023 06:04:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53968 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231936AbjE2KEi (ORCPT ); Mon, 29 May 2023 06:04:38 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8845BC7 for ; Mon, 29 May 2023 03:04:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354677; x=1716890677; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=tdwqur24UfFHd+KSL34nx+mjjTyCGH6ENa9Hg34uMhs=; b=a6EnofdDRjinMMTsDOQke/Ez1W9YDMVvWy2ofLb67TVk41EFRxuq7JT+ bCnl4Y07XMknVzaPCr48OxmBl5xXhw9J6h0YguYN6aYeGNK/0YoDIEHRV QYJwF5CgAD7yoNB+CGNgS9Rr0QhsRienBM4I0gAGFz4KF7Lr1bJQThByy E8nokIVMA7xu/BR3Wxsc5pCDG8CjXwBq6Q0HJa3TsS+3TalFNNqvyYMFj /WohZPFlqoNbA+3Mx2qCg2JH6mG1M7KsmoIcM/Y2OWAtmJ4BKkucIXvJD i59NRwPAPnXkMyEowK4AI5gTzCT5T7tuosfHXGSs0m8tnfhF8tz18I/um w==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684450" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684450" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:28 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518494" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518494" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:25 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 592E6C51; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 14/20] thunderbolt: Check for first depth router in tb.c Date: Mon, 29 May 2023 13:04:19 +0300 Message-Id: <20230529100425.6125-15-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Currently tb_switch_clx_enable() enables CL states only for the first depth router. 
This is something we may want to change in the future, and in addition it is not visible from the calling path at all. For this reason, do the check in tb.c so it is immediately visible that we only do this for the first depth router. Fix the kernel-docs accordingly. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 22 ++++++---------------- drivers/thunderbolt/tb.c | 10 ++++++++++ 2 files changed, 16 insertions(+), 16 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index 4601607f1901..b8cfbd643311 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -257,14 +257,12 @@ static const char *clx_name(unsigned int clx) * @sw: Router to enable CLx for * @clx: The CLx state to enable * - * Enable CLx state only for first hop router. That is the most common - * use-case, that is intended for better thermal management, and so helps - * to improve performance. CLx is enabled only if both sides of the link - * support CLx, and if both sides of the link are not configured as two - * single lane links and only if the link is not inter-domain link. The - * complete set of conditions is described in CM Guide 1.0 section 8.1. + * CLx is enabled only if both sides of the link support CLx, and if both sides + * of the link are not configured as two single lane links and only if the link + * is not inter-domain link. The complete set of conditions is described in CM + * Guide 1.0 section 8.1. * - * Return: Returns 0 on success or an error code on failure. + * Returns %0 on success or an error code on failure. */ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) { @@ -284,10 +282,6 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) !tb_switch_clx_is_supported(sw)) return 0; - /* Enable CLx only for first hop router (depth = 1) */ - if (tb_route(tb_switch_parent(sw))) - return 0; - /* CL2 is not yet supported */ if (clx & TB_CL2) return -EOPNOTSUPP; @@ -340,7 +334,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) * Disables all CL states of the given router. Can be called on any * router and if the states were not enabled already does nothing. * - * Return: Returns 0 on success or an error code on failure. + * Returns %0 on success or an error code on failure. */ int tb_switch_clx_disable(struct tb_switch *sw) { @@ -351,10 +345,6 @@ int tb_switch_clx_disable(struct tb_switch *sw) if (!tb_switch_clx_is_supported(sw)) return 0; - /* Disable CLx only for first hop router (depth = 1) */ - if (tb_route(tb_switch_parent(sw))) - return 0; - if (!clx) return 0; diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 2d360508aeeb..1d056ff6d77f 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -244,6 +244,16 @@ static int tb_enable_clx(struct tb_switch *sw) { int ret; + /* + * Currently only enable CLx for the first link. This is enough + * to allow the CPU to save energy at least on Intel hardware + * and makes it slightly simpler to implement. We may change + * this in the future to cover the whole topology if it turns + * out to be beneficial. + */ + if (sw->config.depth != 1) + return 0; + /* * CL0s and CL1 are enabled and supported together. * Silently ignore CLx enabling in case CLx is not supported.
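As background for the mask handling used throughout this series: the ordering rule that validate_mask() in clx.c enforces is that a higher CL state may only be requested together with all lower ones (CL1 needs CL0s, CL2 needs CL0s and CL1). The following is a minimal standalone C sketch of that rule, for illustration only; the TB_CL* values mirror the defines added to tb.h earlier in the series, and nothing here is driver code.

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the CL state defines added to drivers/thunderbolt/tb.h */
#define TB_CL0S (1U << 0)
#define TB_CL1  (1U << 1)
#define TB_CL2  (1U << 2)

/* Same rule as validate_mask(): lower states must accompany higher ones */
static bool mask_is_valid(unsigned int clx)
{
	if (clx & TB_CL2)
		return (clx & (TB_CL0S | TB_CL1)) == (TB_CL0S | TB_CL1);
	if (clx & TB_CL1)
		return (clx & TB_CL0S) == TB_CL0S;
	return true;
}

int main(void)
{
	unsigned int clx;

	for (clx = 0; clx <= (TB_CL0S | TB_CL1 | TB_CL2); clx++)
		printf("%#x: %s\n", clx, mask_is_valid(clx) ? "valid" : "invalid");
	return 0;
}

Of the eight possible masks only 0x0, TB_CL0S, TB_CL0S | TB_CL1 and TB_CL0S | TB_CL1 | TB_CL2 pass, which is why tb_enable_clx() requests TB_CL0S | TB_CL1 as a pair.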
From patchwork Mon May 29 10:04:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258386 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E4586C7EE2E for ; Mon, 29 May 2023 10:04:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231969AbjE2KEt (ORCPT ); Mon, 29 May 2023 06:04:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54010 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231927AbjE2KEk (ORCPT ); Mon, 29 May 2023 06:04:40 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BA169C7 for ; Mon, 29 May 2023 03:04:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354679; x=1716890679; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=xhQ0CLgyvioITgCAlqeRaHAtGfsHJS91Trn+TaWrv28=; b=f39bkz6I2GgXJVZDcpQ284jy1Zrg92AZm7klMcNGgbG2Z5vgcBTck/hO Q4p/ulu+yrs7rWMg97fD6zZwIANiI5E5eAHBJ2XuBI7ySTDoPBqJ0YV1r Mo/c6IlLqT+4KPug+WnzmTNpM4dQSS4DqKgXq95FfSssGr8G1doyBxan1 INElyaOPI1YsWkyavK5TuENQ7hNoUuX7/cRyMm8XOD4iCZaSkjB2I812u VRdlXcVLXMOja5o6eak5T43ngHe9QNlvoom7FMzFpCNFMxftG18VBRawC ZAVqezYcQMguXBc/M2Df9qQ5O/MC7pkPyloxDFuFGu7SLpc3v9BSAHGen Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684470" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684470" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518627" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518627" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:27 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 60BB8D79; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 15/20] thunderbolt: Do not call CLx functions from TMU code Date: Mon, 29 May 2023 13:04:20 +0300 Message-Id: <20230529100425.6125-16-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org There is really no need to call any of the CLx functions in the TMU code so remove all these checks. This makes the TMU enable/disable flows easier to follow as well. 
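For context on why the CLx gate was removable here: on routers without CLx support (Alpine Ridge and earlier) the TMU already runs in bidirectional HiFi mode by default, so the generic "already enabled/disabled" early returns in the TMU code make those paths no-ops there anyway. A toy model of that argument follows; the types and names are hypothetical simplifications for illustration, not the driver's API.

#include <stdio.h>

/* Hypothetical pared-down model of a router's TMU state */
enum tmu_rate { TMU_RATE_OFF, TMU_RATE_HIFI };

struct router {
	enum tmu_rate rate;	/* current TMU rate */
	enum tmu_rate request;	/* requested TMU rate */
};

static int tmu_enable(struct router *rt)
{
	/* Generic early return: nothing to do if already in the requested mode */
	if (rt->rate == rt->request)
		return 0;

	/* ...hardware programming would happen here... */
	rt->rate = rt->request;
	printf("TMU reprogrammed\n");
	return 0;
}

int main(void)
{
	/* Non-CLx router: firmware default is bidirectional HiFi */
	struct router alpine_ridge = { TMU_RATE_HIFI, TMU_RATE_HIFI };

	/* Returns without touching the hardware, no CLx-support check needed */
	return tmu_enable(&alpine_ridge);
}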
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tmu.c | 23 ----------------------- 1 file changed, 23 deletions(-) diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index 6988704c845c..7d06bacf24ff 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -383,14 +383,6 @@ int tb_switch_tmu_post_time(struct tb_switch *sw) */ int tb_switch_tmu_disable(struct tb_switch *sw) { - /* - * No need to disable TMU on devices that don't support CLx since - * on these devices e.g. Alpine Ridge and earlier, the TMU mode - * HiFi bi-directional is enabled by default and we don't change it. - */ - if (!tb_switch_clx_is_supported(sw)) - return 0; - /* Already disabled? */ if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) return 0; @@ -648,25 +640,10 @@ int tb_switch_tmu_enable(struct tb_switch *sw) bool unidirectional = sw->tmu.unidirectional_request; int ret; - /* - * No need to enable TMU on devices that don't support CLx since on - * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi - * bi-directional is enabled by default. - */ - if (!tb_switch_clx_is_supported(sw)) - return 0; - if (tb_switch_tmu_is_enabled(sw)) return 0; if (tb_switch_is_titan_ridge(sw) && unidirectional) { - /* - * Titan Ridge supports CL0s and CL1 only. CL0s and CL1 are - * enabled and supported together. - */ - if (!tb_switch_clx_is_enabled(sw, TB_CL1)) - return -EOPNOTSUPP; - ret = tb_switch_tmu_disable_objections(sw); if (ret) return ret; From patchwork Mon May 29 10:04:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258384 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DFA13C7EE29 for ; Mon, 29 May 2023 10:04:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231936AbjE2KEp (ORCPT ); Mon, 29 May 2023 06:04:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53982 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231938AbjE2KEj (ORCPT ); Mon, 29 May 2023 06:04:39 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E58CCC2 for ; Mon, 29 May 2023 03:04:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354678; x=1716890678; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Hp+aoosGxfnW+/8U6JARpfP8sJ7cuKSNBe//Y2VD7E8=; b=CWva9aold5jYveb3MGNzwIjgls6TH2C62kcgx0Ho7YqRmEsGpHefqIAk pvxWE3BpN/jyI8kZJnL8H/kVXKJxNauXH48c+e5clrWuNnQ1pTc+EmbZ+ HT9YFfJsdR8U23QAP2b/By9lkEzcMe9EZSa3CgRAX1DkP2SxA2MC5yyDx s03GWVVfXhbo7P/o2AzrDUFTCtOF7vX/x2z9uNUBfR2f3PQb6kVaNKlxN 3jiP+Ox2WGDZ7eGsiDWIAFfc207qP6fqfTtVJC9S/XWz5cAKZ6bbxaDD9 L8NyTNw/WzE4K0IEYvvwWYQIKHJXJRwRIgr1DTrksDPppyhVEunvzf2Su A==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684467" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684467" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:30 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518620" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518620" Received: from 
black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:27 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 68249FE6; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 16/20] thunderbolt: Prefix TMU post time log message with "TMU: " Date: Mon, 29 May 2023 13:04:21 +0300 Message-Id: <20230529100425.6125-17-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Following what we do with other messages in this file. No functional changes. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c index 7d06bacf24ff..c926fb71c43d 100644 --- a/drivers/thunderbolt/tmu.c +++ b/drivers/thunderbolt/tmu.c @@ -308,7 +308,7 @@ int tb_switch_tmu_post_time(struct tb_switch *sw) return ret; for (i = 0; i < ARRAY_SIZE(gm_local_time); i++) - tb_sw_dbg(root_switch, "local_time[%d]=0x%08x\n", i, + tb_sw_dbg(root_switch, "TMU: local_time[%d]=0x%08x\n", i, gm_local_time[i]); /* Convert to nanoseconds (drop fractional part) */ From patchwork Mon May 29 10:04:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258389 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17C36C7EE32 for ; Mon, 29 May 2023 10:04:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232002AbjE2KEw (ORCPT ); Mon, 29 May 2023 06:04:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54046 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231952AbjE2KEm (ORCPT ); Mon, 29 May 2023 06:04:42 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D8AB4DB for ; Mon, 29 May 2023 03:04:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354680; x=1716890680; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=o6fVvWAIOSTWk0LSvbD6edTBjS2MdOYtX+ksmcZ51Gk=; b=HrkT98Z+TvNYxftQ3zDiM/iGuSKf2ZfbRj0wr3heLj/ukPKEN/ZRsOAG uliEUlobg03pdZCjvWqPvaLY0sFT3c0OInedKTEz9RV9/QxQYoafGTr69 zIgJUPFuDqpEFHU3tzteuKeEWbSffNeWFqQ0dhZxWv/bn9esvKA+CY9Am GPsSDXdF1rnEPmBcU07ful9DVMTJSTZt2tk/A4NyGGyaRqDHeihYvJlvn 3cO0mubhwL6lOw/m2D94pNUK0brVI4voM35/7KxpjYj8UL7YMe5FYiUF/ HiaSEpF6tfexRV3HhtIoBuI1aBhrvQTlDwanc18WCxb/2hu+bGdP4QyZl g==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684480" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684480" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518650" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518650" 
Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:28 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 6F9311040; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 17/20] thunderbolt: Prefix CL state related log messages with "CLx: " Date: Mon, 29 May 2023 13:04:22 +0300 Message-Id: <20230529100425.6125-18-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org This makes it easier to spot from the logs and follows what we do with the TMU code already. We also log enabling/disabling CL states using the tb_sw_dbg() instead of tb_port_dbg(). Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index b8cfbd643311..5e745386c413 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -296,9 +296,9 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) up_clx_support = tb_port_clx_supported(up, clx); down_clx_support = tb_port_clx_supported(down, clx); - tb_port_dbg(up, "%s %ssupported\n", clx_name(clx), + tb_port_dbg(up, "CLx: %s %ssupported\n", clx_name(clx), up_clx_support ? "" : "not "); - tb_port_dbg(down, "%s %ssupported\n", clx_name(clx), + tb_port_dbg(down, "CLx: %s %ssupported\n", clx_name(clx), down_clx_support ? "" : "not "); if (!up_clx_support || !down_clx_support) @@ -323,7 +323,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) sw->clx |= clx; - tb_port_dbg(up, "%s enabled\n", clx_name(clx)); + tb_sw_dbg(sw, "CLx: %s enabled\n", clx_name(clx)); return 0; } @@ -361,6 +361,6 @@ int tb_switch_clx_disable(struct tb_switch *sw) sw->clx = 0; - tb_port_dbg(up, "%s disabled\n", clx_name(clx)); + tb_sw_dbg(sw, "CLx: %s disabled\n", clx_name(clx)); return 0; } From patchwork Mon May 29 10:04:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258391 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D301C7EE33 for ; Mon, 29 May 2023 10:04:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231944AbjE2KEz (ORCPT ); Mon, 29 May 2023 06:04:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54060 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231955AbjE2KEm (ORCPT ); Mon, 29 May 2023 06:04:42 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F0C81DC for ; Mon, 29 May 2023 03:04:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354680; x=1716890680; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=jg2dp+ihyvbHVduyz8Ezy3Sgn1CKDV3r6B3tpgCPtQU=; 
b=WFt1Duq1lnJg1bSQAlP4qnQok5Z1sWiXC0LHSJvtN3BQO9L5FEWNPSM0 eL3citpkZ1gepTdbiYLEyP+Tg2b5AvsqFngjmujop/1mB5TQsBcpLVzcZ rXFiSqq5Rcbux3mT4HF0SBKssz3Fi0Pd6i4oPYD03w7ntyxxlaBPNtYjz BVH75MMMhTcdePP6g8pwdH7u8PtJ+xR9auRiJ4RZla9tU1N+/GHtZs8/v T3U7YNbD70U0++H3vW7wFpjVUDFRaUvehfVXbywxUiDNwXezgpUqRtE9S bdPq3vT3NLBbjuClLroI8OIm2KtGYRgaHMakcSZEbleApmjwQZxC9PsgL g==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684483" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684483" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518665" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518665" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:28 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 7D1AD11A3; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 18/20] thunderbolt: Initialize CL states from the hardware Date: Mon, 29 May 2023 13:04:23 +0300 Message-Id: <20230529100425.6125-19-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org In case the boot firmware enabled any of them, read the currently configured CL states and update the router structure accordingly. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 100 +++++++++++++++++++++++++---------- drivers/thunderbolt/switch.c | 4 ++ drivers/thunderbolt/tb.h | 1 + 3 files changed, 78 insertions(+), 27 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index 5e745386c413..960409df4405 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -15,6 +15,21 @@ static bool clx_enabled = true; module_param_named(clx, clx_enabled, bool, 0444); MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)"); +static const char *clx_name(unsigned int clx) +{ + if (!clx) + return "disabled"; + + if (clx & TB_CL2) + return "CL0s/CL1/CL2"; + if (clx & TB_CL1) + return "CL0s/CL1"; + if (clx & TB_CL0S) + return "CL0s"; + + return "unknown"; +} + static int tb_port_pm_secondary_set(struct tb_port *port, bool secondary) { u32 phy; @@ -117,6 +132,29 @@ static int tb_port_clx_enable(struct tb_port *port, unsigned int clx) return tb_port_clx_set(port, clx, true); } +static int tb_port_clx(struct tb_port *port) +{ + u32 val; + int ret; + + if (!tb_port_clx_supported(port, TB_CL0S | TB_CL1 | TB_CL2)) + return 0; + + ret = tb_port_read(port, &val, TB_CFG_PORT, + port->cap_phy + LANE_ADP_CS_1, 1); + if (ret) + return ret; + + if (val & LANE_ADP_CS_1_CL0S_ENABLE) + ret |= TB_CL0S; + if (val & LANE_ADP_CS_1_CL1_ENABLE) + ret |= TB_CL1; + if (val & LANE_ADP_CS_1_CL2_ENABLE) + ret |= TB_CL2; + + return ret; +} + /** * tb_port_clx_is_enabled() - Is given CL state enabled * @port: USB4 port to check @@ -126,25 +164,45 @@ static int tb_port_clx_enable(struct tb_port *port, unsigned int clx) */ bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx) { - u32 val, mask = 0; - int ret; + return !!(tb_port_clx(port) & clx); +} - if 
(!tb_port_clx_supported(port, clx)) - return false; +/** + * tb_switch_clx_init() - Initialize router CL states + * @sw: Router + * + * Can be called for any router. Initializes the current CL state by + * reading it from the hardware. + * + * Returns %0 in case of success and negative errno in case of failure. + */ +int tb_switch_clx_init(struct tb_switch *sw) +{ + struct tb_port *up, *down; + unsigned int clx, tmp; - if (clx & TB_CL0S) - mask |= LANE_ADP_CS_1_CL0S_ENABLE; - if (clx & TB_CL1) - mask |= LANE_ADP_CS_1_CL1_ENABLE; - if (clx & TB_CL2) - mask |= LANE_ADP_CS_1_CL2_ENABLE; + if (tb_switch_is_icm(sw)) + return 0; - ret = tb_port_read(port, &val, TB_CFG_PORT, - port->cap_phy + LANE_ADP_CS_1, 1); - if (ret) - return false; + if (!tb_route(sw)) + return 0; - return !!(val & mask); + if (!tb_switch_clx_is_supported(sw)) + return 0; + + up = tb_upstream_port(sw); + down = tb_switch_downstream_port(sw); + + clx = tb_port_clx(up); + tmp = tb_port_clx(down); + if (clx != tmp) + tb_sw_warn(sw, "CLx: inconsistent configuration %#x != %#x\n", + clx, tmp); + + tb_sw_dbg(sw, "CLx: current mode: %s\n", clx_name(clx)); + + sw->clx = clx; + return 0; } static int tb_switch_pm_secondary_resolve(struct tb_switch *sw) @@ -240,18 +298,6 @@ static bool validate_mask(unsigned int clx) return true; } -static const char *clx_name(unsigned int clx) -{ - if (clx & TB_CL2) - return "CL0s/CL1/CL2"; - if (clx & TB_CL1) - return "CL0s/CL1"; - if (clx & TB_CL0S) - return "CL0s"; - - return "unknown"; -} - /** * tb_switch_clx_enable() - Enable CLx on upstream port of specified router * @sw: Router to enable CLx for diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index f33a09d92c9b..0c11caec7e8e 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -2859,6 +2859,10 @@ int tb_switch_add(struct tb_switch *sw) if (ret) return ret; + ret = tb_switch_clx_init(sw); + if (ret) + return ret; + ret = tb_switch_tmu_init(sw); if (ret) return ret; diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 72e245639eb8..58df106aaa5e 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -1002,6 +1002,7 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw) bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx); +int tb_switch_clx_init(struct tb_switch *sw); bool tb_switch_clx_is_supported(const struct tb_switch *sw); int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx); int tb_switch_clx_disable(struct tb_switch *sw); From patchwork Mon May 29 10:04:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258387 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06E23C7EE31 for ; Mon, 29 May 2023 10:04:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231978AbjE2KEu (ORCPT ); Mon, 29 May 2023 06:04:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54014 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231945AbjE2KEl (ORCPT ); Mon, 29 May 2023 06:04:41 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 08863CD for ; Mon, 29 May 2023 03:04:40 -0700 (PDT) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354679; x=1716890679; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=XTKX5qCmJMJCmaW84u5HqRgEOZTjbzdZ+twNLR/kNKE=; b=MnM0xTVCAOgpBG6zUJi+MPEpoCSpUMhpCuc2iXa5BSLXF2UvUd4afFC6 5ztE5I7zhvzLW0kgjiIKOEgDf3qWsvcX67/MTUc2o+4E9pOYiQ17qKQbm tYOXereu86g+Xn/iL4tVYpV6DolpjTcscYuCLbq2dVJgtkzsDs5DKFn2+ 8FIpgQBwjTvL2YUGVdGd523PzuMskoUGgUSBbSSTMF43BqLXyaTv0iUbb 5mQNrIeqNuDtUf50MZyMfHq0Q9XVgX9fBk2+TbmohILczmN9omG5fV1XT zOoF+pUj2fkNa0TXQF6HOYF87J6Xx9oM/NBXBbF4B8lrrysx8BgKGZn08 g==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684474" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684474" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518636" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518636" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:28 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 82C487A6; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 19/20] thunderbolt: Make tb_switch_clx_disable() return CL states that were enabled Date: Mon, 29 May 2023 13:04:24 +0300 Message-Id: <20230529100425.6125-20-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org This allows us to disable all CL states temporarily when running lane margining, and then restore the previously enabled states. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 8 ++++++-- drivers/thunderbolt/debugfs.c | 35 ++++++++++++++++++++++++----------- 2 files changed, 30 insertions(+), 13 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index 960409df4405..4f0cfbb24dd9 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -317,6 +317,9 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) struct tb_port *up, *down; int ret; + if (!clx) + return 0; + if (!validate_mask(clx)) return -EINVAL; @@ -380,7 +383,8 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) * Disables all CL states of the given router. Can be called on any * router and if the states were not enabled already does nothing. * - * Returns %0 on success or an error code on failure. + * Returns the CL states that were disabled or negative errno in case of + * failure.
*/ int tb_switch_clx_disable(struct tb_switch *sw) { @@ -408,5 +412,5 @@ int tb_switch_clx_disable(struct tb_switch *sw) sw->clx = 0; tb_sw_dbg(sw, "CLx: %s disabled\n", clx_name(clx)); - return 0; + return clx; } diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c index e376ad25bf60..40b59e662ee3 100644 --- a/drivers/thunderbolt/debugfs.c +++ b/drivers/thunderbolt/debugfs.c @@ -553,8 +553,9 @@ static int margining_run_write(void *data, u64 val) struct usb4_port *usb4 = port->usb4; struct tb_switch *sw = port->sw; struct tb_margining *margining; + struct tb_switch *down_sw; struct tb *tb = sw->tb; - int ret; + int ret, clx; if (val != 1) return -EINVAL; @@ -566,15 +567,24 @@ static int margining_run_write(void *data, u64 val) goto out_rpm_put; } - /* - * CL states may interfere with lane margining so inform the user know - * and bail out. - */ - if (tb_port_clx_is_enabled(port, TB_CL1 | TB_CL2)) { - tb_port_warn(port, - "CL states are enabled, Disable them with clx=0 and re-connect\n"); - ret = -EINVAL; - goto out_unlock; + if (tb_is_upstream_port(port)) + down_sw = sw; + else if (port->remote) + down_sw = port->remote->sw; + else + down_sw = NULL; + + if (down_sw) { + /* + * CL states may interfere with lane margining so + * disable them temporarily now. + */ + ret = tb_switch_clx_disable(down_sw); + if (ret < 0) { + tb_sw_warn(down_sw, "failed to disable CL states\n"); + goto out_unlock; + } + clx = ret; } margining = usb4->margining; @@ -586,7 +596,7 @@ static int margining_run_write(void *data, u64 val) margining->right_high, USB4_MARGIN_SW_COUNTER_CLEAR); if (ret) - goto out_unlock; + goto out_clx; ret = usb4_port_sw_margin_errors(port, &margining->results[0]); } else { @@ -600,6 +610,9 @@ static int margining_run_write(void *data, u64 val) margining->right_high, margining->results); } +out_clx: + if (down_sw) + tb_switch_clx_enable(down_sw, clx); out_unlock: mutex_unlock(&tb->lock); out_rpm_put: From patchwork Mon May 29 10:04:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 13258390 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28F5FC7EE2F for ; Mon, 29 May 2023 10:04:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231987AbjE2KEv (ORCPT ); Mon, 29 May 2023 06:04:51 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53996 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231948AbjE2KEl (ORCPT ); Mon, 29 May 2023 06:04:41 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 48CECD8 for ; Mon, 29 May 2023 03:04:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1685354680; x=1716890680; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=irYC6Sj79QrXvnGR34ko/0r4dO3FcJ77FQfYDtxPzxg=; b=XuoJVNiCQrDf0APOJ40PWEWzLBkV7IYplApDwjKbL/NWgHFmHiRKxFZm vePgUeasi3WMEeoazk9J+Ji6Ay+zmnnvaXJsMFOeCebiaQJt2L95axb+l btTfdGoGhYhF4XTeta6LD1A9eXIk1BlExWnl2n4UpbJ/EyZ3PedkM1jlm l/z+GFcg/yuNCcV/nTLCY13Yvzn9WNBx6InlGelb7UTWsehBfmtztgT6v 8T9LKyGuRzb817wS1lLpNjxbI9wOmM8M06HNfbzFJbBHY2sI7d/y24gK7 
ExoHc7oeB1QORn1ugJLJRRmhTgsifjuyfYDzr2y/e5tRQxPGaKuRM10zN A==; X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="354684477" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="354684477" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 May 2023 03:04:31 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10724"; a="683518646" X-IronPort-AV: E=Sophos;i="6.00,201,1681196400"; d="scan'208";a="683518646" Received: from black.fi.intel.com ([10.237.72.28]) by orsmga006.jf.intel.com with ESMTP; 29 May 2023 03:04:28 -0700 Received: by black.fi.intel.com (Postfix, from userid 1001) id 8C7DF11DA; Mon, 29 May 2023 13:04:26 +0300 (EEST) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Yehezkel Bernat , Michael Jamet , Lukas Wunner , Andreas Noever , Gil Fine , Mika Westerberg Subject: [PATCH 20/20] thunderbolt: Disable CL states when a DMA tunnel is established Date: Mon, 29 May 2023 13:04:25 +0300 Message-Id: <20230529100425.6125-21-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230529100425.6125-1-mika.westerberg@linux.intel.com> References: <20230529100425.6125-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Tunnels between hosts should not have CL states enabled because otherwise they might enter a low power state without the other end noticing, which causes packets to be lost. For this reason, disable all CL states upon first DMA tunnel creation. Once the last DMA tunnel is torn down, we try to re-enable them. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/clx.c | 2 +- drivers/thunderbolt/tb.c | 62 +++++++++++++++++++++++++++++++++++---- 2 files changed, 58 insertions(+), 6 deletions(-) diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c index 4f0cfbb24dd9..604cceb23659 100644 --- a/drivers/thunderbolt/clx.c +++ b/drivers/thunderbolt/clx.c @@ -317,7 +317,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx) struct tb_port *up, *down; int ret; - if (!clx) + if (!clx || sw->clx == clx) return 0; if (!validate_mask(clx)) return -EINVAL; diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 1d056ff6d77f..aa6e11589c28 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -240,8 +240,11 @@ static void tb_discover_dp_resources(struct tb *tb) } } +/* Enables CL states up to host router */ static int tb_enable_clx(struct tb_switch *sw) { + struct tb_cm *tcm = tb_priv(sw->tb); + const struct tb_tunnel *tunnel; int ret; /* @@ -251,9 +254,26 @@ static int tb_enable_clx(struct tb_switch *sw) * this in the future to cover the whole topology if it turns * out to be beneficial. */ + while (sw && sw->config.depth > 1) + sw = tb_switch_parent(sw); + + if (!sw) + return 0; + if (sw->config.depth != 1) return 0; + /* + * If we are re-enabling then check if there is an active DMA + * tunnel and in that case bail out. + */ + list_for_each_entry(tunnel, &tcm->tunnel_list, list) { + if (tb_tunnel_is_dma(tunnel)) { + if (tb_tunnel_port_on_path(tunnel, tb_upstream_port(sw))) + return 0; + } + } + /* * CL0s and CL1 are enabled and supported together. * Silently ignore CLx enabling in case CLx is not supported. @@ -262,6 +282,16 @@ static int tb_enable_clx(struct tb_switch *sw) return ret == -EOPNOTSUPP ?
0 : ret; } +/* Disables CL states up to the host router */ +static void tb_disable_clx(struct tb_switch *sw) +{ + do { + if (tb_switch_clx_disable(sw) < 0) + tb_sw_warn(sw, "failed to disable CL states\n"); + sw = tb_switch_parent(sw); + } while (sw); +} + static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data) { struct tb_switch *sw; @@ -1470,30 +1500,45 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, struct tb_port *nhi_port, *dst_port; struct tb_tunnel *tunnel; struct tb_switch *sw; + int ret; sw = tb_to_switch(xd->dev.parent); dst_port = tb_port_at(xd->route, sw); nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI); mutex_lock(&tb->lock); + + /* + * When tunneling DMA paths the link should not enter CL states + * so disable them now. + */ + tb_disable_clx(sw); + tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, transmit_path, transmit_ring, receive_path, receive_ring); if (!tunnel) { - mutex_unlock(&tb->lock); - return -ENOMEM; + ret = -ENOMEM; + goto err_clx; } if (tb_tunnel_activate(tunnel)) { tb_port_info(nhi_port, "DMA tunnel activation failed, aborting\n"); - tb_tunnel_free(tunnel); - mutex_unlock(&tb->lock); - return -EIO; + ret = -EIO; + goto err_free; } list_add_tail(&tunnel->list, &tcm->tunnel_list); mutex_unlock(&tb->lock); return 0; + +err_free: + tb_tunnel_free(tunnel); +err_clx: + tb_enable_clx(sw); + mutex_unlock(&tb->lock); + + return ret; } static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, @@ -1519,6 +1564,13 @@ static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd, receive_path, receive_ring)) tb_deactivate_and_free_tunnel(tunnel); } + + /* + * Try to re-enable CL states now, it is OK if this fails + * because we may still have another DMA tunnel active through + * the same host router USB4 downstream port. + */ + tb_enable_clx(sw); } static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,