From patchwork Mon Sep 5 12:02:00 2022
From: Mika Westerberg
To: Szuying Chen
Cc: gregkh@linuxfoundation.org, mario.limonciello@amd.com,
 andreas.noever@gmail.com, michael.jamet@intel.com, YehezkelShB@gmail.com,
 Yd_Tseng@asmedia.com.tw, Richard_Hsu@asmedia.com.tw, Lukas Wunner,
 Mika Westerberg, linux-usb@vger.kernel.org
Subject: [PATCH v9 1/6] thunderbolt: Allow NVM upgrade of USB4 host routers
Date: Mon, 5 Sep 2022 15:02:00 +0300
Message-Id: <20220905120205.23025-2-mika.westerberg@linux.intel.com>

From: Szuying Chen

Intel pre-USB4 host routers require the firmware connection manager to be
active in order to perform NVM firmware upgrade, so the upgrade has been
disabled whenever the software connection manager is active. However,
this is not necessary for USB4 host routers: there the upgrade is part of
the router operations that a router implements if it wants to support it.
Signed-off-by: Szuying Chen
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/tb.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 4a659cea26f7..462845804427 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -1442,8 +1442,11 @@ static int tb_start(struct tb *tb)
 	 * ICM firmware upgrade needs running firmware and in native
 	 * mode that is not available so disable firmware upgrade of the
 	 * root switch.
+	 *
+	 * However, USB4 routers support NVM firmware upgrade if they
+	 * implement the necessary router operations.
 	 */
-	tb->root_switch->no_nvm_upgrade = true;
+	tb->root_switch->no_nvm_upgrade = !tb_switch_is_usb4(tb->root_switch);
 
 	/* All USB4 routers support runtime PM */
 	tb->root_switch->rpm = tb_switch_is_usb4(tb->root_switch);
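The effect of the hunk above: under the software connection manager, NVM
upgrade of the root switch is now blocked only on pre-USB4 (ICM-era)
hardware. A minimal sketch of the resulting policy; the helper below is
illustrative only and is not part of the patch:

/* Illustration only: how the root switch NVM upgrade gate now behaves. */
static bool root_switch_nvm_upgradeable(const struct tb_switch *root)
{
	/*
	 * Pre-USB4 Intel routers need the ICM firmware connection
	 * manager for the upgrade, which is not running here, so the
	 * upgrade stays disabled. USB4 routers use router operations
	 * instead, so the upgrade can be allowed.
	 */
	return tb_switch_is_usb4(root);
}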
From patchwork Mon Sep 5 12:02:01 2022
From: Mika Westerberg
Subject: [PATCH v9 2/6] thunderbolt: Extend NVM version fields to 32-bits
Date: Mon, 5 Sep 2022 15:02:01 +0300
Message-Id: <20220905120205.23025-3-mika.westerberg@linux.intel.com>

From: Szuying Chen

In order to support non-Intel NVM image formats, extend the NVM major and
minor version fields to 32 bits to better accommodate different
versioning schemes. No functional impact.

Signed-off-by: Szuying Chen
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/retimer.c | 4 ++--
 drivers/thunderbolt/switch.c  | 4 ++--
 drivers/thunderbolt/tb.h      | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
index 8c29bd556ae0..dc0fc90e81cf 100644
--- a/drivers/thunderbolt/retimer.c
+++ b/drivers/thunderbolt/retimer.c
@@ -71,8 +71,8 @@ static int tb_retimer_nvm_add(struct tb_retimer *rt)
 	if (ret)
 		goto err_nvm;
 
-	nvm->major = val >> 16;
-	nvm->minor = val >> 8;
+	nvm->major = (val >> 16) & 0xff;
+	nvm->minor = (val >> 8) & 0xff;
 
 	ret = usb4_port_retimer_nvm_read(rt->port, rt->index, NVM_FLASH_SIZE,
 					 &val, sizeof(val));
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index bd815e2cc6ec..ac8d7145e9ff 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -427,8 +427,8 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
 	if (ret)
 		goto err_nvm;
 
-	nvm->major = val >> 16;
-	nvm->minor = val >> 8;
+	nvm->major = (val >> 16) & 0xff;
+	nvm->minor = (val >> 8) & 0xff;
 
 	ret = tb_nvm_add_active(nvm, nvm_size, tb_switch_nvm_read);
 	if (ret)
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 6cca6b759c5f..610ed62b55f2 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -48,8 +48,8 @@
  */
 struct tb_nvm {
 	struct device *dev;
-	u8 major;
-	u8 minor;
+	u32 major;
+	u32 minor;
 	int id;
 	struct nvmem_device *active;
 	struct nvmem_device *non_active;
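For reference, the Intel NVM version word packs the version as bytes,
which is why the masking above matters once the fields become 32-bit.
A standalone sketch; the raw register value is made up for illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/*
	 * Made-up raw NVM version word. The Intel format keeps the
	 * major version in bits 23..16 and the minor in bits 15..8.
	 */
	uint32_t val = 0x00120300;
	uint32_t major = (val >> 16) & 0xff;	/* 0x12 */
	uint32_t minor = (val >> 8) & 0xff;	/* 0x03 */

	printf("NVM version %x.%x\n", major, minor);	/* 12.3 */
	return 0;
}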
From patchwork Mon Sep 5 12:02:02 2022
From: Mika Westerberg
Subject: [PATCH v9 3/6] thunderbolt: Rename and make nvm_read() available for other files
Date: Mon, 5 Sep 2022 15:02:02 +0300
Message-Id: <20220905120205.23025-4-mika.westerberg@linux.intel.com>

From: Szuying Chen

In order to support non-Intel NVM formats, the vendor specific NVM
validation code that will live in nvm.c needs to be able to read various
parts of the NVM. Make the function available outside of switch.c and
rename it accordingly.

Signed-off-by: Szuying Chen
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/switch.c | 44 +++++++++++++++++++++---------------
 drivers/thunderbolt/tb.h     |  2 ++
 2 files changed, 28 insertions(+), 18 deletions(-)

diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index ac8d7145e9ff..a073a702c381 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -300,14 +300,6 @@ static inline bool nvm_upgradeable(struct tb_switch *sw)
 	return nvm_readable(sw);
 }
 
-static inline int nvm_read(struct tb_switch *sw, unsigned int address,
-			   void *buf, size_t size)
-{
-	if (tb_switch_is_usb4(sw))
-		return usb4_switch_nvm_read(sw, address, buf, size);
-	return dma_port_flash_read(sw->dma_port, address, buf, size);
-}
-
 static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
 {
 	int ret;
@@ -335,8 +327,26 @@ static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
 	return ret;
 }
 
-static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
-			      size_t bytes)
+/**
+ * tb_switch_nvm_read() - Read router NVM
+ * @sw: Router whose NVM to read
+ * @address: Start address on the NVM
+ * @buf: Buffer where the read data is copied
+ * @size: Size of the buffer in bytes
+ *
+ * Reads from router NVM and returns the requested data in @buf. Locking
+ * is up to the caller. Returns %0 in success and negative errno in case
+ * of failure.
+ */
+int tb_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
+		       size_t size)
+{
+	if (tb_switch_is_usb4(sw))
+		return usb4_switch_nvm_read(sw, address, buf, size);
+	return dma_port_flash_read(sw->dma_port, address, buf, size);
+}
+
+static int nvm_read(void *priv, unsigned int offset, void *val, size_t bytes)
 {
 	struct tb_nvm *nvm = priv;
 	struct tb_switch *sw = tb_to_switch(nvm->dev);
@@ -349,7 +359,7 @@ static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
 		goto out;
 	}
 
-	ret = nvm_read(sw, offset, val, bytes);
+	ret = tb_switch_nvm_read(sw, offset, val, bytes);
 	mutex_unlock(&sw->tb->lock);
 
 out:
@@ -359,8 +369,7 @@ static int tb_switch_nvm_read(void *priv, unsigned int offset, void *val,
 	return ret;
 }
 
-static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
-			       size_t bytes)
+static int nvm_write(void *priv, unsigned int offset, void *val, size_t bytes)
 {
 	struct tb_nvm *nvm = priv;
 	struct tb_switch *sw = tb_to_switch(nvm->dev);
@@ -415,7 +424,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
 	if (!sw->safe_mode) {
 		u32 nvm_size, hdr_size;
 
-		ret = nvm_read(sw, NVM_FLASH_SIZE, &val, sizeof(val));
+		ret = tb_switch_nvm_read(sw, NVM_FLASH_SIZE, &val, sizeof(val));
 		if (ret)
 			goto err_nvm;
 
@@ -423,21 +432,20 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
 		nvm_size = (SZ_1M << (val & 7)) / 8;
 		nvm_size = (nvm_size - hdr_size) / 2;
 
-		ret = nvm_read(sw, NVM_VERSION, &val, sizeof(val));
+		ret = tb_switch_nvm_read(sw, NVM_VERSION, &val, sizeof(val));
 		if (ret)
 			goto err_nvm;
 
 		nvm->major = (val >> 16) & 0xff;
 		nvm->minor = (val >> 8) & 0xff;
 
-		ret = tb_nvm_add_active(nvm, nvm_size, tb_switch_nvm_read);
+		ret = tb_nvm_add_active(nvm, nvm_size, nvm_read);
 		if (ret)
 			goto err_nvm;
 	}
 
 	if (!sw->no_nvm_upgrade) {
-		ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE,
-					    tb_switch_nvm_write);
+		ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, nvm_write);
 		if (ret)
 			goto err_nvm;
 	}
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 610ed62b55f2..f9b241ee109f 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -759,6 +759,8 @@ int tb_nvm_write_data(unsigned int address, const void *buf, size_t size,
 		      unsigned int retries, write_block_fn write_next_block,
 		      void *write_block_data);
 
+int tb_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
+		       size_t size);
 struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
 				  u64 route);
 struct tb_switch *tb_switch_alloc_safe_mode(struct tb *tb,
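With the rename, code outside switch.c (the vendor specific handlers that
move into nvm.c later in this series) can read raw NVM words without
going through the NVMem callbacks. A sketch of a typical caller, using
the NVM_FLASH_SIZE offset that exists in tb.h at this point in the
series:

/* Sketch: how a caller outside switch.c can now read an NVM word. */
static int read_flash_size_word(struct tb_switch *sw, u32 *val)
{
	/*
	 * Routes to usb4_switch_nvm_read() on USB4 routers and to
	 * dma_port_flash_read() on older hardware.
	 */
	return tb_switch_nvm_read(sw, NVM_FLASH_SIZE, val, sizeof(*val));
}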
From patchwork Mon Sep 5 12:02:03 2022
From: Mika Westerberg
Subject: [PATCH v9 4/6] thunderbolt: Provide tb_retimer_nvm_read() analogous to tb_switch_nvm_read()
Date: Mon, 5 Sep 2022 15:02:03 +0300
Message-Id: <20220905120205.23025-5-mika.westerberg@linux.intel.com>

As we are moving the NVM vendor specifics into nvm.c, we need to deal
with the retimer NVM formats too. For this reason, provide a retimer
specific function that can be used to read the contents of the NVM, and
rename the internal ones accordingly, analogous to what we do with
routers.

Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/retimer.c | 28 +++++++++++++++++++++-------
 drivers/thunderbolt/tb.h      |  2 ++
 2 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
index dc0fc90e81cf..bec02ad4cb3b 100644
--- a/drivers/thunderbolt/retimer.c
+++ b/drivers/thunderbolt/retimer.c
@@ -16,8 +16,23 @@
 
 #define TB_MAX_RETIMER_INDEX	6
 
-static int tb_retimer_nvm_read(void *priv, unsigned int offset, void *val,
-			       size_t bytes)
+/**
+ * tb_retimer_nvm_read() - Read contents of retimer NVM
+ * @rt: Retimer device
+ * @address: NVM address (in bytes) to start reading
+ * @buf: Data read from NVM is stored here
+ * @size: Number of bytes to read
+ *
+ * Reads retimer NVM and copies the contents to @buf. Returns %0 if the
+ * read was successful and negative errno in case of failure.
+ */
+int tb_retimer_nvm_read(struct tb_retimer *rt, unsigned int address, void *buf,
+			size_t size)
+{
+	return usb4_port_retimer_nvm_read(rt->port, rt->index, address, buf, size);
+}
+
+static int nvm_read(void *priv, unsigned int offset, void *val, size_t bytes)
 {
 	struct tb_nvm *nvm = priv;
 	struct tb_retimer *rt = tb_to_retimer(nvm->dev);
@@ -30,7 +45,7 @@ static int nvm_read(void *priv, unsigned int offset, void *val, size_t bytes)
 		goto out;
 	}
 
-	ret = usb4_port_retimer_nvm_read(rt->port, rt->index, offset, val, bytes);
+	ret = tb_retimer_nvm_read(rt, offset, val, bytes);
 	mutex_unlock(&rt->tb->lock);
 
 out:
@@ -40,8 +55,7 @@ static int nvm_read(void *priv, unsigned int offset, void *val, size_t bytes)
 	return ret;
 }
 
-static int tb_retimer_nvm_write(void *priv, unsigned int offset, void *val,
-				size_t bytes)
+static int nvm_write(void *priv, unsigned int offset, void *val, size_t bytes)
 {
 	struct tb_nvm *nvm = priv;
 	struct tb_retimer *rt = tb_to_retimer(nvm->dev);
@@ -82,11 +96,11 @@ static int tb_retimer_nvm_add(struct tb_retimer *rt)
 	nvm_size = (SZ_1M << (val & 7)) / 8;
 	nvm_size = (nvm_size - SZ_16K) / 2;
 
-	ret = tb_nvm_add_active(nvm, nvm_size, tb_retimer_nvm_read);
+	ret = tb_nvm_add_active(nvm, nvm_size, nvm_read);
 	if (ret)
 		goto err_nvm;
 
-	ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, tb_retimer_nvm_write);
+	ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, nvm_write);
 	if (ret)
 		goto err_nvm;
 
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index f9b241ee109f..5a4aface19e7 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1144,6 +1144,8 @@ static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
 	return tb_to_switch(xd->dev.parent);
 }
 
+int tb_retimer_nvm_read(struct tb_retimer *rt, unsigned int address, void *buf,
+			size_t size);
 int tb_retimer_scan(struct tb_port *port, bool add);
 void tb_retimer_remove_all(struct tb_port *port);
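A side note on the size arithmetic visible in the context above: the low
three bits of the flash-size word select a power-of-two flash size in
Mbit, and the active region is half of what remains after the header. A
worked example under that reading; SZ_1M and SZ_16K come from
linux/sizes.h:

#include <linux/sizes.h>

/* Worked example of the retimer NVM size computation shown above. */
static size_t retimer_active_nvm_size(u32 flash_size_word)
{
	size_t nvm_size = (SZ_1M << (flash_size_word & 7)) / 8;

	/*
	 * e.g. flash_size_word & 7 == 3: 8 Mbit -> 1 MiB of flash,
	 * and (1 MiB - 16 KiB) / 2 = 504 KiB of active NVM.
	 */
	return (nvm_size - SZ_16K) / 2;
}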
From patchwork Mon Sep 5 12:02:04 2022
From: Mika Westerberg
Subject: [PATCH v9 5/6] thunderbolt: Move vendor specific NVM handling into nvm.c
Date: Mon, 5 Sep 2022 15:02:04 +0300
Message-Id: <20220905120205.23025-6-mika.westerberg@linux.intel.com>

From: Szuying Chen

As there will be more USB4 devices from various vendors that support NVM
firmware upgrade, it makes sense to split the Intel specific NVM image
handling out of the generic code. This moves the Intel specific NVM
handling into a new structure that is matched by device type and vendor
ID. Do this for both routers and retimers. This makes it easier to
extend the NVM support to cover new vendors and NVM image formats in the
future.

Signed-off-by: Szuying Chen
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/nvm.c     | 348 +++++++++++++++++++++++++++++++++-
 drivers/thunderbolt/retimer.c |  81 +++-----
 drivers/thunderbolt/switch.c  | 126 ++++--------
 drivers/thunderbolt/tb.h      |  22 ++-
 4 files changed, 410 insertions(+), 167 deletions(-)

diff --git a/drivers/thunderbolt/nvm.c b/drivers/thunderbolt/nvm.c
index b3f310389378..2e9b132af321 100644
--- a/drivers/thunderbolt/nvm.c
+++ b/drivers/thunderbolt/nvm.c
@@ -12,19 +12,278 @@
 
 #include "tb.h"
 
+/* Intel specific NVM offsets */
+#define INTEL_NVM_DEVID			0x05
+#define INTEL_NVM_VERSION		0x08
+#define INTEL_NVM_CSS			0x10
+#define INTEL_NVM_FLASH_SIZE		0x45
+
 static DEFINE_IDA(nvm_ida);
 
+/**
+ * struct tb_nvm_vendor_ops - Vendor specific NVM operations
+ * @read_version: Reads out NVM version from the flash
+ * @validate: Validates the NVM image before update (optional)
+ * @write_headers: Writes headers before the rest of the image (optional)
+ */
+struct tb_nvm_vendor_ops {
+	int (*read_version)(struct tb_nvm *nvm);
+	int (*validate)(struct tb_nvm *nvm);
+	int (*write_headers)(struct tb_nvm *nvm);
+};
+
+/**
+ * struct tb_nvm_vendor - Vendor to &struct tb_nvm_vendor_ops mapping
+ * @vendor: Vendor ID
+ * @vops: Vendor specific NVM operations
+ *
+ * Maps vendor ID to NVM vendor operations. If there is no mapping then
+ * NVM firmware upgrade is disabled for the device.
+ */
+struct tb_nvm_vendor {
+	u16 vendor;
+	const struct tb_nvm_vendor_ops *vops;
+};
+
+static int intel_switch_nvm_version(struct tb_nvm *nvm)
+{
+	struct tb_switch *sw = tb_to_switch(nvm->dev);
+	u32 val, nvm_size, hdr_size;
+	int ret;
+
+	/*
+	 * If the switch is in safe-mode the only accessible portion of
+	 * the NVM is the non-active one where userspace is expected to
+	 * write new functional NVM.
+	 */
+	if (sw->safe_mode)
+		return 0;
+
+	ret = tb_switch_nvm_read(sw, INTEL_NVM_FLASH_SIZE, &val, sizeof(val));
+	if (ret)
+		return ret;
+
+	hdr_size = sw->generation < 3 ? SZ_8K : SZ_16K;
+	nvm_size = (SZ_1M << (val & 7)) / 8;
+	nvm_size = (nvm_size - hdr_size) / 2;
+
+	ret = tb_switch_nvm_read(sw, INTEL_NVM_VERSION, &val, sizeof(val));
+	if (ret)
+		return ret;
+
+	nvm->major = (val >> 16) & 0xff;
+	nvm->minor = (val >> 8) & 0xff;
+	nvm->active_size = nvm_size;
+
+	return 0;
+}
+
+static int intel_switch_nvm_validate(struct tb_nvm *nvm)
+{
+	struct tb_switch *sw = tb_to_switch(nvm->dev);
+	unsigned int image_size, hdr_size;
+	u16 ds_size, device_id;
+	u8 *buf = nvm->buf;
+
+	image_size = nvm->buf_data_size;
+
+	/*
+	 * FARB pointer must point inside the image and must at least
+	 * contain parts of the digital section we will be reading here.
+	 */
+	hdr_size = (*(u32 *)buf) & 0xffffff;
+	if (hdr_size + INTEL_NVM_DEVID + 2 >= image_size)
+		return -EINVAL;
+
+	/* Digital section start should be aligned to 4k page */
+	if (!IS_ALIGNED(hdr_size, SZ_4K))
+		return -EINVAL;
+
+	/*
+	 * Read digital section size and check that it also fits inside
+	 * the image.
+	 */
+	ds_size = *(u16 *)(buf + hdr_size);
+	if (ds_size >= image_size)
+		return -EINVAL;
+
+	if (sw->safe_mode)
+		return 0;
+
+	/*
+	 * Make sure the device ID in the image matches the one
+	 * we read from the switch config space.
+	 */
+	device_id = *(u16 *)(buf + hdr_size + INTEL_NVM_DEVID);
+	if (device_id != sw->config.device_id)
+		return -EINVAL;
+
+	/* Skip headers in the image */
+	nvm->buf_data_start = buf + hdr_size;
+	nvm->buf_data_size = image_size - hdr_size;
+
+	return 0;
+}
+
+static int intel_switch_nvm_write_headers(struct tb_nvm *nvm)
+{
+	struct tb_switch *sw = tb_to_switch(nvm->dev);
+
+	if (sw->generation < 3) {
+		int ret;
+
+		/* Write CSS headers first */
+		ret = dma_port_flash_write(sw->dma_port,
+			DMA_PORT_CSS_ADDRESS, nvm->buf + INTEL_NVM_CSS,
+			DMA_PORT_CSS_MAX_SIZE);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static const struct tb_nvm_vendor_ops intel_switch_nvm_ops = {
+	.read_version = intel_switch_nvm_version,
+	.validate = intel_switch_nvm_validate,
+	.write_headers = intel_switch_nvm_write_headers,
+};
+
+/* Router vendor NVM support table */
+static const struct tb_nvm_vendor switch_nvm_vendors[] = {
+	{ PCI_VENDOR_ID_INTEL, &intel_switch_nvm_ops },
+	{ 0x8087, &intel_switch_nvm_ops },
+};
+
+static int intel_retimer_nvm_version(struct tb_nvm *nvm)
+{
+	struct tb_retimer *rt = tb_to_retimer(nvm->dev);
+	u32 val, nvm_size;
+	int ret;
+
+	ret = tb_retimer_nvm_read(rt, INTEL_NVM_VERSION, &val, sizeof(val));
+	if (ret)
+		return ret;
+
+	nvm->major = (val >> 16) & 0xff;
+	nvm->minor = (val >> 8) & 0xff;
+
+	ret = tb_retimer_nvm_read(rt, INTEL_NVM_FLASH_SIZE, &val, sizeof(val));
+	if (ret)
+		return ret;
+
+	nvm_size = (SZ_1M << (val & 7)) / 8;
+	nvm_size = (nvm_size - SZ_16K) / 2;
+	nvm->active_size = nvm_size;
+
+	return 0;
+}
+
+static int intel_retimer_nvm_validate(struct tb_nvm *nvm)
+{
+	struct tb_retimer *rt = tb_to_retimer(nvm->dev);
+	unsigned int image_size, hdr_size;
+	u8 *buf = nvm->buf;
+	u16 ds_size, device;
+
+	image_size = nvm->buf_data_size;
+
+	/*
+	 * FARB pointer must point inside the image and must at least
+	 * contain parts of the digital section we will be reading here.
+	 */
+	hdr_size = (*(u32 *)buf) & 0xffffff;
+	if (hdr_size + INTEL_NVM_DEVID + 2 >= image_size)
+		return -EINVAL;
+
+	/* Digital section start should be aligned to 4k page */
+	if (!IS_ALIGNED(hdr_size, SZ_4K))
+		return -EINVAL;
+
+	/*
+	 * Read digital section size and check that it also fits inside
+	 * the image.
+	 */
+	ds_size = *(u16 *)(buf + hdr_size);
+	if (ds_size >= image_size)
+		return -EINVAL;
+
+	/*
+	 * Make sure the device ID in the image matches the retimer
+	 * hardware.
+	 */
+	device = *(u16 *)(buf + hdr_size + INTEL_NVM_DEVID);
+	if (device != rt->device)
+		return -EINVAL;
+
+	/* Skip headers in the image */
+	nvm->buf_data_start = buf + hdr_size;
+	nvm->buf_data_size = image_size - hdr_size;
+
+	return 0;
+}
+
+static const struct tb_nvm_vendor_ops intel_retimer_nvm_ops = {
+	.read_version = intel_retimer_nvm_version,
+	.validate = intel_retimer_nvm_validate,
+};
+
+/* Retimer vendor NVM support table */
+static const struct tb_nvm_vendor retimer_nvm_vendors[] = {
+	{ 0x8087, &intel_retimer_nvm_ops },
+};
+
 /**
  * tb_nvm_alloc() - Allocate new NVM structure
  * @dev: Device owning the NVM
  *
  * Allocates new NVM structure with unique @id and returns it. In case
- * of error returns ERR_PTR().
+ * of error returns ERR_PTR(). Specifically returns %-EOPNOTSUPP if the
+ * NVM format of the @dev is not known by the kernel.
  */
 struct tb_nvm *tb_nvm_alloc(struct device *dev)
 {
+	const struct tb_nvm_vendor_ops *vops = NULL;
 	struct tb_nvm *nvm;
-	int ret;
+	int ret, i;
+
+	if (tb_is_switch(dev)) {
+		const struct tb_switch *sw = tb_to_switch(dev);
+
+		for (i = 0; i < ARRAY_SIZE(switch_nvm_vendors); i++) {
+			const struct tb_nvm_vendor *v = &switch_nvm_vendors[i];
+
+			if (v->vendor == sw->config.vendor_id) {
+				vops = v->vops;
+				break;
+			}
+		}
+
+		if (!vops) {
+			tb_sw_dbg(sw, "router NVM format of vendor %#x unknown\n",
+				  sw->config.vendor_id);
+			return ERR_PTR(-EOPNOTSUPP);
+		}
+	} else if (tb_is_retimer(dev)) {
+		const struct tb_retimer *rt = tb_to_retimer(dev);
+
+		for (i = 0; i < ARRAY_SIZE(retimer_nvm_vendors); i++) {
+			const struct tb_nvm_vendor *v = &retimer_nvm_vendors[i];
+
+			if (v->vendor == rt->vendor) {
+				vops = v->vops;
+				break;
+			}
+		}
+
+		if (!vops) {
+			dev_dbg(dev, "retimer NVM format of vendor %#x unknown\n",
+				rt->vendor);
+			return ERR_PTR(-EOPNOTSUPP);
+		}
+	} else {
+		return ERR_PTR(-EOPNOTSUPP);
+	}
 
 	nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
 	if (!nvm)
@@ -38,14 +297,85 @@ struct tb_nvm *tb_nvm_alloc(struct device *dev)
 
 	nvm->id = ret;
 	nvm->dev = dev;
+	nvm->vops = vops;
 
 	return nvm;
 }
 
+/**
+ * tb_nvm_read_version() - Read and populate NVM version
+ * @nvm: NVM structure
+ *
+ * Uses vendor specific means to read out and fill in the existing
+ * active NVM version. Returns %0 in case of success and negative errno
+ * otherwise.
+ */
+int tb_nvm_read_version(struct tb_nvm *nvm)
+{
+	const struct tb_nvm_vendor_ops *vops = nvm->vops;
+
+	if (vops && vops->read_version)
+		return vops->read_version(nvm);
+
+	return -EOPNOTSUPP;
+}
+
+/**
+ * tb_nvm_validate() - Validate new NVM image
+ * @nvm: NVM structure
+ *
+ * Runs vendor specific validation over the new NVM image and if all
+ * checks pass returns %0. As side effect updates @nvm->buf_data_start
+ * and @nvm->buf_data_size fields to match the actual data to be written
+ * to the NVM.
+ *
+ * If the validation does not pass then returns negative errno.
+ */
+int tb_nvm_validate(struct tb_nvm *nvm)
+{
+	const struct tb_nvm_vendor_ops *vops = nvm->vops;
+	unsigned int image_size;
+	u8 *buf = nvm->buf;
+
+	if (!buf)
+		return -EINVAL;
+	if (!vops)
+		return -EOPNOTSUPP;
+
+	/* Just do basic image size checks */
+	image_size = nvm->buf_data_size;
+	if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
+		return -EINVAL;
+
+	/*
+	 * Set the default data start in the buffer. The validate method
+	 * below can change this if needed.
+	 */
+	nvm->buf_data_start = buf;
+
+	return vops->validate ? vops->validate(nvm) : 0;
+}
+
+/**
+ * tb_nvm_write_headers() - Write headers before the rest of the image
+ * @nvm: NVM structure
+ *
+ * If the vendor NVM format requires writing headers before the rest of
+ * the image, this function does that. Can be called even if the device
+ * does not need this.
+ *
+ * Returns %0 in case of success and negative errno otherwise.
+ */
+int tb_nvm_write_headers(struct tb_nvm *nvm)
+{
+	const struct tb_nvm_vendor_ops *vops = nvm->vops;
+
+	return vops->write_headers ? vops->write_headers(nvm) : 0;
+}
+
 /**
  * tb_nvm_add_active() - Adds active NVMem device to NVM
  * @nvm: NVM structure
- * @size: Size of the active NVM in bytes
  * @reg_read: Pointer to the function to read the NVM (passed directly to the
  *	      NVMem device)
  *
@@ -54,7 +384,7 @@
  * needed. The first parameter passed to @reg_read is @nvm structure.
  * Returns %0 in success and negative errno otherwise.
  */
-int tb_nvm_add_active(struct tb_nvm *nvm, size_t size, nvmem_reg_read_t reg_read)
+int tb_nvm_add_active(struct tb_nvm *nvm, nvmem_reg_read_t reg_read)
 {
 	struct nvmem_config config;
 	struct nvmem_device *nvmem;
@@ -67,7 +397,7 @@ int tb_nvm_add_active(struct tb_nvm *nvm, size_t size, nvmem_reg_read_t reg_read
 	config.id = nvm->id;
 	config.stride = 4;
 	config.word_size = 4;
-	config.size = size;
+	config.size = nvm->active_size;
 	config.dev = nvm->dev;
 	config.owner = THIS_MODULE;
 	config.priv = nvm;
@@ -109,17 +439,17 @@ int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
 /**
  * tb_nvm_add_non_active() - Adds non-active NVMem device to NVM
  * @nvm: NVM structure
- * @size: Size of the non-active NVM in bytes
  * @reg_write: Pointer to the function to write the NVM (passed directly
  *	       to the NVMem device)
  *
  * Registers new non-active NVmem device for @nvm. The @reg_write is called
  * directly from NVMem so it must handle possible concurrent access if
  * needed. The first parameter passed to @reg_write is @nvm structure.
+ * The size of the NVMem device is set to %NVM_MAX_SIZE.
+ *
  * Returns %0 in success and negative errno otherwise.
  */
-int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
-			  nvmem_reg_write_t reg_write)
+int tb_nvm_add_non_active(struct tb_nvm *nvm, nvmem_reg_write_t reg_write)
 {
 	struct nvmem_config config;
 	struct nvmem_device *nvmem;
@@ -132,7 +462,7 @@ int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
 	config.id = nvm->id;
 	config.stride = 4;
 	config.word_size = 4;
-	config.size = size;
+	config.size = NVM_MAX_SIZE;
 	config.dev = nvm->dev;
 	config.owner = THIS_MODULE;
 	config.priv = nvm;
diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
index bec02ad4cb3b..dd8f033b1690 100644
--- a/drivers/thunderbolt/retimer.c
+++ b/drivers/thunderbolt/retimer.c
@@ -73,34 +73,23 @@ static int nvm_write(void *priv, unsigned int offset, void *val, size_t bytes)
 static int tb_retimer_nvm_add(struct tb_retimer *rt)
 {
 	struct tb_nvm *nvm;
-	u32 val, nvm_size;
 	int ret;
 
 	nvm = tb_nvm_alloc(&rt->dev);
-	if (IS_ERR(nvm))
-		return PTR_ERR(nvm);
-
-	ret = usb4_port_retimer_nvm_read(rt->port, rt->index, NVM_VERSION, &val,
-					 sizeof(val));
-	if (ret)
+	if (IS_ERR(nvm)) {
+		ret = PTR_ERR(nvm) == -EOPNOTSUPP ? 0 : PTR_ERR(nvm);
 		goto err_nvm;
+	}
 
-	nvm->major = (val >> 16) & 0xff;
-	nvm->minor = (val >> 8) & 0xff;
-
-	ret = usb4_port_retimer_nvm_read(rt->port, rt->index, NVM_FLASH_SIZE,
-					 &val, sizeof(val));
+	ret = tb_nvm_read_version(nvm);
 	if (ret)
 		goto err_nvm;
 
-	nvm_size = (SZ_1M << (val & 7)) / 8;
-	nvm_size = (nvm_size - SZ_16K) / 2;
-
-	ret = tb_nvm_add_active(nvm, nvm_size, nvm_read);
+	ret = tb_nvm_add_active(nvm, nvm_read);
 	if (ret)
 		goto err_nvm;
 
-	ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, nvm_write);
+	ret = tb_nvm_add_non_active(nvm, nvm_write);
 	if (ret)
 		goto err_nvm;
 
@@ -108,59 +97,33 @@ static int tb_retimer_nvm_add(struct tb_retimer *rt)
 	return 0;
 
 err_nvm:
-	tb_nvm_free(nvm);
+	dev_dbg(&rt->dev, "NVM upgrade disabled\n");
+	rt->no_nvm_upgrade = true;
+	if (!IS_ERR(nvm))
+		tb_nvm_free(nvm);
+
 	return ret;
 }
 
 static int tb_retimer_nvm_validate_and_write(struct tb_retimer *rt)
 {
-	unsigned int image_size, hdr_size;
-	const u8 *buf = rt->nvm->buf;
-	u16 ds_size, device;
+	unsigned int image_size;
+	const u8 *buf;
 	int ret;
 
-	image_size = rt->nvm->buf_data_size;
-	if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
-		return -EINVAL;
-
-	/*
-	 * FARB pointer must point inside the image and must at least
-	 * contain parts of the digital section we will be reading here.
-	 */
-	hdr_size = (*(u32 *)buf) & 0xffffff;
-	if (hdr_size + NVM_DEVID + 2 >= image_size)
-		return -EINVAL;
-
-	/* Digital section start should be aligned to 4k page */
-	if (!IS_ALIGNED(hdr_size, SZ_4K))
-		return -EINVAL;
-
-	/*
-	 * Read digital section size and check that it also fits inside
-	 * the image.
-	 */
-	ds_size = *(u16 *)(buf + hdr_size);
-	if (ds_size >= image_size)
-		return -EINVAL;
-
-	/*
-	 * Make sure the device ID in the image matches the retimer
-	 * hardware.
-	 */
-	device = *(u16 *)(buf + hdr_size + NVM_DEVID);
-	if (device != rt->device)
-		return -EINVAL;
+	ret = tb_nvm_validate(rt->nvm);
+	if (ret)
+		return ret;
 
-	/* Skip headers in the image */
-	buf += hdr_size;
-	image_size -= hdr_size;
+	buf = rt->nvm->buf_data_start;
+	image_size = rt->nvm->buf_data_size;
 
 	ret = usb4_port_retimer_nvm_write(rt->port, rt->index, 0, buf,
 					  image_size);
-	if (!ret)
-		rt->nvm->flushed = true;
+	if (ret)
+		return ret;
 
-	return ret;
+	rt->nvm->flushed = true;
+	return 0;
 }
 
 static int tb_retimer_nvm_authenticate(struct tb_retimer *rt, bool auth_only)
@@ -214,6 +177,8 @@ static ssize_t nvm_authenticate_show(struct device *dev,
 
 	if (!rt->nvm)
 		ret = -EAGAIN;
+	else if (rt->no_nvm_upgrade)
+		ret = -EOPNOTSUPP;
 	else
 		ret = sprintf(buf, "%#x\n", rt->auth_status);
 
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index a073a702c381..2bcb6753d569 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -19,8 +19,6 @@
 
 /* Switch NVM support */
 
-#define NVM_CSS			0x10
-
 struct nvm_auth_status {
 	struct list_head list;
 	uuid_t uuid;
@@ -102,70 +100,30 @@ static void nvm_clear_auth_status(const struct tb_switch *sw)
 
 static int nvm_validate_and_write(struct tb_switch *sw)
 {
-	unsigned int image_size, hdr_size;
-	const u8 *buf = sw->nvm->buf;
-	u16 ds_size;
+	unsigned int image_size;
+	const u8 *buf;
 	int ret;
 
-	if (!buf)
-		return -EINVAL;
-
-	image_size = sw->nvm->buf_data_size;
-	if (image_size < NVM_MIN_SIZE || image_size > NVM_MAX_SIZE)
-		return -EINVAL;
-
-	/*
-	 * FARB pointer must point inside the image and must at least
-	 * contain parts of the digital section we will be reading here.
-	 */
-	hdr_size = (*(u32 *)buf) & 0xffffff;
-	if (hdr_size + NVM_DEVID + 2 >= image_size)
-		return -EINVAL;
-
-	/* Digital section start should be aligned to 4k page */
-	if (!IS_ALIGNED(hdr_size, SZ_4K))
-		return -EINVAL;
-
-	/*
-	 * Read digital section size and check that it also fits inside
-	 * the image.
-	 */
-	ds_size = *(u16 *)(buf + hdr_size);
-	if (ds_size >= image_size)
-		return -EINVAL;
-
-	if (!sw->safe_mode) {
-		u16 device_id;
+	ret = tb_nvm_validate(sw->nvm);
+	if (ret)
+		return ret;
 
-		/*
-		 * Make sure the device ID in the image matches the one
-		 * we read from the switch config space.
-		 */
-		device_id = *(u16 *)(buf + hdr_size + NVM_DEVID);
-		if (device_id != sw->config.device_id)
-			return -EINVAL;
-
-		if (sw->generation < 3) {
-			/* Write CSS headers first */
-			ret = dma_port_flash_write(sw->dma_port,
-				DMA_PORT_CSS_ADDRESS, buf + NVM_CSS,
-				DMA_PORT_CSS_MAX_SIZE);
-			if (ret)
-				return ret;
-		}
+	ret = tb_nvm_write_headers(sw->nvm);
+	if (ret)
+		return ret;
 
-		/* Skip headers in the image */
-		buf += hdr_size;
-		image_size -= hdr_size;
-	}
+	buf = sw->nvm->buf_data_start;
+	image_size = sw->nvm->buf_data_size;
 
 	if (tb_switch_is_usb4(sw))
 		ret = usb4_switch_nvm_write(sw, 0, buf, image_size);
 	else
 		ret = dma_port_flash_write(sw->dma_port, 0, buf, image_size);
-	if (!ret)
-		sw->nvm->flushed = true;
 
-	return ret;
+	if (ret)
+		return ret;
+
+	sw->nvm->flushed = true;
+	return 0;
 }
 
 static int nvm_authenticate_host_dma_port(struct tb_switch *sw)
@@ -393,28 +351,20 @@ static int nvm_write(void *priv, unsigned int offset, void *val, size_t bytes)
 static int tb_switch_nvm_add(struct tb_switch *sw)
 {
 	struct tb_nvm *nvm;
-	u32 val;
 	int ret;
 
 	if (!nvm_readable(sw))
 		return 0;
 
-	/*
-	 * The NVM format of non-Intel hardware is not known so
-	 * currently restrict NVM upgrade for Intel hardware. We may
-	 * relax this in the future when we learn other NVM formats.
-	 */
-	if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL &&
-	    sw->config.vendor_id != 0x8087) {
-		dev_info(&sw->dev,
-			 "NVM format of vendor %#x is not known, disabling NVM upgrade\n",
-			 sw->config.vendor_id);
-		return 0;
+	nvm = tb_nvm_alloc(&sw->dev);
+	if (IS_ERR(nvm)) {
+		ret = PTR_ERR(nvm) == -EOPNOTSUPP ? 0 : PTR_ERR(nvm);
+		goto err_nvm;
 	}
 
-	nvm = tb_nvm_alloc(&sw->dev);
-	if (IS_ERR(nvm))
-		return PTR_ERR(nvm);
+	ret = tb_nvm_read_version(nvm);
+	if (ret)
+		goto err_nvm;
 
 	/*
 	 * If the switch is in safe-mode the only accessible portion of
@@ -422,30 +372,13 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
 	 * write new functional NVM.
 	 */
 	if (!sw->safe_mode) {
-		u32 nvm_size, hdr_size;
-
-		ret = tb_switch_nvm_read(sw, NVM_FLASH_SIZE, &val, sizeof(val));
-		if (ret)
-			goto err_nvm;
-
-		hdr_size = sw->generation < 3 ? SZ_8K : SZ_16K;
-		nvm_size = (SZ_1M << (val & 7)) / 8;
-		nvm_size = (nvm_size - hdr_size) / 2;
-
-		ret = tb_switch_nvm_read(sw, NVM_VERSION, &val, sizeof(val));
-		if (ret)
-			goto err_nvm;
-
-		nvm->major = (val >> 16) & 0xff;
-		nvm->minor = (val >> 8) & 0xff;
-
-		ret = tb_nvm_add_active(nvm, nvm_size, nvm_read);
+		ret = tb_nvm_add_active(nvm, nvm_read);
 		if (ret)
 			goto err_nvm;
 	}
 
 	if (!sw->no_nvm_upgrade) {
-		ret = tb_nvm_add_non_active(nvm, NVM_MAX_SIZE, nvm_write);
+		ret = tb_nvm_add_non_active(nvm, nvm_write);
 		if (ret)
 			goto err_nvm;
 	}
@@ -454,7 +387,11 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
 	return 0;
 
 err_nvm:
-	tb_nvm_free(nvm);
+	tb_sw_dbg(sw, "NVM upgrade disabled\n");
+	sw->no_nvm_upgrade = true;
+	if (!IS_ERR(nvm))
+		tb_nvm_free(nvm);
+
 	return ret;
 }
 
@@ -2003,6 +1940,11 @@ static ssize_t nvm_authenticate_sysfs(struct device *dev, const char *buf,
 		goto exit_rpm;
 	}
 
+	if (sw->no_nvm_upgrade) {
+		ret = -EOPNOTSUPP;
+		goto exit_unlock;
+	}
+
 	/* If NVMem devices are not yet added */
 	if (!sw->nvm) {
 		ret = -EAGAIN;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 5a4aface19e7..bba4f6b29ee4 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -23,11 +23,6 @@
 #define NVM_MAX_SIZE		SZ_512K
 #define NVM_DATA_DWORDS		16
 
-/* Intel specific NVM offsets */
-#define NVM_DEVID		0x05
-#define NVM_VERSION		0x08
-#define NVM_FLASH_SIZE		0x45
-
 /**
  * struct tb_nvm - Structure holding NVM information
  * @dev: Owner of the NVM
@@ -35,13 +30,17 @@
  * @minor: Minor version number of the active NVM portion
  * @id: Identifier used with both NVM portions
  * @active: Active portion NVMem device
+ * @active_size: Size in bytes of the active NVM
 * @non_active: Non-active portion NVMem device
 * @buf: Buffer where the NVM image is stored before it is written to
 *	 the actual NVM flash device
+ * @buf_data_start: Where the actual image starts after skipping
+ *		    possible headers
 * @buf_data_size: Number of bytes actually consumed by the new NVM
 *		   image
 * @authenticating: The device is authenticating the new NVM
 * @flushed: The image has been flushed to the storage area
+ * @vops: Router vendor specific NVM operations (optional)
 *
 * The user of this structure needs to handle serialization of possible
 * concurrent access.
@@ -52,11 +51,14 @@ struct tb_nvm {
 	u32 minor;
 	int id;
 	struct nvmem_device *active;
+	size_t active_size;
 	struct nvmem_device *non_active;
 	void *buf;
+	void *buf_data_start;
 	size_t buf_data_size;
 	bool authenticating;
 	bool flushed;
+	const struct tb_nvm_vendor_ops *vops;
 };
 
 enum tb_nvm_write_ops {
@@ -300,6 +302,7 @@ struct usb4_port {
  * @device: Device ID of the retimer
  * @port: Pointer to the lane 0 adapter
  * @nvm: Pointer to the NVM if the retimer has one (%NULL otherwise)
+ * @no_nvm_upgrade: Prevent NVM upgrade of this retimer
  * @auth_status: Status of last NVM authentication
  */
 struct tb_retimer {
@@ -310,6 +313,7 @@ struct tb_retimer {
 	u32 device;
 	struct tb_port *port;
 	struct tb_nvm *nvm;
+	bool no_nvm_upgrade;
 	u32 auth_status;
 };
 
@@ -741,11 +745,13 @@ static inline void tb_domain_put(struct tb *tb)
 }
 
 struct tb_nvm *tb_nvm_alloc(struct device *dev);
-int tb_nvm_add_active(struct tb_nvm *nvm, size_t size, nvmem_reg_read_t reg_read);
+int tb_nvm_read_version(struct tb_nvm *nvm);
+int tb_nvm_validate(struct tb_nvm *nvm);
+int tb_nvm_write_headers(struct tb_nvm *nvm);
+int tb_nvm_add_active(struct tb_nvm *nvm, nvmem_reg_read_t reg_read);
 int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
 		     size_t bytes);
-int tb_nvm_add_non_active(struct tb_nvm *nvm, size_t size,
-			  nvmem_reg_write_t reg_write);
+int tb_nvm_add_non_active(struct tb_nvm *nvm, nvmem_reg_write_t reg_write);
 void tb_nvm_free(struct tb_nvm *nvm);
 void tb_nvm_exit(void);
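To see how the new abstraction is meant to be extended, here is a
hypothetical vendor hook. Everything named "acme" below, the 0x10 offset,
and the 0x1234 vendor ID are made up; only the shape follows the patch
above (the next patch in the series does exactly this for ASMedia):

/* Hypothetical example of adding another router vendor. */
static int acme_switch_nvm_version(struct tb_nvm *nvm)
{
	struct tb_switch *sw = tb_to_switch(nvm->dev);
	u32 val;
	int ret;

	/* 0x10 is a made-up version offset for this sketch */
	ret = tb_switch_nvm_read(sw, 0x10, &val, sizeof(val));
	if (ret)
		return ret;

	nvm->major = val >> 16;
	nvm->minor = val & 0xffff;
	nvm->active_size = SZ_256K;	/* fixed size for the sketch */

	return 0;
}

static const struct tb_nvm_vendor_ops acme_switch_nvm_ops = {
	.read_version = acme_switch_nvm_version,
	/* .validate and .write_headers are optional */
};

/*
 * Plus one more entry in switch_nvm_vendors[]:
 *	{ 0x1234, &acme_switch_nvm_ops },	(made-up vendor ID)
 */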
From patchwork Mon Sep 5 12:02:05 2022
From: Mika Westerberg
Subject: [PATCH v9 6/6] thunderbolt: Add support for ASMedia NVM image format
Date: Mon, 5 Sep 2022 15:02:05 +0300
Message-Id: <20220905120205.23025-7-mika.westerberg@linux.intel.com>

From: Szuying Chen

Add support for the ASMedia specific NVM image format. This makes it
possible to upgrade the NVM firmware of ASMedia routers in addition to
Intel ones.

Signed-off-by: Szuying Chen
Signed-off-by: Mika Westerberg
---
 drivers/thunderbolt/nvm.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/thunderbolt/nvm.c b/drivers/thunderbolt/nvm.c
index 2e9b132af321..710536de12f3 100644
--- a/drivers/thunderbolt/nvm.c
+++ b/drivers/thunderbolt/nvm.c
@@ -18,6 +18,10 @@
 #define INTEL_NVM_CSS			0x10
 #define INTEL_NVM_FLASH_SIZE		0x45
 
+/* ASMedia specific NVM offsets */
+#define ASMEDIA_NVM_DATE		0x1c
+#define ASMEDIA_NVM_VERSION		0x28
+
 static DEFINE_IDA(nvm_ida);
 
 /**
@@ -149,8 +153,41 @@ static const struct tb_nvm_vendor_ops intel_switch_nvm_ops = {
 	.write_headers = intel_switch_nvm_write_headers,
 };
 
+static int asmedia_switch_nvm_version(struct tb_nvm *nvm)
+{
+	struct tb_switch *sw = tb_to_switch(nvm->dev);
+	u32 val;
+	int ret;
+
+	ret = tb_switch_nvm_read(sw, ASMEDIA_NVM_VERSION, &val, sizeof(val));
+	if (ret)
+		return ret;
+
+	nvm->major = (val << 16) & 0xff0000;
+	nvm->major |= val & 0x00ff00;
+	nvm->major |= (val >> 16) & 0x0000ff;
+
+	ret = tb_switch_nvm_read(sw, ASMEDIA_NVM_DATE, &val, sizeof(val));
+	if (ret)
+		return ret;
+
+	nvm->minor = (val << 16) & 0xff0000;
+	nvm->minor |= val & 0x00ff00;
+	nvm->minor |= (val >> 16) & 0x0000ff;
+
+	/* ASMedia NVM size is fixed to 512k */
+	nvm->active_size = SZ_512K;
+
+	return 0;
+}
+
+static const struct tb_nvm_vendor_ops asmedia_switch_nvm_ops = {
+	.read_version = asmedia_switch_nvm_version,
+};
+
 /* Router vendor NVM support table */
 static const struct tb_nvm_vendor switch_nvm_vendors[] = {
+	{ 0x174c, &asmedia_switch_nvm_ops },
 	{ PCI_VENDOR_ID_INTEL, &intel_switch_nvm_ops },
 	{ 0x8087, &intel_switch_nvm_ops },
 };
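The masking in asmedia_switch_nvm_version() reverses the byte order of
the low 24 bits of each word it reads. A standalone sketch of the same
decode; the raw register value below is made up for illustration:

#include <stdint.h>
#include <stdio.h>

/* Reverses the three low bytes of a word, as the decode above does. */
static uint32_t asmedia_swap24(uint32_t val)
{
	return ((val << 16) & 0xff0000) |	/* byte 0 -> byte 2 */
	       (val & 0x00ff00) |		/* byte 1 stays put */
	       ((val >> 16) & 0x0000ff);	/* byte 2 -> byte 0 */
}

int main(void)
{
	uint32_t raw = 0x00112233;	/* made-up register value */

	printf("%06x\n", asmedia_swap24(raw));	/* prints 332211 */
	return 0;
}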