From patchwork Mon Oct 7 22:01:26 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Zhang, Yidong (David)"
X-Patchwork-Id: 13825319
From: David Zhang
To: , , , ,
CC: Yidong Zhang, DMG Karthik, Nishad Saraf, Prapul Krishnamurthy
Subject: [PATCH V1 1/3] drivers/fpga/amd: Add new driver for AMD Versal PCIe card
Date: Mon, 7 Oct 2024 15:01:26 -0700
Message-ID: <20241007220128.3023169-1-yidong.zhang@amd.com>
X-Mailer: git-send-email 2.34.1
X-Mailing-List: linux-fpga@vger.kernel.org

From: Yidong Zhang

AMD Versal based PCIe cards, including the V70, are designed for AI
inference efficiency and are tuned for video analytics and natural
language processing applications. Add a driver to support the AMD
Versal card management physical function. Only very basic
functionality is added:

- module and PCI device initialization
- fpga framework ops callbacks
- communication with the user physical function

Co-developed-by: DMG Karthik
Signed-off-by: DMG Karthik
Co-developed-by: Nishad Saraf
Signed-off-by: Nishad Saraf
Co-developed-by: Prapul Krishnamurthy
Signed-off-by: Prapul Krishnamurthy
Signed-off-by: Yidong Zhang
---
 MAINTAINERS                    |   7 +
 drivers/fpga/Kconfig           |   3 +
 drivers/fpga/Makefile          |   3 +
 drivers/fpga/amd/Kconfig       |  17 ++
 drivers/fpga/amd/Makefile      |   6 +
 drivers/fpga/amd/vmgmt-comms.c | 344 ++++++++++++++++++++++++++++
 drivers/fpga/amd/vmgmt-comms.h |  14 ++
 drivers/fpga/amd/vmgmt.c       | 395 +++++++++++++++++++++++++++++++++
 drivers/fpga/amd/vmgmt.h       | 100 +++++++++
 include/uapi/linux/vmgmt.h     |  25 +++
 10 files changed, 914 insertions(+)
 create mode 100644 drivers/fpga/amd/Kconfig
 create mode 100644 drivers/fpga/amd/Makefile
 create mode 100644 drivers/fpga/amd/vmgmt-comms.c
 create mode 100644 drivers/fpga/amd/vmgmt-comms.h
 create mode 100644 drivers/fpga/amd/vmgmt.c
 create mode 100644 drivers/fpga/amd/vmgmt.h
 create mode 100644
include/uapi/linux/vmgmt.h diff --git a/MAINTAINERS b/MAINTAINERS index a097afd76ded..645f00ccb342 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1185,6 +1185,13 @@ M: Sanjay R Mehta S: Maintained F: drivers/spi/spi-amd.c +AMD VERSAL PCI DRIVER +M: Yidong Zhang +L: linux-fpga@vger.kernel.org +S: Supported +F: drivers/fpga/amd/ +F: include/uapi/linux/vmgmt.h + AMD XGBE DRIVER M: "Shyam Sundar S K" L: netdev@vger.kernel.org diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig index 37b35f58f0df..dce060a7bd8f 100644 --- a/drivers/fpga/Kconfig +++ b/drivers/fpga/Kconfig @@ -290,4 +290,7 @@ config FPGA_MGR_LATTICE_SYSCONFIG_SPI source "drivers/fpga/tests/Kconfig" +# Driver files +source "drivers/fpga/amd/Kconfig" + endif # FPGA diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile index aeb89bb13517..5e8a3869f9a0 100644 --- a/drivers/fpga/Makefile +++ b/drivers/fpga/Makefile @@ -58,5 +58,8 @@ obj-$(CONFIG_FPGA_DFL_NIOS_INTEL_PAC_N3000) += dfl-n3000-nios.o # Drivers for FPGAs which implement DFL obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o +# AMD PCIe Versal Management Driver +obj-y += amd/ + # KUnit tests obj-$(CONFIG_FPGA_KUNIT_TESTS) += tests/ diff --git a/drivers/fpga/amd/Kconfig b/drivers/fpga/amd/Kconfig new file mode 100644 index 000000000000..126bc579a333 --- /dev/null +++ b/drivers/fpga/amd/Kconfig @@ -0,0 +1,17 @@ +# SPDX-License-Identifier: GPL-2.0-only + +config AMD_VERSAL_MGMT + tristate "AMD PCIe Versal Management Driver" + select FW_LOADER + select FW_UPLOAD + select REGMAP_MMIO + depends on FPGA_BRIDGE + depends on FPGA_REGION + depends on HAS_IOMEM + depends on PCI + help + AMD PCIe Versal Management Driver provides management services to + download firmware, program bitstream, collect sensor data, control + resets, and communicate with the User function. + + If "M" is selected, the driver module will be amd-vmgmt. 
diff --git a/drivers/fpga/amd/Makefile b/drivers/fpga/amd/Makefile new file mode 100644 index 000000000000..3e4c6dd3b787 --- /dev/null +++ b/drivers/fpga/amd/Makefile @@ -0,0 +1,6 @@ +# SPDX-License-Identifier: GPL-2.0 + +obj-$(CONFIG_AMD_VERSAL_MGMT) += amd-vmgmt.o + +amd-vmgmt-$(CONFIG_AMD_VERSAL_MGMT) := vmgmt.o \ + vmgmt-comms.o diff --git a/drivers/fpga/amd/vmgmt-comms.c b/drivers/fpga/amd/vmgmt-comms.c new file mode 100644 index 000000000000..bed0d369a744 --- /dev/null +++ b/drivers/fpga/amd/vmgmt-comms.c @@ -0,0 +1,344 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Driver for Versal PCIe device + * + * Copyright (C) 2024 Advanced Micro Devices, Inc. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "vmgmt.h" +#include "vmgmt-comms.h" + +#define COMMS_PROTOCOL_VERSION 1 +#define COMMS_PCI_BAR_OFF 0x2000000 +#define COMMS_TIMER (HZ / 10) +#define COMMS_DATA_LEN 16 +#define COMMS_DATA_TYPE_MASK GENMASK(7, 0) +#define COMMS_DATA_EOM_MASK BIT(31) +#define COMMS_MSG_END BIT(31) + +#define COMMS_REG_WRDATA_OFF 0x0 +#define COMMS_REG_RDDATA_OFF 0x8 +#define COMMS_REG_STATUS_OFF 0x10 +#define COMMS_REG_ERROR_OFF 0x14 +#define COMMS_REG_RIT_OFF 0x1C +#define COMMS_REG_IS_OFF 0x20 +#define COMMS_REG_IE_OFF 0x24 +#define COMMS_REG_CTRL_OFF 0x2C +#define COMMS_REGS_SIZE 0x1000 + +#define COMMS_IRQ_DISABLE_ALL 0 +#define COMMS_IRQ_RECEIVE_ENABLE BIT(1) +#define COMMS_IRQ_CLEAR_ALL GENMASK(2, 0) +#define COMMS_CLEAR_FIFO GENMASK(1, 0) +#define COMMS_RECEIVE_THRESHOLD 15 + +enum comms_req_ops { + COMMS_REQ_OPS_UNKNOWN = 0, + COMMS_REQ_OPS_HOT_RESET = 5, + COMMS_REQ_OPS_GET_PROTOCOL_VERSION = 19, + COMMS_REQ_OPS_GET_XCLBIN_UUID = 20, + COMMS_REQ_OPS_MAX, +}; + +enum comms_msg_type { + COMMS_MSG_INVALID = 0, + COMMS_MSG_START = 2, + COMMS_MSG_BODY = 3, +}; + +enum comms_msg_service_type { + COMMS_MSG_SRV_RESPONSE = BIT(0), + COMMS_MSG_SRV_REQUEST = BIT(1), +}; + +struct comms_hw_msg { + struct { 
+ u32 type; + u32 payload_size; + } header; + struct { + u64 id; + u32 flags; + u32 size; + u32 payload[COMMS_DATA_LEN - 6]; + } body; +} __packed; + +struct comms_srv_req { + u64 flags; + u32 opcode; + u32 data[]; +}; + +struct comms_srv_ver_resp { + u32 version; +}; + +struct comms_srv_uuid_resp { + uuid_t uuid; +}; + +struct comms_msg { + u64 id; + u32 flags; + u32 len; + u32 bytes_read; + u32 data[10]; +}; + +struct comms_device { + struct vmgmt_device *vdev; + struct regmap *regmap; + struct timer_list timer; + struct work_struct work; +}; + +static bool comms_regmap_rd_regs(struct device *dev, unsigned int reg) +{ + switch (reg) { + case COMMS_REG_RDDATA_OFF: + case COMMS_REG_IS_OFF: + return true; + default: + return false; + } +} + +static bool comms_regmap_wr_regs(struct device *dev, unsigned int reg) +{ + switch (reg) { + case COMMS_REG_WRDATA_OFF: + case COMMS_REG_IS_OFF: + case COMMS_REG_IE_OFF: + case COMMS_REG_CTRL_OFF: + case COMMS_REG_RIT_OFF: + return true; + default: + return false; + } +} + +static bool comms_regmap_nir_regs(struct device *dev, unsigned int reg) +{ + switch (reg) { + case COMMS_REG_RDDATA_OFF: + return true; + default: + return false; + } +} + +static const struct regmap_config comms_regmap_config = { + .name = "comms_config", + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, + .readable_reg = comms_regmap_rd_regs, + .writeable_reg = comms_regmap_wr_regs, + .readable_noinc_reg = comms_regmap_nir_regs, +}; + +static inline struct comms_device *to_ccdev_work(struct work_struct *w) +{ + return container_of(w, struct comms_device, work); +} + +static inline struct comms_device *to_ccdev_timer(struct timer_list *t) +{ + return container_of(t, struct comms_device, timer); +} + +static u32 comms_set_uuid_resp(struct vmgmt_device *vdev, void *payload) +{ + struct comms_srv_uuid_resp *resp; + u32 resp_len = sizeof(*resp); + + resp = (struct comms_srv_uuid_resp *)payload; + uuid_copy(&resp->uuid, &vdev->xclbin_uuid); + vmgmt_dbg(vdev, 
		  "xclbin UUID: %pUb", &resp->uuid);
+
+	return resp_len;
+}
+
+static u32 comms_set_protocol_resp(void *payload)
+{
+	struct comms_srv_ver_resp *resp = (struct comms_srv_ver_resp *)payload;
+	u32 resp_len = sizeof(*resp);
+
+	resp->version = COMMS_PROTOCOL_VERSION;
+
+	return resp_len;
+}
+
+static void comms_send_response(struct comms_device *ccdev,
+				struct comms_msg *msg)
+{
+	struct comms_srv_req *req = (struct comms_srv_req *)msg->data;
+	struct vmgmt_device *vdev = ccdev->vdev;
+	struct comms_hw_msg response = {0};
+	u32 size;
+	int ret;
+	u8 i;
+
+	switch (req->opcode) {
+	case COMMS_REQ_OPS_GET_PROTOCOL_VERSION:
+		size = comms_set_protocol_resp(response.body.payload);
+		break;
+	case COMMS_REQ_OPS_GET_XCLBIN_UUID:
+		size = comms_set_uuid_resp(vdev, response.body.payload);
+		break;
+	default:
+		vmgmt_err(vdev, "Unsupported request opcode: %d", req->opcode);
+		*response.body.payload = -1;
+		size = sizeof(int);
+	}
+
+	vmgmt_dbg(vdev, "Response opcode: %d", req->opcode);
+
+	response.header.type = COMMS_MSG_START | COMMS_MSG_END;
+	response.header.payload_size = size;
+
+	response.body.flags = COMMS_MSG_SRV_RESPONSE;
+	response.body.size = size;
+	response.body.id = msg->id;
+
+	for (i = 0; i < COMMS_DATA_LEN; i++) {
+		ret = regmap_write(ccdev->regmap, COMMS_REG_WRDATA_OFF, ((u32 *)&response)[i]);
+		if (ret < 0) {
+			vmgmt_err(vdev, "regmap write failed: %d", ret);
+			return;
+		}
+	}
+}
+
+#define STATUS_IS_READY(status) ((status) & BIT(1))
+#define STATUS_IS_ERROR(status) ((status) & BIT(2))
+
+static void comms_check_request(struct work_struct *w)
+{
+	struct comms_device *ccdev = to_ccdev_work(w);
+	u32 status = 0, request[COMMS_DATA_LEN] = {0};
+	struct comms_hw_msg *hw_msg;
+	struct comms_msg msg;
+	u8 type, eom;
+	int ret;
+	int i;
+
+	ret = regmap_read(ccdev->regmap, COMMS_REG_IS_OFF, &status);
+	if (ret) {
+		vmgmt_err(ccdev->vdev, "regmap read failed: %d", ret);
+		return;
+	}
+	if (!STATUS_IS_READY(status))
+		return;
+	if (STATUS_IS_ERROR(status))
{
+		vmgmt_err(ccdev->vdev, "An error has occurred with comms");
+		return;
+	}
+
+	/* ACK status */
+	regmap_write(ccdev->regmap, COMMS_REG_IS_OFF, status);
+
+	for (i = 0; i < COMMS_DATA_LEN; i++) {
+		if (regmap_read(ccdev->regmap, COMMS_REG_RDDATA_OFF, &request[i]) < 0) {
+			vmgmt_err(ccdev->vdev, "regmap read failed");
+			return;
+		}
+	}
+
+	hw_msg = (struct comms_hw_msg *)request;
+	type = FIELD_GET(COMMS_DATA_TYPE_MASK, hw_msg->header.type);
+	eom = FIELD_GET(COMMS_DATA_EOM_MASK, hw_msg->header.type);
+
+	/* Only support fixed size 64B messages */
+	if (!eom || type != COMMS_MSG_START) {
+		vmgmt_err(ccdev->vdev, "Unsupported message format or length");
+		return;
+	}
+
+	msg.flags = hw_msg->body.flags;
+	msg.len = hw_msg->body.size;
+	msg.id = hw_msg->body.id;
+
+	if (msg.flags != COMMS_MSG_SRV_REQUEST) {
+		vmgmt_err(ccdev->vdev, "Unsupported service request");
+		return;
+	}
+
+	/* msg.data is already sizeof(msg.data) bytes; don't scale by element size */
+	if (hw_msg->body.size > sizeof(msg.data)) {
+		vmgmt_err(ccdev->vdev, "msg is too big: %d", hw_msg->body.size);
+		return;
+	}
+	memcpy(msg.data, hw_msg->body.payload, hw_msg->body.size);
+
+	/* Now decode and respond appropriately */
+	comms_send_response(ccdev, &msg);
+}
+
+static void comms_sched_work(struct timer_list *t)
+{
+	struct comms_device *ccdev = to_ccdev_timer(t);
+
+	/* Schedule a work in the general workqueue */
+	schedule_work(&ccdev->work);
+	/* Periodic timer */
+	mod_timer(&ccdev->timer, jiffies + COMMS_TIMER);
+}
+
+static void comms_config(struct comms_device *ccdev)
+{
+	/* Disable interrupts */
+	regmap_write(ccdev->regmap, COMMS_REG_IE_OFF, COMMS_IRQ_DISABLE_ALL);
+	/* Clear request and response FIFOs */
+	regmap_write(ccdev->regmap, COMMS_REG_CTRL_OFF, COMMS_CLEAR_FIFO);
+	/* Clear interrupts */
+	regmap_write(ccdev->regmap, COMMS_REG_IS_OFF, COMMS_IRQ_CLEAR_ALL);
+	/* Setup RIT reg */
+	regmap_write(ccdev->regmap, COMMS_REG_RIT_OFF, COMMS_RECEIVE_THRESHOLD);
+	/* Enable RIT interrupt */
+	regmap_write(ccdev->regmap, COMMS_REG_IE_OFF,
COMMS_IRQ_RECEIVE_ENABLE); + + /* Create and schedule timer to do recurring work */ + INIT_WORK(&ccdev->work, &comms_check_request); + timer_setup(&ccdev->timer, &comms_sched_work, 0); + mod_timer(&ccdev->timer, jiffies + COMMS_TIMER); +} + +void vmgmtm_comms_fini(struct comms_device *ccdev) +{ + /* First stop scheduling new work then cancel work */ + del_timer_sync(&ccdev->timer); + cancel_work_sync(&ccdev->work); +} + +struct comms_device *vmgmtm_comms_init(struct vmgmt_device *vdev) +{ + struct comms_device *ccdev; + + ccdev = devm_kzalloc(&vdev->pdev->dev, sizeof(*ccdev), GFP_KERNEL); + if (!ccdev) + return ERR_PTR(-ENOMEM); + + ccdev->vdev = vdev; + + ccdev->regmap = devm_regmap_init_mmio(&vdev->pdev->dev, + vdev->tbl + COMMS_PCI_BAR_OFF, + &comms_regmap_config); + if (IS_ERR(ccdev->regmap)) { + vmgmt_err(vdev, "Comms regmap init failed"); + return ERR_CAST(ccdev->regmap); + } + + comms_config(ccdev); + return ccdev; +} diff --git a/drivers/fpga/amd/vmgmt-comms.h b/drivers/fpga/amd/vmgmt-comms.h new file mode 100644 index 000000000000..0afb14c8bd32 --- /dev/null +++ b/drivers/fpga/amd/vmgmt-comms.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Driver for Versal PCIe device + * + * Copyright (C) 2024 Advanced Micro Devices, Inc. All rights reserved. + */ + +#ifndef __VMGMT_COMMS_H +#define __VMGMT_COMMS_H + +struct comms_device *vmgmtm_comms_init(struct vmgmt_device *vdev); +void vmgmtm_comms_fini(struct comms_device *ccdev); + +#endif /* __VMGMT_COMMS_H */ diff --git a/drivers/fpga/amd/vmgmt.c b/drivers/fpga/amd/vmgmt.c new file mode 100644 index 000000000000..b72eff9e8bc0 --- /dev/null +++ b/drivers/fpga/amd/vmgmt.c @@ -0,0 +1,395 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Driver for Versal PCIe device + * + * Copyright (C) 2024 Advanced Micro Devices, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "vmgmt.h" +#include "vmgmt-comms.h" + +#define DRV_NAME "amd-vmgmt" +#define CLASS_NAME DRV_NAME + +#define PCI_DEVICE_ID_V70PQ2 0x50B0 +#define VERSAL_XCLBIN_MAGIC_ID "xclbin2" + +static DEFINE_IDA(vmgmt_dev_minor_ida); +static dev_t vmgmt_devnode; +struct class *vmgmt_class; +static struct fpga_bridge_ops vmgmt_br_ops; + +struct vmgmt_fpga_region { + struct fpga_device *fdev; + uuid_t *uuid; +}; + +static inline struct vmgmt_device *vmgmt_inode_to_vdev(struct inode *inode) +{ + return (struct vmgmt_device *)container_of(inode->i_cdev, struct vmgmt_device, cdev); +} + +static enum fpga_mgr_states vmgmt_fpga_state(struct fpga_manager *mgr) +{ + struct fpga_device *fdev = mgr->priv; + + return fdev->state; +} + +static const struct fpga_manager_ops vmgmt_fpga_ops = { + .state = vmgmt_fpga_state, +}; + +static int vmgmt_get_bridges(struct fpga_region *region) +{ + struct fpga_device *fdev = region->priv; + + return fpga_bridge_get_to_list(&fdev->vdev->pdev->dev, region->info, + ®ion->bridge_list); +} + +static void vmgmt_fpga_fini(struct fpga_device *fdev) +{ + fpga_region_unregister(fdev->region); + fpga_bridge_unregister(fdev->bridge); + fpga_mgr_unregister(fdev->mgr); +} + +static struct fpga_device *vmgmt_fpga_init(struct vmgmt_device *vdev) +{ + struct device *dev = &vdev->pdev->dev; + struct fpga_region_info region = { 0 }; + struct fpga_manager_info info = { 0 }; + struct fpga_device *fdev; + int ret; + + fdev = devm_kzalloc(dev, sizeof(*fdev), GFP_KERNEL); + if (!fdev) + return ERR_PTR(-ENOMEM); + + fdev->vdev = vdev; + + info = (struct fpga_manager_info) { + .name = "AMD Versal FPGA Manager", + .mops = &vmgmt_fpga_ops, + .priv = fdev, + }; + + fdev->mgr = fpga_mgr_register_full(dev, &info); + if (IS_ERR(fdev->mgr)) { + ret = PTR_ERR(fdev->mgr); + vmgmt_err(vdev, "Failed to register FPGA manager, err %d", ret); + 
return ERR_PTR(ret);
+	}
+
+	/* create fpga bridge, region for the base shell */
+	fdev->bridge = fpga_bridge_register(dev, "AMD Versal FPGA Bridge",
+					    &vmgmt_br_ops, fdev);
+	if (IS_ERR(fdev->bridge)) {
+		vmgmt_err(vdev, "Failed to register FPGA bridge, err %ld",
+			  PTR_ERR(fdev->bridge));
+		ret = PTR_ERR(fdev->bridge);
+		goto unregister_fpga_mgr;
+	}
+
+	region = (struct fpga_region_info) {
+		.compat_id = (struct fpga_compat_id *)&vdev->intf_uuid,
+		.get_bridges = vmgmt_get_bridges,
+		.mgr = fdev->mgr,
+		.priv = fdev,
+	};
+
+	fdev->region = fpga_region_register_full(dev, &region);
+	if (IS_ERR(fdev->region)) {
+		vmgmt_err(vdev, "Failed to register FPGA region, err %ld",
+			  PTR_ERR(fdev->region));
+		ret = PTR_ERR(fdev->region);
+		goto unregister_fpga_bridge;
+	}
+
+	return fdev;
+
+unregister_fpga_bridge:
+	fpga_bridge_unregister(fdev->bridge);
+
+unregister_fpga_mgr:
+	fpga_mgr_unregister(fdev->mgr);
+
+	return ERR_PTR(ret);
+}
+
+static int vmgmt_open(struct inode *inode, struct file *filep)
+{
+	struct vmgmt_device *vdev = vmgmt_inode_to_vdev(inode);
+
+	if (WARN_ON(!vdev))
+		return -ENODEV;
+
+	filep->private_data = vdev;
+
+	return 0;
+}
+
+static int vmgmt_release(struct inode *inode, struct file *filep)
+{
+	filep->private_data = NULL;
+
+	return 0;
+}
+
+static const struct file_operations vmgmt_fops = {
+	.owner = THIS_MODULE,
+	.open = vmgmt_open,
+	.release = vmgmt_release,
+};
+
+static void vmgmt_chrdev_destroy(struct vmgmt_device *vdev)
+{
+	device_destroy(vmgmt_class, vdev->cdev.dev);
+	cdev_del(&vdev->cdev);
+	ida_free(&vmgmt_dev_minor_ida, vdev->minor);
+}
+
+static int vmgmt_chrdev_create(struct vmgmt_device *vdev)
+{
+	u32 devid;
+	int ret;
+
+	vdev->minor = ida_alloc(&vmgmt_dev_minor_ida, GFP_KERNEL);
+	if (vdev->minor < 0) {
+		vmgmt_err(vdev, "Failed to allocate chrdev ID");
+		return -ENODEV;
+	}
+
+	cdev_init(&vdev->cdev, &vmgmt_fops);
+
+	vdev->cdev.owner = THIS_MODULE;
+	vdev->cdev.dev = MKDEV(MAJOR(vmgmt_devnode), vdev->minor);
+	ret =
cdev_add(&vdev->cdev, vdev->cdev.dev, 1); + if (ret) { + vmgmt_err(vdev, "Failed to add char device: %d\n", ret); + ida_free(&vmgmt_dev_minor_ida, vdev->minor); + return -ENODEV; + } + + devid = PCI_DEVID(vdev->pdev->bus->number, vdev->pdev->devfn); + vdev->device = device_create(vmgmt_class, &vdev->pdev->dev, + vdev->cdev.dev, NULL, "%s%x", DRV_NAME, + devid); + if (IS_ERR(vdev->device)) { + vmgmt_err(vdev, "Failed to create device: %ld\n", + PTR_ERR(vdev->device)); + cdev_del(&vdev->cdev); + ida_free(&vmgmt_dev_minor_ida, vdev->minor); + return -ENODEV; + } + + return 0; +} + +static void vmgmt_fw_cancel(struct fw_upload *fw_upload) +{ + struct firmware_device *fwdev = fw_upload->dd_handle; + + vmgmt_warn(fwdev->vdev, "canceled"); +} + +static const struct fw_upload_ops vmgmt_fw_ops = { + .cancel = vmgmt_fw_cancel, +}; + +static void vmgmt_fw_upload_fini(struct firmware_device *fwdev) +{ + firmware_upload_unregister(fwdev->fw); + kfree(fwdev->name); +} + +static struct firmware_device *vmgmt_fw_upload_init(struct vmgmt_device *vdev) +{ + struct device *dev = &vdev->pdev->dev; + struct firmware_device *fwdev; + u32 devid; + + fwdev = devm_kzalloc(dev, sizeof(*fwdev), GFP_KERNEL); + if (!fwdev) + return ERR_PTR(-ENOMEM); + + devid = PCI_DEVID(vdev->pdev->bus->number, vdev->pdev->devfn); + fwdev->name = kasprintf(GFP_KERNEL, "%s%x", DRV_NAME, devid); + if (!fwdev->name) + return ERR_PTR(-ENOMEM); + + fwdev->fw = firmware_upload_register(THIS_MODULE, dev, fwdev->name, + &vmgmt_fw_ops, fwdev); + if (IS_ERR(fwdev->fw)) { + kfree(fwdev->name); + return ERR_CAST(fwdev->fw); + } + + fwdev->vdev = vdev; + + return fwdev; +} + +static void vmgmt_device_teardown(struct vmgmt_device *vdev) +{ + vmgmt_fpga_fini(vdev->fdev); + vmgmt_fw_upload_fini(vdev->fwdev); + vmgmtm_comms_fini(vdev->ccdev); +} + +static int vmgmt_device_setup(struct vmgmt_device *vdev) +{ + int ret; + + vdev->fwdev = vmgmt_fw_upload_init(vdev); + if (IS_ERR(vdev->fwdev)) { + ret = PTR_ERR(vdev->fwdev); + 
vmgmt_err(vdev, "Failed to init FW uploader, err %d", ret);
+		goto done;
+	}
+
+	vdev->ccdev = vmgmtm_comms_init(vdev);
+	if (IS_ERR(vdev->ccdev)) {
+		ret = PTR_ERR(vdev->ccdev);
+		vmgmt_err(vdev, "Failed to init comms channel, err %d", ret);
+		goto upload_fini;
+	}
+
+	vdev->fdev = vmgmt_fpga_init(vdev);
+	if (IS_ERR(vdev->fdev)) {
+		ret = PTR_ERR(vdev->fdev);
+		vmgmt_err(vdev, "Failed to init FPGA manager, err %d", ret);
+		goto comms_fini;
+	}
+
+	return 0;
+
+comms_fini:
+	vmgmtm_comms_fini(vdev->ccdev);
+upload_fini:
+	vmgmt_fw_upload_fini(vdev->fwdev);
+done:
+	return ret;
+}
+
+static void vmgmt_remove(struct pci_dev *pdev)
+{
+	struct vmgmt_device *vdev = pci_get_drvdata(pdev);
+
+	vmgmt_chrdev_destroy(vdev);
+	vmgmt_device_teardown(vdev);
+}
+
+static int vmgmt_probe(struct pci_dev *pdev,
+		       const struct pci_device_id *pdev_id)
+{
+	struct vmgmt_device *vdev;
+	int ret;
+
+	vdev = devm_kzalloc(&pdev->dev, sizeof(*vdev), GFP_KERNEL);
+	if (!vdev)
+		return -ENOMEM;
+
+	pci_set_drvdata(pdev, vdev);
+	vdev->pdev = pdev;
+
+	ret = pcim_enable_device(pdev);
+	if (ret) {
+		vmgmt_err(vdev, "Failed to enable device %d", ret);
+		return ret;
+	}
+
+	ret = pcim_iomap_regions(vdev->pdev, AMD_VMGMT_BAR_MASK, "amd-vmgmt");
+	if (ret) {
+		vmgmt_err(vdev, "Failed to iomap regions %d", ret);
+		return -ENOMEM;
+	}
+
+	vdev->tbl = pcim_iomap_table(vdev->pdev)[AMD_VMGMT_BAR];
+	if (!vdev->tbl) {
+		vmgmt_err(vdev, "Failed to map RM shared memory BAR%d", AMD_VMGMT_BAR);
+		return -ENOMEM;
+	}
+
+	ret = vmgmt_device_setup(vdev);
+	if (ret) {
+		vmgmt_err(vdev, "Failed to setup Versal device %d", ret);
+		return ret;
+	}
+
+	ret = vmgmt_chrdev_create(vdev);
+	if (ret) {
+		vmgmt_device_teardown(vdev);
+		return ret;
+	}
+
+	vmgmt_dbg(vdev, "Successfully probed %s driver!", DRV_NAME);
+	return 0;
+}
+
+static const struct pci_device_id vmgmt_pci_ids[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_V70PQ2), },
+	{ 0 }
+};
+
+MODULE_DEVICE_TABLE(pci, vmgmt_pci_ids);
+
+static
struct pci_driver amd_vmgmt_driver = { + .name = DRV_NAME, + .id_table = vmgmt_pci_ids, + .probe = vmgmt_probe, + .remove = vmgmt_remove, +}; + +static int amd_vmgmt_init(void) +{ + int ret; + + vmgmt_class = class_create(CLASS_NAME); + if (IS_ERR(vmgmt_class)) + return PTR_ERR(vmgmt_class); + + ret = alloc_chrdev_region(&vmgmt_devnode, 0, MINORMASK, DRV_NAME); + if (ret) + goto chr_err; + + ret = pci_register_driver(&amd_vmgmt_driver); + if (ret) + goto pci_err; + + return 0; + +pci_err: + unregister_chrdev_region(vmgmt_devnode, MINORMASK); +chr_err: + class_destroy(vmgmt_class); + return ret; +} + +static void amd_vmgmt_exit(void) +{ + pci_unregister_driver(&amd_vmgmt_driver); + unregister_chrdev_region(vmgmt_devnode, MINORMASK); + class_destroy(vmgmt_class); +} + +module_init(amd_vmgmt_init); +module_exit(amd_vmgmt_exit); + +MODULE_DESCRIPTION("AMD PCIe Versal Management Driver"); +MODULE_AUTHOR("XRT Team "); +MODULE_LICENSE("GPL"); diff --git a/drivers/fpga/amd/vmgmt.h b/drivers/fpga/amd/vmgmt.h new file mode 100644 index 000000000000..4dc8a43f825e --- /dev/null +++ b/drivers/fpga/amd/vmgmt.h @@ -0,0 +1,100 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Driver for Versal PCIe device + * + * Copyright (C) 2024 Advanced Micro Devices, Inc. All rights reserved. + */ + +#ifndef __VMGMT_H +#define __VMGMT_H + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define AMD_VMGMT_BAR 0 +#define AMD_VMGMT_BAR_MASK BIT(0) + +#define vmgmt_info(vdev, fmt, args...) \ + dev_info(&(vdev)->pdev->dev, "%s: "fmt, __func__, ##args) + +#define vmgmt_warn(vdev, fmt, args...) \ + dev_warn(&(vdev)->pdev->dev, "%s: "fmt, __func__, ##args) + +#define vmgmt_err(vdev, fmt, args...) \ + dev_err(&(vdev)->pdev->dev, "%s: "fmt, __func__, ##args) + +#define vmgmt_dbg(vdev, fmt, args...) 
\ + dev_dbg(&(vdev)->pdev->dev, fmt, ##args) + +struct vmgmt_device; +struct comms_device; +struct rm_cmd; + +struct axlf_header { + u64 length; + unsigned char reserved1[24]; + uuid_t rom_uuid; + unsigned char reserved2[64]; + uuid_t uuid; + unsigned char reserved3[24]; +} __packed; + +struct axlf { + char magic[8]; + unsigned char reserved[296]; + struct axlf_header header; +} __packed; + +struct fw_tnx { + struct rm_cmd *cmd; + int opcode; + int id; +}; + +struct fpga_device { + enum fpga_mgr_states state; + struct fpga_manager *mgr; + struct fpga_bridge *bridge; + struct fpga_region *region; + struct vmgmt_device *vdev; + struct fw_tnx fw; +}; + +struct firmware_device { + struct vmgmt_device *vdev; + struct fw_upload *fw; + char *name; + u32 fw_name_id; + struct rm_cmd *cmd; + int id; + uuid_t uuid; +}; + +struct vmgmt_device { + struct pci_dev *pdev; + + struct rm_device *rdev; + struct comms_device *ccdev; + struct fpga_device *fdev; + struct firmware_device *fwdev; + struct cdev cdev; + struct device *device; + + int minor; + void __iomem *tbl; + uuid_t xclbin_uuid; + uuid_t intf_uuid; + + void *debugfs_root; +}; + +#endif /* __VMGMT_H */ diff --git a/include/uapi/linux/vmgmt.h b/include/uapi/linux/vmgmt.h new file mode 100644 index 000000000000..2269ceb5c131 --- /dev/null +++ b/include/uapi/linux/vmgmt.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* + * Header file for Versal PCIe device user API + * + * Copyright (C) 2024 AMD Corporation, Inc. + */ + +#ifndef _UAPI_LINUX_VMGMT_H +#define _UAPI_LINUX_VMGMT_H + +#include + +#define VERSAL_MGMT_MAGIC 0xB7 +#define VERSAL_MGMT_BASE 0 + +/** + * VERSAL_MGMT_LOAD_XCLBIN_IOCTL - Download XCLBIN to the device + * + * This IOCTL is used to download XCLBIN down to the device. + * Return: 0 on success, -errno on failure. 
+ */
+#define VERSAL_MGMT_LOAD_XCLBIN_IOCTL _IOW(VERSAL_MGMT_MAGIC, \
+					    VERSAL_MGMT_BASE + 0, void *)
+
+#endif /* _UAPI_LINUX_VMGMT_H */

From patchwork Mon Oct 7 22:01:27 2024
X-Patchwork-Submitter: "Zhang, Yidong (David)"
X-Patchwork-Id: 13825317
From: David Zhang
CC: Yidong Zhang, Nishad Saraf, Prapul Krishnamurthy
Subject: [PATCH V1 2/3] drivers/fpga/amd: Add communication with firmware
Date: Mon, 7 Oct 2024 15:01:27 -0700
Message-ID: <20241007220128.3023169-2-yidong.zhang@amd.com>
In-Reply-To: <20241007220128.3023169-1-yidong.zhang@amd.com>
References: <20241007220128.3023169-1-yidong.zhang@amd.com>
X-Mailing-List: linux-fpga@vger.kernel.org

From: Yidong Zhang

Add queue based communication between the host driver and the firmware
on the card. The remote queue (rm) can send and receive messages and
provides firmware download services.
Co-developed-by: Nishad Saraf Signed-off-by: Nishad Saraf Co-developed-by: Prapul Krishnamurthy Signed-off-by: Prapul Krishnamurthy Signed-off-by: Yidong Zhang --- drivers/fpga/amd/Makefile | 6 +- drivers/fpga/amd/vmgmt-rm-queue.c | 38 +++ drivers/fpga/amd/vmgmt-rm-queue.h | 15 + drivers/fpga/amd/vmgmt-rm.c | 543 ++++++++++++++++++++++++++++++ drivers/fpga/amd/vmgmt-rm.h | 222 ++++++++++++ drivers/fpga/amd/vmgmt.c | 305 ++++++++++++++++- drivers/fpga/amd/vmgmt.h | 7 +- 7 files changed, 1130 insertions(+), 6 deletions(-) create mode 100644 drivers/fpga/amd/vmgmt-rm-queue.c create mode 100644 drivers/fpga/amd/vmgmt-rm-queue.h create mode 100644 drivers/fpga/amd/vmgmt-rm.c create mode 100644 drivers/fpga/amd/vmgmt-rm.h diff --git a/drivers/fpga/amd/Makefile b/drivers/fpga/amd/Makefile index 3e4c6dd3b787..97cfff6be204 100644 --- a/drivers/fpga/amd/Makefile +++ b/drivers/fpga/amd/Makefile @@ -2,5 +2,7 @@ obj-$(CONFIG_AMD_VERSAL_MGMT) += amd-vmgmt.o -amd-vmgmt-$(CONFIG_AMD_VERSAL_MGMT) := vmgmt.o \ - vmgmt-comms.o +amd-vmgmt-$(CONFIG_AMD_VERSAL_MGMT) := vmgmt.o \ + vmgmt-comms.o \ + vmgmt-rm.o \ + vmgmt-rm-queue.o diff --git a/drivers/fpga/amd/vmgmt-rm-queue.c b/drivers/fpga/amd/vmgmt-rm-queue.c new file mode 100644 index 000000000000..fe805373ea32 --- /dev/null +++ b/drivers/fpga/amd/vmgmt-rm-queue.c @@ -0,0 +1,38 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Driver for Versal PCIe device + * + * Copyright (C) 2024 Advanced Micro Devices, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "vmgmt.h" +#include "vmgmt-rm.h" +#include "vmgmt-rm-queue.h" + +int rm_queue_send_cmd(struct rm_cmd *cmd, unsigned long timeout) +{ + return 0; +} + +void rm_queue_fini(struct rm_device *rdev) +{ +} + +int rm_queue_init(struct rm_device *rdev) +{ + return 0; +} diff --git a/drivers/fpga/amd/vmgmt-rm-queue.h b/drivers/fpga/amd/vmgmt-rm-queue.h new file mode 100644 index 000000000000..6fd0e0026a13 --- /dev/null +++ b/drivers/fpga/amd/vmgmt-rm-queue.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Driver for Versal PCIe device + * + * Copyright (C) 2024 Advanced Micro Devices, Inc. All rights reserved. + */ + +#ifndef __VMGMT_RM_QUEUE_H +#define __VMGMT_RM_QUEUE_H + +int rm_queue_init(struct rm_device *rdev); +void rm_queue_fini(struct rm_device *rdev); +int rm_queue_send_cmd(struct rm_cmd *cmd, unsigned long timeout); + +#endif /* __VMGMT_RM_QUEUE_H */ diff --git a/drivers/fpga/amd/vmgmt-rm.c b/drivers/fpga/amd/vmgmt-rm.c new file mode 100644 index 000000000000..856d5af52c8d --- /dev/null +++ b/drivers/fpga/amd/vmgmt-rm.c @@ -0,0 +1,543 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Driver for Versal PCIe device + * + * Copyright (C) 2024 Advanced Micro Devices, Inc. All rights reserved. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "vmgmt.h" +#include "vmgmt-rm.h" +#include "vmgmt-rm-queue.h" + +static DEFINE_IDA(rm_cmd_ids); + +static const struct regmap_config rm_shmem_regmap_config = { + .name = "rm_shmem_config", + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, +}; + +static const struct regmap_config rm_io_regmap_config = { + .name = "rm_io_config", + .reg_bits = 32, + .reg_stride = 4, + .val_bits = 32, +}; + +static void rm_uninstall_health_monitor(struct rm_device *rdev); + +static inline struct rm_device *to_rdev_health_monitor(struct work_struct *w) +{ + return container_of(w, struct rm_device, health_monitor); +} + +static inline struct rm_device *to_rdev_health_timer(struct timer_list *t) +{ + return container_of(t, struct rm_device, health_timer); +} + +static inline int rm_shmem_read(struct rm_device *rdev, u32 offset, u32 *value) +{ + return regmap_read(rdev->shmem_regmap, offset, value); +} + +static inline int rm_shmem_bulk_read(struct rm_device *rdev, u32 offset, + u32 *value, u32 size) +{ + return regmap_bulk_read(rdev->shmem_regmap, offset, value, + DIV_ROUND_UP(size, 4)); +} + +static inline int rm_shmem_bulk_write(struct rm_device *rdev, u32 offset, + u32 *value, u32 size) +{ + return regmap_bulk_write(rdev->shmem_regmap, offset, value, + DIV_ROUND_UP(size, 4)); +} + +void rm_queue_destory_cmd(struct rm_cmd *cmd) +{ + ida_free(&rm_cmd_ids, cmd->sq_msg.hdr.id); + kfree(cmd); +} + +int rm_queue_copy_response(struct rm_cmd *cmd, void *buffer, ssize_t len) +{ + struct rm_cmd_cq_log_page *result = &cmd->cq_msg.data.page; + u64 off = cmd->sq_msg.data.page.address; + + if (!result->len || len < result->len) { + vmgmt_err(cmd->rdev->vdev, "Invalid response or buffer size"); + return -EINVAL; + } + + return rm_shmem_bulk_read(cmd->rdev, off, (u32 *)buffer, result->len); +} + +static void rm_queue_payload_fini(struct rm_cmd *cmd) 
+{ + up(&cmd->rdev->cq.data_lock); +} + +static int rm_queue_payload_init(struct rm_cmd *cmd, + enum rm_cmd_log_page_type type) +{ + struct rm_device *rdev = cmd->rdev; + int ret; + + ret = down_interruptible(&rdev->cq.data_lock); + if (ret) + return ret; + + cmd->sq_msg.data.page.address = rdev->cq.data_offset; + cmd->sq_msg.data.page.size = rdev->cq.data_size; + cmd->sq_msg.data.page.reserved1 = 0; + cmd->sq_msg.data.page.type = FIELD_PREP(RM_CMD_LOG_PAGE_TYPE_MASK, + type); + return 0; +} + +void rm_queue_data_fini(struct rm_cmd *cmd) +{ + up(&cmd->rdev->sq.data_lock); +} + +int rm_queue_data_init(struct rm_cmd *cmd, const char *buffer, ssize_t size) +{ + struct rm_device *rdev = cmd->rdev; + int ret; + + if (!size || size > rdev->sq.data_size) { + vmgmt_err(rdev->vdev, "Unsupported file size"); + return -ENOMEM; + } + + ret = down_interruptible(&rdev->sq.data_lock); + if (ret) + return ret; + + ret = rm_shmem_bulk_write(cmd->rdev, rdev->sq.data_offset, + (u32 *)buffer, size); + if (ret) { + vmgmt_err(rdev->vdev, "Failed to copy binary to SQ buffer"); + up(&cmd->rdev->sq.data_lock); + return ret; + } + + cmd->sq_msg.data.bin.address = rdev->sq.data_offset; + cmd->sq_msg.data.bin.size = size; + return 0; +} + +int rm_queue_create_cmd(struct rm_device *rdev, enum rm_queue_opcode opcode, + struct rm_cmd **cmd_ptr) +{ + struct rm_cmd *cmd = NULL; + int ret, id; + u16 size; + + if (rdev->firewall_tripped) + return -ENODEV; + + cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); + if (!cmd) + return -ENOMEM; + cmd->rdev = rdev; + + switch (opcode) { + case RM_QUEUE_OP_LOAD_XCLBIN: + fallthrough; + case RM_QUEUE_OP_LOAD_FW: + fallthrough; + case RM_QUEUE_OP_LOAD_APU_FW: + size = sizeof(struct rm_cmd_sq_bin); + break; + case RM_QUEUE_OP_GET_LOG_PAGE: + size = sizeof(struct rm_cmd_sq_log_page); + break; + case RM_QUEUE_OP_IDENTIFY: + size = 0; + break; + case RM_QUEUE_OP_VMR_CONTROL: + size = sizeof(struct rm_cmd_sq_ctrl); + break; + default: + vmgmt_err(rdev->vdev, "Invalid cmd 
opcode %d", opcode); + ret = -EINVAL; + goto error; + }; + + cmd->opcode = opcode; + cmd->sq_msg.hdr.opcode = FIELD_PREP(RM_CMD_SQ_HDR_OPS_MSK, opcode); + cmd->sq_msg.hdr.msg_size = FIELD_PREP(RM_CMD_SQ_HDR_SIZE_MSK, size); + + id = ida_alloc_range(&rm_cmd_ids, RM_CMD_ID_MIN, RM_CMD_ID_MAX, GFP_KERNEL); + if (id < 0) { + vmgmt_err(rdev->vdev, "Failed to alloc cmd ID: %d", id); + ret = id; + goto error; + } + cmd->sq_msg.hdr.id = id; + + init_completion(&cmd->executed); + + *cmd_ptr = cmd; + return 0; +error: + kfree(cmd); + return ret; +} + +static int rm_queue_verify(struct rm_device *rdev) +{ + struct vmgmt_device *vdev = rdev->vdev; + struct rm_cmd_cq_identify *result; + struct rm_cmd *cmd; + u32 major, minor; + int ret; + + ret = rm_queue_create_cmd(rdev, RM_QUEUE_OP_IDENTIFY, &cmd); + if (ret) + return ret; + + ret = rm_queue_send_cmd(cmd, RM_CMD_WAIT_CONFIG_TIMEOUT); + if (ret) + goto error; + + result = &cmd->cq_msg.data.identify; + major = result->major; + minor = result->minor; + vmgmt_dbg(vdev, "VMR version %d.%d", major, minor); + if (!major) { + vmgmt_err(vdev, "VMR version is unsupported"); + ret = -EOPNOTSUPP; + } + +error: + rm_queue_destory_cmd(cmd); + return ret; +} + +static int rm_check_apu_status(struct rm_device *rdev, bool *status) +{ + struct rm_cmd_cq_control *result; + struct rm_cmd *cmd; + int ret; + + ret = rm_queue_create_cmd(rdev, RM_QUEUE_OP_VMR_CONTROL, &cmd); + if (ret) + return ret; + + ret = rm_queue_send_cmd(cmd, RM_CMD_WAIT_CONFIG_TIMEOUT); + if (ret) + goto error; + + result = &cmd->cq_msg.data.ctrl; + *status = FIELD_GET(RM_CMD_VMR_CONTROL_PS_MASK, result->status); + + rm_queue_destory_cmd(cmd); + return 0; + +error: + rm_queue_destory_cmd(cmd); + return ret; +} + +static int rm_download_apu_fw(struct rm_device *rdev, char *data, ssize_t size) +{ + struct rm_cmd *cmd; + int ret; + + ret = rm_queue_create_cmd(rdev, RM_QUEUE_OP_LOAD_APU_FW, &cmd); + if (ret) + return ret; + + ret = rm_queue_data_init(cmd, data, size); + if (ret) 
+ goto done; + + ret = rm_queue_send_cmd(cmd, RM_CMD_WAIT_DOWNLOAD_TIMEOUT); + +done: + rm_queue_destory_cmd(cmd); + return ret; +} + +int rm_boot_apu(struct rm_device *rdev) +{ + char *bin = "xilinx/xrt-versal-apu.xsabin"; + const struct firmware *fw = NULL; + bool status; + int ret; + + ret = rm_check_apu_status(rdev, &status); + if (ret) { + vmgmt_err(rdev->vdev, "Failed to get APU status"); + return ret; + } + + if (status) { + vmgmt_dbg(rdev->vdev, "APU online. Skipping APU FW download"); + return 0; + } + + ret = request_firmware(&fw, bin, &rdev->vdev->pdev->dev); + if (ret) { + vmgmt_warn(rdev->vdev, "Request APU FW %s failed %d", bin, ret); + return ret; + } + + vmgmt_dbg(rdev->vdev, "Starting... APU FW download"); + ret = rm_download_apu_fw(rdev, (char *)fw->data, fw->size); + vmgmt_dbg(rdev->vdev, "Finished... APU FW download %d", ret); + + if (ret) + vmgmt_err(rdev->vdev, "Failed to download APU FW, ret:%d", ret); + + release_firmware(fw); + + return ret; +} + +static void rm_check_health(struct work_struct *w) +{ + struct rm_device *rdev = to_rdev_health_monitor(w); + ssize_t len = PAGE_SIZE; + char *buffer = NULL; + struct rm_cmd *cmd; + int ret; + + buffer = vzalloc(len); + if (!buffer) + return; + + ret = rm_queue_create_cmd(rdev, RM_QUEUE_OP_GET_LOG_PAGE, &cmd); + if (ret) + return; + + ret = rm_queue_payload_init(cmd, RM_CMD_LOG_PAGE_AXI_TRIP_STATUS); + if (ret) + goto error; + + ret = rm_queue_send_cmd(cmd, RM_CMD_WAIT_CONFIG_TIMEOUT); + if (ret == -ETIME || ret == -EINVAL) + goto payload_fini; + + if (cmd->cq_msg.data.page.len) { + ret = rm_queue_copy_response(cmd, buffer, len); + if (ret) + goto payload_fini; + + vmgmt_err(rdev->vdev, "%s", buffer); + rdev->firewall_tripped = 1; + } + + vfree(buffer); + + rm_queue_payload_fini(cmd); + rm_queue_destory_cmd(cmd); + + return; + +payload_fini: + rm_queue_payload_fini(cmd); +error: + rm_queue_destory_cmd(cmd); + vfree(buffer); +} + +static void rm_sched_health_check(struct timer_list *t) +{ + struct 
rm_device *rdev = to_rdev_health_timer(t); + + if (rdev->firewall_tripped) { + vmgmt_err(rdev->vdev, "Firewall tripped, health check paused. Please reset card"); + return; + } + /* Schedule a work in the general workqueue */ + schedule_work(&rdev->health_monitor); + /* Periodic timer */ + mod_timer(&rdev->health_timer, jiffies + RM_HEALTH_CHECK_TIMER); +} + +static void rm_uninstall_health_monitor(struct rm_device *rdev) +{ + del_timer_sync(&rdev->health_timer); + cancel_work_sync(&rdev->health_monitor); +} + +static void rm_install_health_monitor(struct rm_device *rdev) +{ + INIT_WORK(&rdev->health_monitor, &rm_check_health); + timer_setup(&rdev->health_timer, &rm_sched_health_check, 0); + mod_timer(&rdev->health_timer, jiffies + RM_HEALTH_CHECK_TIMER); +} + +void vmgmt_rm_fini(struct rm_device *rdev) +{ + rm_uninstall_health_monitor(rdev); + rm_queue_fini(rdev); +} + +struct rm_device *vmgmt_rm_init(struct vmgmt_device *vdev) +{ + struct rm_header *header; + struct rm_device *rdev; + u32 status; + int ret; + + rdev = devm_kzalloc(&vdev->pdev->dev, sizeof(*rdev), GFP_KERNEL); + if (!rdev) + return ERR_PTR(-ENOMEM); + + rdev->vdev = vdev; + header = &rdev->rm_metadata; + + rdev->shmem_regmap = devm_regmap_init_mmio(&vdev->pdev->dev, + vdev->tbl + RM_PCI_SHMEM_BAR_OFF, + &rm_shmem_regmap_config); + if (IS_ERR(rdev->shmem_regmap)) { + vmgmt_err(vdev, "Failed to init RM shared memory regmap"); + return ERR_CAST(rdev->shmem_regmap); + } + + ret = rm_shmem_bulk_read(rdev, RM_HDR_OFF, (u32 *)header, + sizeof(*header)); + if (ret) { + vmgmt_err(vdev, "Failed to read RM shared mem, ret %d", ret); + ret = -ENODEV; + goto err; + } + + if (header->magic != RM_HDR_MAGIC_NUM) { + vmgmt_err(vdev, "Invalid RM header 0x%x", header->magic); + ret = -ENODEV; + goto err; + } + + ret = rm_shmem_read(rdev, header->status_off, &status); + if (ret) { + vmgmt_err(vdev, "Failed to read RM shared mem, ret %d", ret); + ret = -ENODEV; + goto err; + } + + if (!status) { + vmgmt_err(vdev, "RM 
status %d is not ready", status); + ret = -ENODEV; + goto err; + } + + rdev->queue_buffer_size = header->data_end - header->data_start + 1; + rdev->queue_buffer_start = header->data_start; + rdev->queue_base = header->queue_base; + + rdev->io_regmap = devm_regmap_init_mmio(&vdev->pdev->dev, + vdev->tbl + RM_PCI_IO_BAR_OFF, + &rm_io_regmap_config); + if (IS_ERR(rdev->io_regmap)) { + vmgmt_err(vdev, "Failed to init RM IO regmap"); + ret = PTR_ERR(rdev->io_regmap); + goto err; + } + + ret = rm_queue_init(rdev); + if (ret) { + vmgmt_err(vdev, "Failed to init cmd queue, ret %d", ret); + ret = -ENODEV; + goto err; + } + + ret = rm_queue_verify(rdev); + if (ret) { + vmgmt_err(vdev, "Failed to verify cmd queue, ret %d", ret); + ret = -ENODEV; + goto queue_fini; + } + + ret = rm_boot_apu(rdev); + if (ret) { + vmgmt_err(vdev, "Failed to bringup APU, ret %d", ret); + ret = -ENODEV; + goto queue_fini; + } + + rm_install_health_monitor(rdev); + + return rdev; +queue_fini: + rm_queue_fini(rdev); +err: + return ERR_PTR(ret); +} + +int vmgmt_rm_get_fw_id(struct rm_device *rdev, uuid_t *uuid) +{ + char str[UUID_STRING_LEN]; + ssize_t len = PAGE_SIZE; + char *buffer = NULL; + struct rm_cmd *cmd; + u8 i, j; + int ret; + + buffer = vmalloc(len); + if (!buffer) + return -ENOMEM; + + memset(buffer, 0, len); + + ret = rm_queue_create_cmd(rdev, RM_QUEUE_OP_GET_LOG_PAGE, &cmd); + if (ret) + return ret; + + ret = rm_queue_payload_init(cmd, RM_CMD_LOG_PAGE_FW_ID); + if (ret) + goto error; + + ret = rm_queue_send_cmd(cmd, RM_CMD_WAIT_CONFIG_TIMEOUT); + if (ret) + goto payload; + + ret = rm_queue_copy_response(cmd, buffer, len); + if (ret) + goto payload; + + /* parse uuid into a valid uuid string format */ + for (i = 0, j = 0; i < strlen(buffer); i++) { + str[j++] = buffer[i]; + if (j == 8 || j == 13 || j == 18 || j == 23) + str[j++] = '-'; + } + + uuid_parse(str, uuid); + vmgmt_dbg(rdev->vdev, "Interface uuid %pU", uuid); + + vfree(buffer); + + rm_queue_payload_fini(cmd); + 
rm_queue_destory_cmd(cmd); + + return 0; + +payload: + rm_queue_payload_fini(cmd); +error: + rm_queue_destory_cmd(cmd); + vfree(buffer); + return ret; +} diff --git a/drivers/fpga/amd/vmgmt-rm.h b/drivers/fpga/amd/vmgmt-rm.h new file mode 100644 index 000000000000..a74f93cefbe8 --- /dev/null +++ b/drivers/fpga/amd/vmgmt-rm.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Driver for Versal PCIe device + * + * Copyright (C) 2024 Advanced Micro Devices, Inc. All rights reserved. + */ + +#ifndef __VMGMT_RM_H +#define __VMGMT_RM_H + +#define RM_HDR_OFF 0x0 +#define RM_HDR_MAGIC_NUM 0x564D5230 +#define RM_QUEUE_HDR_MAGIC_NUM 0x5847513F +#define RM_PCI_IO_BAR_OFF 0x2010000 +#define RM_PCI_IO_SIZE 0x1000 +#define RM_PCI_SHMEM_BAR_OFF 0x8000000 +#define RM_PCI_SHMEM_SIZE 0x8000000 /* 128 MB */ +#define RM_PCI_SHMEM_HDR_SIZE 0x28 + +#define RM_QUEUE_HDR_MAGIC_NUM_OFF 0x0 +#define RM_IO_SQ_PIDX_OFF 0x0 +#define RM_IO_CQ_PIDX_OFF 0x100 + +#define RM_CMD_ID_MIN 1 +#define RM_CMD_ID_MAX (BIT(17) - 1) +#define RM_CMD_SQ_HDR_OPS_MSK GENMASK(15, 0) +#define RM_CMD_SQ_HDR_SIZE_MSK GENMASK(14, 0) +#define RM_CMD_SQ_SLOT_SIZE 512 +#define RM_CMD_CQ_SLOT_SIZE 16 +#define RM_CMD_CQ_BUFFER_SIZE (1024 * 1024) +#define RM_CMD_CQ_BUFFER_OFFSET 0x0 +#define RM_CMD_LOG_PAGE_TYPE_MASK GENMASK(15, 0) +#define RM_CMD_VMR_CONTROL_MSK GENMASK(10, 8) +#define RM_CMD_VMR_CONTROL_PS_MASK BIT(9) + +#define RM_CMD_WAIT_CONFIG_TIMEOUT msecs_to_jiffies(10 * 1000) +#define RM_CMD_WAIT_DOWNLOAD_TIMEOUT msecs_to_jiffies(300 * 1000) + +#define RM_COMPLETION_TIMER (HZ / 10) +#define RM_HEALTH_CHECK_TIMER (HZ) + +#define RM_INVALID_SLOT 0 + +enum rm_queue_opcode { + RM_QUEUE_OP_LOAD_XCLBIN = 0x0, + RM_QUEUE_OP_GET_LOG_PAGE = 0x8, + RM_QUEUE_OP_LOAD_FW = 0xA, + RM_QUEUE_OP_LOAD_APU_FW = 0xD, + RM_QUEUE_OP_VMR_CONTROL = 0xE, + RM_QUEUE_OP_IDENTIFY = 0x202, +}; + +struct rm_cmd_sq_hdr { + u16 opcode; + u16 msg_size; + u16 id; + u16 reserved; +} __packed; + +struct rm_cmd_cq_hdr { + u16 id; + 
u16 reserved; +} __packed; + +struct rm_cmd_sq_bin { + u64 address; + u32 size; + u32 reserved1; + u32 reserved2; + u32 reserved3; + u64 reserved4; +} __packed; + +struct rm_cmd_sq_log_page { + u64 address; + u32 size; + u32 reserved1; + u32 type; + u32 reserved2; +} __packed; + +struct rm_cmd_sq_ctrl { + u32 status; +} __packed; + +struct rm_cmd_sq_data { + union { + struct rm_cmd_sq_log_page page; + struct rm_cmd_sq_bin bin; + struct rm_cmd_sq_ctrl ctrl; + }; +} __packed; + +struct rm_cmd_cq_identify { + u16 major; + u16 minor; + u32 reserved; +} __packed; + +struct rm_cmd_cq_log_page { + u32 len; + u32 reserved; +} __packed; + +struct rm_cmd_cq_control { + u16 status; + u16 reserved1; + u32 reserved2; +} __packed; + +struct rm_cmd_cq_data { + union { + struct rm_cmd_cq_identify identify; + struct rm_cmd_cq_log_page page; + struct rm_cmd_cq_control ctrl; + u32 reserved[2]; + }; + u32 rcode; +} __packed; + +struct rm_cmd_sq_msg { + struct rm_cmd_sq_hdr hdr; + struct rm_cmd_sq_data data; +} __packed; + +struct rm_cmd_cq_msg { + struct rm_cmd_cq_hdr hdr; + struct rm_cmd_cq_data data; +} __packed; + +struct rm_cmd { + struct rm_device *rdev; + struct list_head list; + struct completion executed; + struct rm_cmd_sq_msg sq_msg; + struct rm_cmd_cq_msg cq_msg; + enum rm_queue_opcode opcode; + void *buffer; + ssize_t size; +}; + +enum rm_queue_type { + RM_QUEUE_SQ, + RM_QUEUE_CQ +}; + +enum rm_cmd_log_page_type { + RM_CMD_LOG_PAGE_AXI_TRIP_STATUS = 0x0, + RM_CMD_LOG_PAGE_FW_ID = 0xA, +}; + +struct rm_queue { + enum rm_queue_type type; + u32 pidx; + u32 cidx; + u32 offset; + u32 data_offset; + u32 data_size; + struct semaphore data_lock; +}; + +struct rm_queue_header { + u32 magic; + u32 version; + u32 size; + u32 sq_off; + u32 sq_slot_size; + u32 cq_off; + u32 sq_cidx; + u32 cq_cidx; +}; + +struct rm_header { + u32 magic; + u32 queue_base; + u32 queue_size; + u32 status_off; + u32 status_len; + u32 log_index; + u32 log_off; + u32 log_size; + u32 data_start; + u32 
data_end; +}; + +struct rm_device { + struct vmgmt_device *vdev; + struct regmap *shmem_regmap; + struct regmap *io_regmap; + + struct rm_header rm_metadata; + u32 queue_buffer_start; + u32 queue_buffer_size; + u32 queue_base; + + /* Lock to queue access */ + struct mutex queue; + struct rm_queue sq; + struct rm_queue cq; + u32 queue_size; + + struct timer_list msg_timer; + struct work_struct msg_monitor; + struct timer_list health_timer; + struct work_struct health_monitor; + struct list_head submitted_cmds; + + int firewall_tripped; +}; + +int rm_queue_create_cmd(struct rm_device *rdev, enum rm_queue_opcode opcode, + struct rm_cmd **cmd_ptr); +void rm_queue_destory_cmd(struct rm_cmd *cmd); + +int rm_queue_data_init(struct rm_cmd *cmd, const char *buffer, ssize_t size); +void rm_queue_data_fini(struct rm_cmd *cmd); + +int rm_queue_copy_response(struct rm_cmd *cmd, void *buffer, ssize_t len); + +int rm_boot_apu(struct rm_device *rdev); + +#endif /* __VMGMT_RM_H */ diff --git a/drivers/fpga/amd/vmgmt.c b/drivers/fpga/amd/vmgmt.c index b72eff9e8bc0..198213a13c7d 100644 --- a/drivers/fpga/amd/vmgmt.c +++ b/drivers/fpga/amd/vmgmt.c @@ -21,6 +21,8 @@ #include "vmgmt.h" #include "vmgmt-comms.h" +#include "vmgmt-rm.h" +#include "vmgmt-rm-queue.h" #define DRV_NAME "amd-vmgmt" #define CLASS_NAME DRV_NAME @@ -43,6 +45,61 @@ static inline struct vmgmt_device *vmgmt_inode_to_vdev(struct inode *inode) return (struct vmgmt_device *)container_of(inode->i_cdev, struct vmgmt_device, cdev); } +static int vmgmt_fpga_write_init(struct fpga_manager *mgr, + struct fpga_image_info *info, const char *buf, + size_t count) +{ + struct fpga_device *fdev = mgr->priv; + struct fw_tnx *tnx = &fdev->fw; + int ret; + + ret = rm_queue_create_cmd(fdev->vdev->rdev, tnx->opcode, &tnx->cmd); + if (ret) { + fdev->state = FPGA_MGR_STATE_WRITE_INIT_ERR; + return ret; + } + + fdev->state = FPGA_MGR_STATE_WRITE_INIT; + return ret; +} + +static int vmgmt_fpga_write(struct fpga_manager *mgr, const char *buf, 
+ size_t count) +{ + struct fpga_device *fdev = mgr->priv; + int ret; + + ret = rm_queue_data_init(fdev->fw.cmd, buf, count); + if (ret) { + fdev->state = FPGA_MGR_STATE_WRITE_ERR; + rm_queue_destory_cmd(fdev->fw.cmd); + return ret; + } + + fdev->state = FPGA_MGR_STATE_WRITE; + return ret; +} + +static int vmgmt_fpga_write_complete(struct fpga_manager *mgr, + struct fpga_image_info *info) +{ + struct fpga_device *fdev = mgr->priv; + int ret; + + ret = rm_queue_send_cmd(fdev->fw.cmd, RM_CMD_WAIT_DOWNLOAD_TIMEOUT); + if (ret) { + fdev->state = FPGA_MGR_STATE_WRITE_COMPLETE_ERR; + vmgmt_err(fdev->vdev, "Send cmd failed:%d, cid:%d", ret, fdev->fw.id); + } else { + fdev->state = FPGA_MGR_STATE_WRITE_COMPLETE; + } + + rm_queue_data_fini(fdev->fw.cmd); + rm_queue_destory_cmd(fdev->fw.cmd); + memset(&fdev->fw, 0, sizeof(fdev->fw)); + return ret; +} + static enum fpga_mgr_states vmgmt_fpga_state(struct fpga_manager *mgr) { struct fpga_device *fdev = mgr->priv; @@ -51,6 +108,9 @@ static enum fpga_mgr_states vmgmt_fpga_state(struct fpga_manager *mgr) } static const struct fpga_manager_ops vmgmt_fpga_ops = { + .write_init = vmgmt_fpga_write_init, + .write = vmgmt_fpga_write, + .write_complete = vmgmt_fpga_write_complete, .state = vmgmt_fpga_state, }; @@ -96,6 +156,13 @@ static struct fpga_device *vmgmt_fpga_init(struct vmgmt_device *vdev) return ERR_PTR(ret); } + ret = vmgmt_rm_get_fw_id(vdev->rdev, &vdev->intf_uuid); + if (ret) { + vmgmt_warn(vdev, "Failed to get interface uuid"); + ret = -EINVAL; + goto unregister_fpga_mgr; + } + /* create fgpa bridge, region for the base shell */ fdev->bridge = fpga_bridge_register(dev, "AMD Versal FPGA Bridge", &vmgmt_br_ops, fdev); @@ -132,6 +199,149 @@ static struct fpga_device *vmgmt_fpga_init(struct vmgmt_device *vdev) return ERR_PTR(ret); } +static int vmgmt_region_program(struct fpga_region *region, const void *data) +{ + struct fpga_device *fdev = region->priv; + struct vmgmt_device *vdev = fdev->vdev; + const struct axlf *xclbin = 
data; + struct fpga_image_info *info; + int ret; + + info = fpga_image_info_alloc(&vdev->pdev->dev); + if (!info) + return -ENOMEM; + + region->info = info; + + info->flags |= FPGA_MGR_PARTIAL_RECONFIG; + info->count = xclbin->header.length; + info->buf = (char *)xclbin; + + ret = fpga_region_program_fpga(region); + if (ret) { + vmgmt_err(vdev, "Programming xclbin failed: %d", ret); + goto exit; + } + + /* free bridges to allow reprogram */ + if (region->get_bridges) + fpga_bridges_put(®ion->bridge_list); + +exit: + fpga_image_info_free(info); + return ret; +} + +static int vmgmt_fpga_region_match(struct device *dev, const void *data) +{ + const struct vmgmt_fpga_region *arg = data; + const struct fpga_region *match_region; + struct fpga_device *fdev = arg->fdev; + uuid_t compat_uuid; + + if (dev->parent != &fdev->vdev->pdev->dev) + return false; + + match_region = to_fpga_region(dev); + + import_uuid(&compat_uuid, (const char *)match_region->compat_id); + if (uuid_equal(&compat_uuid, arg->uuid)) { + vmgmt_dbg(fdev->vdev, "Region match found"); + return true; + } + + vmgmt_err(fdev->vdev, "download uuid %pUb is not the same as device uuid %pUb", + arg->uuid, &compat_uuid); + return false; +} + +static long vmgmt_ioctl(struct file *filep, unsigned int cmd, unsigned long arg) +{ + struct vmgmt_device *vdev = (struct vmgmt_device *)filep->private_data; + struct vmgmt_fpga_region reg = { 0 }; + struct fpga_region *region = NULL; + struct axlf *axlf = NULL; + void *data = NULL; + size_t size = 0; + int ret = 0; + + axlf = vmalloc(sizeof(*axlf)); + if (!axlf) + return -ENOMEM; + + ret = copy_from_user((void *)axlf, (void *)arg, sizeof(*axlf)); + if (ret) { + vmgmt_err(vdev, "Failed to copy axlf: %d", ret); + ret = -EFAULT; + goto exit; + } + + ret = memcmp(axlf->magic, VERSAL_XCLBIN_MAGIC_ID, + sizeof(VERSAL_XCLBIN_MAGIC_ID)); + if (ret) { + vmgmt_err(vdev, "unknown axlf magic %s", axlf->magic); + ret = -EINVAL; + goto exit; + } + + /* axlf should never be over 1G and 
less than the size of struct axlf */ + size = axlf->header.length; + if (size < sizeof(struct axlf) || size > 1024 * 1024 * 1024) { + vmgmt_err(vdev, "axlf length %zu is invalid", size); + ret = -EINVAL; + goto exit; + } + + data = vmalloc(size); + if (!data) { + ret = -ENOMEM; + goto exit; + } + + ret = copy_from_user(data, (void __user *)arg, size); + if (ret) { + vmgmt_err(vdev, "Failed to copy data: %d", ret); + ret = -EFAULT; + goto exit; + } + + switch (cmd) { + case VERSAL_MGMT_LOAD_XCLBIN_IOCTL: + vdev->fdev->fw.opcode = RM_QUEUE_OP_LOAD_XCLBIN; + break; + default: + vmgmt_err(vdev, "Invalid IOCTL command: %d", cmd); + ret = -EINVAL; + goto exit; + } + + reg.uuid = &axlf->header.rom_uuid; + reg.fdev = vdev->fdev; + + region = fpga_region_class_find(NULL, &reg, vmgmt_fpga_region_match); + if (!region) { + vmgmt_err(vdev, "Failed to find compatible region"); + ret = -ENOENT; + goto exit; + } + + ret = vmgmt_region_program(region, data); + if (ret) { + vmgmt_err(vdev, "Failed to program region"); + goto exit; + } + + vmgmt_info(vdev, "Downloaded axlf %pUb of size %zu bytes", + &axlf->header.uuid, size); + uuid_copy(&vdev->xclbin_uuid, &axlf->header.uuid); + +exit: + vfree(data); + vfree(axlf); + + return ret; +} + static int vmgmt_open(struct inode *inode, struct file *filep) { struct vmgmt_device *vdev = vmgmt_inode_to_vdev(inode); @@ -155,6 +365,7 @@ static const struct file_operations vmgmt_fops = { .owner = THIS_MODULE, .open = vmgmt_open, .release = vmgmt_release, + .unlocked_ioctl = vmgmt_ioctl, }; static void vmgmt_chrdev_destroy(struct vmgmt_device *vdev) @@ -201,6 +412,69 @@ static int vmgmt_chrdev_create(struct vmgmt_device *vdev) return 0; } +static enum fw_upload_err vmgmt_fw_prepare(struct fw_upload *fw_upload, + const u8 *data, u32 size) +{ + struct firmware_device *fwdev = fw_upload->dd_handle; + struct axlf *xsabin = (struct axlf *)data; + int ret; + + ret = memcmp(xsabin->magic, VERSAL_XCLBIN_MAGIC_ID, + sizeof(VERSAL_XCLBIN_MAGIC_ID)); + if (ret) { 
+ vmgmt_err(fwdev->vdev, "Invalid device firmware"); + return FW_UPLOAD_ERR_INVALID_SIZE; + } + + /* Firmware size should never be over 1G or less than the size of struct axlf */ + if (!size || size != xsabin->header.length || size < sizeof(*xsabin) || + size > 1024 * 1024 * 1024) { + vmgmt_err(fwdev->vdev, "Invalid device firmware size"); + return FW_UPLOAD_ERR_INVALID_SIZE; + } + + ret = rm_queue_create_cmd(fwdev->vdev->rdev, RM_QUEUE_OP_LOAD_FW, + &fwdev->cmd); + if (ret) + return FW_UPLOAD_ERR_RW_ERROR; + + uuid_copy(&fwdev->uuid, &xsabin->header.uuid); + return FW_UPLOAD_ERR_NONE; +} + +static enum fw_upload_err vmgmt_fw_write(struct fw_upload *fw_upload, + const u8 *data, u32 offset, u32 size, + u32 *written) +{ + struct firmware_device *fwdev = fw_upload->dd_handle; + int ret; + + ret = rm_queue_data_init(fwdev->cmd, data, size); + if (ret) + return FW_UPLOAD_ERR_RW_ERROR; + + *written = size; + return FW_UPLOAD_ERR_NONE; +} + +static enum fw_upload_err vmgmt_fw_poll_complete(struct fw_upload *fw_upload) +{ + struct firmware_device *fwdev = fw_upload->dd_handle; + int ret; + + vmgmt_info(fwdev->vdev, "Programming device firmware: %pUb", &fwdev->uuid); + + ret = rm_queue_send_cmd(fwdev->cmd, RM_CMD_WAIT_DOWNLOAD_TIMEOUT); + if (ret) { + vmgmt_err(fwdev->vdev, "Send cmd failed: %d, cid %d", ret, fwdev->id); + return FW_UPLOAD_ERR_HW_ERROR; + } + + vmgmt_info(fwdev->vdev, "Successfully programmed device firmware: %pUb", + &fwdev->uuid); + return FW_UPLOAD_ERR_NONE; +} + static void vmgmt_fw_cancel(struct fw_upload *fw_upload) { struct firmware_device *fwdev = fw_upload->dd_handle; @@ -208,8 +482,26 @@ static void vmgmt_fw_cancel(struct fw_upload *fw_upload) vmgmt_warn(fwdev->vdev, "canceled"); } +static void vmgmt_fw_cleanup(struct fw_upload *fw_upload) +{ + struct firmware_device *fwdev = fw_upload->dd_handle; + + if (!fwdev->cmd) + return; + + rm_queue_data_fini(fwdev->cmd); + rm_queue_destory_cmd(fwdev->cmd); + + fwdev->cmd = NULL; + fwdev->id = 0; +} + static 
const struct fw_upload_ops vmgmt_fw_ops = { + .prepare = vmgmt_fw_prepare, + .write = vmgmt_fw_write, + .poll_complete = vmgmt_fw_poll_complete, .cancel = vmgmt_fw_cancel, + .cleanup = vmgmt_fw_cleanup, }; static void vmgmt_fw_upload_fini(struct firmware_device *fwdev) @@ -250,17 +542,25 @@ static void vmgmt_device_teardown(struct vmgmt_device *vdev) vmgmt_fpga_fini(vdev->fdev); vmgmt_fw_upload_fini(vdev->fwdev); vmgmtm_comms_fini(vdev->ccdev); + vmgmt_rm_fini(vdev->rdev); } static int vmgmt_device_setup(struct vmgmt_device *vdev) { int ret; + vdev->rdev = vmgmt_rm_init(vdev); + if (IS_ERR(vdev->rdev)) { + ret = PTR_ERR(vdev->rdev); + vmgmt_err(vdev, "Failed to init runtime manager, err %d", ret); + return ret; + } + vdev->fwdev = vmgmt_fw_upload_init(vdev); if (IS_ERR(vdev->fwdev)) { ret = PTR_ERR(vdev->fwdev); vmgmt_err(vdev, "Failed to init FW uploader, err %d", ret); - goto done; + goto rm_fini; } vdev->ccdev = vmgmtm_comms_init(vdev); @@ -282,7 +582,8 @@ static int vmgmt_device_setup(struct vmgmt_device *vdev) vmgmtm_comms_fini(vdev->ccdev); upload_fini: vmgmt_fw_upload_fini(vdev->fwdev); -done: +rm_fini: + vmgmt_rm_fini(vdev->rdev); return ret; } diff --git a/drivers/fpga/amd/vmgmt.h b/drivers/fpga/amd/vmgmt.h index 4dc8a43f825e..c767d1372881 100644 --- a/drivers/fpga/amd/vmgmt.h +++ b/drivers/fpga/amd/vmgmt.h @@ -19,6 +19,7 @@ #include #include #include +#include #define AMD_VMGMT_BAR 0 #define AMD_VMGMT_BAR_MASK BIT(0) @@ -93,8 +94,10 @@ struct vmgmt_device { void __iomem *tbl; uuid_t xclbin_uuid; uuid_t intf_uuid; - - void *debugfs_root; }; +struct rm_device *vmgmt_rm_init(struct vmgmt_device *vdev); +void vmgmt_rm_fini(struct rm_device *rdev); +int vmgmt_rm_get_fw_id(struct rm_device *rdev, uuid_t *uuid); + #endif /* __VMGMT_H */ From patchwork Mon Oct 7 22:01:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Zhang, Yidong (David)" X-Patchwork-Id: 13825318
From: David Zhang To: , , , , CC: Yidong Zhang , , Nishad Saraf , Prapul Krishnamurthy Subject: [PATCH V1 3/3] drivers/fpga/amd: Add remote queue service APIs Date: Mon, 7 Oct 2024 15:01:28 -0700 Message-ID: <20241007220128.3023169-3-yidong.zhang@amd.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20241007220128.3023169-1-yidong.zhang@amd.com> References: <20241007220128.3023169-1-yidong.zhang@amd.com> Precedence: bulk X-Mailing-List: linux-fpga@vger.kernel.org MIME-Version: 1.0
From: Yidong Zhang Add remote queue services including init, fini, and send command. Co-developed-by: Nishad Saraf Signed-off-by: Nishad Saraf Co-developed-by: Prapul Krishnamurthy Signed-off-by: Prapul Krishnamurthy Signed-off-by: Yidong Zhang --- drivers/fpga/amd/vmgmt-rm-queue.c | 342 +++++++++++++++++++++++++++++- 1 file changed, 341 insertions(+), 1 deletion(-) diff --git a/drivers/fpga/amd/vmgmt-rm-queue.c b/drivers/fpga/amd/vmgmt-rm-queue.c index fe805373ea32..f68439833d51 100644 --- a/drivers/fpga/amd/vmgmt-rm-queue.c +++ b/drivers/fpga/amd/vmgmt-rm-queue.c @@ -23,16 +23,356 @@ #include "vmgmt-rm.h" #include "vmgmt-rm-queue.h" +static inline struct rm_device *to_rdev_msg_monitor(struct work_struct *w) +{ + return container_of(w, struct rm_device, msg_monitor); +} + +static inline struct rm_device *to_rdev_msg_timer(struct timer_list *t) +{ + return container_of(t, struct rm_device, msg_timer); +} + +static inline int rm_queue_write(struct rm_device *rdev, u32 offset, u32 value) +{ + return regmap_write(rdev->shmem_regmap, rdev->queue_base + offset, value); +} + +static inline int rm_queue_read(struct rm_device *rdev, u32 
offset, u32 *value) +{ + return regmap_read(rdev->shmem_regmap, rdev->queue_base + offset, value); +} + +static inline int rm_queue_bulk_read(struct rm_device *rdev, u32 offset, + u32 *value, u32 size) +{ + if (size & 0x3) { + vmgmt_err(rdev->vdev, "size %u is not 4-byte aligned", size); + return -EINVAL; + } + + return regmap_bulk_read(rdev->shmem_regmap, rdev->queue_base + offset, + value, DIV_ROUND_UP(size, 4)); +} + +static inline int rm_queue_bulk_write(struct rm_device *rdev, u32 offset, + u32 *value, u32 size) +{ + if (size & 0x3) { + vmgmt_err(rdev->vdev, "size %u is not 4-byte aligned", size); + return -EINVAL; + } + + return regmap_bulk_write(rdev->shmem_regmap, rdev->queue_base + offset, + value, DIV_ROUND_UP(size, 4)); +} + +static inline int rm_queue_get_cidx(struct rm_device *rdev, + enum rm_queue_type type, u32 *value) +{ + u32 off; + + if (type == RM_QUEUE_SQ) + off = offsetof(struct rm_queue_header, sq_cidx); + else + off = offsetof(struct rm_queue_header, cq_cidx); + + return rm_queue_read(rdev, off, value); +} + +static inline int rm_queue_set_cidx(struct rm_device *rdev, + enum rm_queue_type type, u32 value) +{ + u32 off; + + if (type == RM_QUEUE_SQ) + off = offsetof(struct rm_queue_header, sq_cidx); + else + off = offsetof(struct rm_queue_header, cq_cidx); + + return rm_queue_write(rdev, off, value); +} + +static inline int rm_queue_get_pidx(struct rm_device *rdev, + enum rm_queue_type type, u32 *value) +{ + if (type == RM_QUEUE_SQ) + return regmap_read(rdev->io_regmap, RM_IO_SQ_PIDX_OFF, value); + else + return regmap_read(rdev->io_regmap, RM_IO_CQ_PIDX_OFF, value); +} + +static inline int rm_queue_set_pidx(struct rm_device *rdev, + enum rm_queue_type type, u32 value) +{ + if (type == RM_QUEUE_SQ) + return regmap_write(rdev->io_regmap, RM_IO_SQ_PIDX_OFF, value); + else + return regmap_write(rdev->io_regmap, RM_IO_CQ_PIDX_OFF, value); +} + +static inline u32 rm_queue_get_sq_slot_offset(struct rm_device *rdev) +{ + u32 index; + + if 
((rdev->sq.pidx - rdev->sq.cidx) >= rdev->queue_size) + return RM_INVALID_SLOT; + + index = rdev->sq.pidx & (rdev->queue_size - 1); + return rdev->sq.offset + RM_CMD_SQ_SLOT_SIZE * index; +} + +static inline u32 rm_queue_get_cq_slot_offset(struct rm_device *rdev) +{ + u32 index; + + index = rdev->cq.cidx & (rdev->queue_size - 1); + return rdev->cq.offset + RM_CMD_CQ_SLOT_SIZE * index; +} + +static int rm_queue_submit_cmd(struct rm_cmd *cmd) +{ + struct vmgmt_device *vdev = cmd->rdev->vdev; + struct rm_device *rdev = cmd->rdev; + u32 offset; + int ret; + + mutex_lock(&rdev->queue); + + offset = rm_queue_get_sq_slot_offset(rdev); + if (offset == RM_INVALID_SLOT) { + vmgmt_err(vdev, "No SQ slot available"); + ret = -ENOSPC; + goto exit; + } + + ret = rm_queue_bulk_write(rdev, offset, (u32 *)&cmd->sq_msg, + sizeof(cmd->sq_msg)); + if (ret) { + vmgmt_err(vdev, "Failed to write msg to ring, ret %d", ret); + goto exit; + } + + ret = rm_queue_set_pidx(rdev, RM_QUEUE_SQ, ++rdev->sq.pidx); + if (ret) { + vmgmt_err(vdev, "Failed to update PIDX, ret %d", ret); + goto exit; + } + + list_add_tail(&cmd->list, &rdev->submitted_cmds); +exit: + mutex_unlock(&rdev->queue); + return ret; +} + +static void rm_queue_withdraw_cmd(struct rm_cmd *cmd) +{ + mutex_lock(&cmd->rdev->queue); + list_del(&cmd->list); + mutex_unlock(&cmd->rdev->queue); +} + +static int rm_queue_wait_cmd_timeout(struct rm_cmd *cmd, unsigned long timeout) +{ + struct vmgmt_device *vdev = cmd->rdev->vdev; + int ret; + + if (wait_for_completion_timeout(&cmd->executed, timeout)) { + ret = cmd->cq_msg.data.rcode; + if (!ret) + return 0; + + vmgmt_err(vdev, "CMD returned with a failure: %d", ret); + return ret; + } + + /* + * Each cmd is expected to be completed and cleaned up before it times + * out. If we reach here, the cmd must be withdrawn and a hot reset + * should be issued. 
+ */ + vmgmt_err(vdev, "cmd timed out, please reset the card"); + rm_queue_withdraw_cmd(cmd); + return -ETIME; +} + int rm_queue_send_cmd(struct rm_cmd *cmd, unsigned long timeout) { - return 0; + int ret; + + ret = rm_queue_submit_cmd(cmd); + if (ret) + return ret; + + return rm_queue_wait_cmd_timeout(cmd, timeout); +} + +static int rm_process_msg(struct rm_device *rdev) +{ + struct rm_cmd *cmd, *next; + struct vmgmt_device *vdev = rdev->vdev; + struct rm_cmd_cq_hdr header; + u32 offset; + int ret; + + offset = rm_queue_get_cq_slot_offset(rdev); + if (!offset) { + vmgmt_err(vdev, "Invalid CQ offset"); + return -EINVAL; + } + + ret = rm_queue_bulk_read(rdev, offset, (u32 *)&header, sizeof(header)); + if (ret) { + vmgmt_err(vdev, "Failed to read queue msg, %d", ret); + return ret; + } + + list_for_each_entry_safe(cmd, next, &rdev->submitted_cmds, list) { + u32 value = 0; + + if (cmd->sq_msg.hdr.id != header.id) + continue; + + ret = rm_queue_bulk_read(rdev, offset + sizeof(cmd->cq_msg.hdr), + (u32 *)&cmd->cq_msg.data, + sizeof(cmd->cq_msg.data)); + if (ret) + vmgmt_warn(vdev, "Failed to read queue msg, %d", ret); + + ret = rm_queue_write(rdev, offset, value); + if (ret) + vmgmt_warn(vdev, "Failed to write queue msg, %d", ret); + + list_del(&cmd->list); + complete(&cmd->executed); + return 0; + } + + vmgmt_err(vdev, "Unknown cmd ID %d found in CQ", header.id); + return -EFAULT; +} + +static void rm_check_msg(struct work_struct *w) +{ + struct rm_device *rdev = to_rdev_msg_monitor(w); + int ret; + + mutex_lock(&rdev->queue); + + ret = rm_queue_get_cidx(rdev, RM_QUEUE_SQ, &rdev->sq.cidx); + if (ret) + goto error; + + ret = rm_queue_get_pidx(rdev, RM_QUEUE_CQ, &rdev->cq.pidx); + if (ret) + goto error; + + while (rdev->cq.cidx < rdev->cq.pidx) { + ret = rm_process_msg(rdev); + if (ret) + break; + + rdev->cq.cidx++; + + ret = rm_queue_set_cidx(rdev, RM_QUEUE_CQ, rdev->cq.cidx); + if (ret) + break; + } + +error: + mutex_unlock(&rdev->queue); +} + +static void 
rm_sched_work(struct timer_list *t) +{ + struct rm_device *rdev = to_rdev_msg_timer(t); + + /* Schedule work on the system workqueue */ + schedule_work(&rdev->msg_monitor); + /* Periodic timer */ + mod_timer(&rdev->msg_timer, jiffies + RM_COMPLETION_TIMER); } void rm_queue_fini(struct rm_device *rdev) { + del_timer_sync(&rdev->msg_timer); + cancel_work_sync(&rdev->msg_monitor); + mutex_destroy(&rdev->queue); } int rm_queue_init(struct rm_device *rdev) { + struct vmgmt_device *vdev = rdev->vdev; + struct rm_queue_header header = {0}; + int ret; + + INIT_LIST_HEAD(&rdev->submitted_cmds); + mutex_init(&rdev->queue); + + ret = rm_queue_bulk_read(rdev, RM_HDR_OFF, (u32 *)&header, + sizeof(header)); + if (ret) { + vmgmt_err(vdev, "Failed to read RM shared mem, ret %d", ret); + goto error; + } + + if (header.magic != RM_QUEUE_HDR_MAGIC_NUM) { + vmgmt_err(vdev, "Invalid RM queue header magic"); + ret = -ENODEV; + goto error; + } + + if (!header.version) { + vmgmt_err(vdev, "Invalid RM queue header version"); + ret = -ENODEV; + goto error; + } + + sema_init(&rdev->sq.data_lock, 1); + sema_init(&rdev->cq.data_lock, 1); + rdev->queue_size = header.size; + rdev->sq.offset = header.sq_off; + rdev->cq.offset = header.cq_off; + rdev->sq.type = RM_QUEUE_SQ; + rdev->cq.type = RM_QUEUE_CQ; + rdev->sq.data_size = rdev->queue_buffer_size - RM_CMD_CQ_BUFFER_SIZE; + rdev->cq.data_size = RM_CMD_CQ_BUFFER_SIZE; + rdev->sq.data_offset = rdev->queue_buffer_start + + RM_CMD_CQ_BUFFER_OFFSET + RM_CMD_CQ_BUFFER_SIZE; + rdev->cq.data_offset = rdev->queue_buffer_start + + RM_CMD_CQ_BUFFER_OFFSET; + rdev->sq.cidx = header.sq_cidx; + rdev->cq.cidx = header.cq_cidx; + + ret = rm_queue_get_pidx(rdev, RM_QUEUE_SQ, &rdev->sq.pidx); + if (ret) { + vmgmt_err(vdev, "Failed to read sq.pidx, ret %d", ret); + goto error; + } + + ret = rm_queue_get_pidx(rdev, RM_QUEUE_CQ, &rdev->cq.pidx); + if (ret) { + vmgmt_err(vdev, "Failed to read cq.pidx, ret %d", ret); + goto error; + } + + if (rdev->cq.cidx != rdev->cq.pidx) { + 
vmgmt_warn(vdev, "Clearing stale completions"); + rdev->cq.cidx = rdev->cq.pidx; + ret = rm_queue_set_cidx(rdev, RM_QUEUE_CQ, rdev->cq.cidx); + if (ret) { + vmgmt_err(vdev, "Failed to cleanup CQ, ret %d", ret); + goto error; + } + } + + /* Create and schedule timer to do recurring work */ + INIT_WORK(&rdev->msg_monitor, &rm_check_msg); + timer_setup(&rdev->msg_timer, &rm_sched_work, 0); + mod_timer(&rdev->msg_timer, jiffies + RM_COMPLETION_TIMER); + return 0; +error: + mutex_destroy(&rdev->queue); + return ret; }