From patchwork Thu Mar 3 13:58:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 897B8C433F5 for ; Thu, 3 Mar 2022 13:59:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233863AbiCCOAP (ORCPT ); Thu, 3 Mar 2022 09:00:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49594 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231603AbiCCOAO (ORCPT ); Thu, 3 Mar 2022 09:00:14 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3CE1EB3E54; Thu, 3 Mar 2022 05:59:28 -0800 (PST) Received: from fraeml743-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xd23pSqz67k5K; Thu, 3 Mar 2022 21:58:14 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml743-chm.china.huawei.com (10.206.15.224) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:59:26 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 13:59:25 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 01/14] PCI: Add vendor ID for the PCI SIG Date: Thu, 3 Mar 2022 13:58:52 +0000 Message-ID: <20220303135905.10420-2-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org This ID is used in DOE headers to identify protocols that are defined within the PCI Express Base Specification. Specified in Table 7-x2 of the Data Object Exchange ECN (approved 12 March 2020) available from https://members.pcisig.com/wg/PCI-SIG/document/14143 Acked-by: Bjorn Helgaas Reviewed-by: Dan Williams Signed-off-by: Jonathan Cameron --- include/linux/pci_ids.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index aad54c666407..bb5eaf8f973d 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -149,6 +149,7 @@ #define PCI_CLASS_OTHERS 0xff /* Vendors and devices. Sort key: vendor first, device next. 
*/ +#define PCI_VENDOR_ID_PCI_SIG 0x0001 #define PCI_VENDOR_ID_LOONGSON 0x0014 From patchwork Thu Mar 3 13:58:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9FB97C433F5 for ; Thu, 3 Mar 2022 14:00:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233141AbiCCOAr (ORCPT ); Thu, 3 Mar 2022 09:00:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51470 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232118AbiCCOAn (ORCPT ); Thu, 3 Mar 2022 09:00:43 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B14DAC6801; Thu, 3 Mar 2022 05:59:58 -0800 (PST) Received: from fraeml739-chm.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xdd0ZsHz67TNp; Thu, 3 Mar 2022 21:58:45 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml739-chm.china.huawei.com (10.206.15.220) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:59:56 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 13:59:56 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 02/14] PCI: Replace magic constant for PCI Sig Vendor ID Date: Thu, 3 Mar 2022 13:58:53 +0000 Message-ID: <20220303135905.10420-3-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org From: Ira Weiny Based on Bjorn's suggestion[1], now that the PCI Sig Vendor ID is defined the define should be used in pci_bus_crs_vendor_id() rather than the hard coded magic value. Replace the magic value in pci_bus_crs_vendor_id() with PCI_VENDOR_ID_PCI_SIG. 
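For background, a brief sketch (not part of this patch) of why the Vendor ID
field is compared against the PCI SIG ID: when Configuration Request Retry
Status (CRS) Software Visibility is enabled, a config read of the Vendor ID
that receives a CRS completion is returned to software as the value 0x0001,
which is reserved to the PCI SIG. The helper name below is hypothetical and
the flow is simplified for illustration.

	static bool example_read_is_crs(struct pci_bus *bus, int devfn)
	{
		u32 l;

		/* Read the Vendor ID / Device ID dword of the device under probe */
		pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &l);

		/*
		 * With CRS Software Visibility enabled the Root Port completes
		 * the read with 0x0001 in the Vendor ID field - a value reserved
		 * to the PCI SIG - meaning the device is not ready yet.
		 */
		return (l & 0xffff) == PCI_VENDOR_ID_PCI_SIG;
	}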
[1] https://lore.kernel.org/linux-cxl/20211117215044.GA1777828@bhelgaas/ Suggested-by: Bjorn Helgaas Signed-off-by: Ira Weiny Reviewed-by: Dan Williams Acked-by: Bjorn Helgaas Signed-off-by: Jonathan Cameron --- drivers/pci/probe.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c index 17a969942d37..6280e780a48c 100644 --- a/drivers/pci/probe.c +++ b/drivers/pci/probe.c @@ -2312,7 +2312,7 @@ EXPORT_SYMBOL(pci_alloc_dev); static bool pci_bus_crs_vendor_id(u32 l) { - return (l & 0xffff) == 0x0001; + return (l & 0xffff) == PCI_VENDOR_ID_PCI_SIG; } static bool pci_bus_wait_crs(struct pci_bus *bus, int devfn, u32 *l, From patchwork Thu Mar 3 13:58:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767510 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D72E4C433EF for ; Thu, 3 Mar 2022 14:00:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231766AbiCCOBS (ORCPT ); Thu, 3 Mar 2022 09:01:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53634 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229881AbiCCOBR (ORCPT ); Thu, 3 Mar 2022 09:01:17 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8045C64BC9; Thu, 3 Mar 2022 06:00:29 -0800 (PST) Received: from fraeml736-chm.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xf90VhJz67vnQ; Thu, 3 Mar 2022 21:59:13 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml736-chm.china.huawei.com (10.206.15.217) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:00:27 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:00:26 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 03/14] PCI/DOE: Add Data Object Exchange Aux Driver Date: Thu, 3 Mar 2022 13:58:54 +0000 Message-ID: <20220303135905.10420-4-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org Introduced in a PCI ECN [1], DOE provides a config space based mailbox with standard protocol discovery. Each mailbox is accessed through a DOE Extended Capability. Define an auxiliary device driver which control DOE auxiliary devices registered on the auxiliary bus. A DOE mailbox is allowed to support any number of protocols while some DOE protocol specifications apply additional restrictions. The protocols supported are queried and cached. 
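As a usage sketch only (not part of this patch), a client driver holding a
struct pci_doe_dev could drive such a mailbox roughly as follows, using the
two interfaces described below. EXAMPLE_VID, EXAMPLE_TYPE and the function
name are placeholders for a real protocol; buffer sizes are illustrative.

	#define EXAMPLE_VID	0xffff	/* placeholder protocol Vendor ID */
	#define EXAMPLE_TYPE	0x00	/* placeholder protocol type */

	static int example_doe_query(struct pci_doe_dev *doe_dev)
	{
		u32 request[2] = {};	/* request payload, whole DWORDs */
		u32 response[32];	/* buffer for the response payload */
		struct pci_doe_exchange ex = {
			.prot.vid = EXAMPLE_VID,
			.prot.type = EXAMPLE_TYPE,
			.request_pl = request,
			.request_pl_sz = sizeof(request),
			.response_pl = response,
			.response_pl_sz = sizeof(response),
		};
		int rc;

		/* Skip mailboxes that do not advertise this protocol */
		if (!pci_doe_supports_prot(doe_dev, EXAMPLE_VID, EXAMPLE_TYPE))
			return -EOPNOTSUPP;

		/* Blocks until a response is received or the exchange fails */
		rc = pci_doe_exchange_sync(doe_dev, &ex);
		if (rc < 0)
			return rc;

		/* rc is the length in bytes of the received response payload */
		return 0;
	}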
pci_doe_supports_prot() can be used to determine if the DOE device supports the protocol specified. A synchronous interface is provided in pci_doe_exchange_sync() to perform a single query / response exchange from the driver through the device specified. Testing was conducted against QEMU using: https://lore.kernel.org/qemu-devel/1619454964-10190-1-git-send-email-cbrowy@avery-design.com/ This code is based on Jonathan's V4 series here: https://lore.kernel.org/linux-cxl/20210524133938.2815206-1-Jonathan.Cameron@huawei.com/ [1] https://members.pcisig.com/wg/PCI-SIG/document/14143 Data Object Exchange (DOE) - Approved 12 March 2020 Co-developed-by: Ira Weiny Signed-off-by: Ira Weiny Signed-off-by: Jonathan Cameron --- drivers/pci/Kconfig | 10 + drivers/pci/Makefile | 3 + drivers/pci/doe.c | 675 ++++++++++++++++++++++++++++++++++ include/linux/pci-doe.h | 60 +++ include/uapi/linux/pci_regs.h | 29 +- 5 files changed, 776 insertions(+), 1 deletion(-) diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index d98fafdd0f99..8b9b3341b553 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig @@ -118,6 +118,16 @@ config XEN_PCIDEV_FRONTEND The PCI device frontend driver allows the kernel to import arbitrary PCI devices from a PCI backend to support PCI driver domains. +config PCI_DOE_DRIVER + tristate "PCI Data Object Exchange (DOE) driver" + select AUXILIARY_BUS + help + Driver for DOE auxiliary devices. + + DOE provides a simple mailbox in PCI config space that is used by a + number of different protocols. DOE is defined in the Data Object + Exchange ECN to the PCIe r5.0 spec. + config PCI_ATS bool diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile index 37be95adf169..0a15f36e97f4 100644 --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile @@ -29,8 +29,11 @@ obj-$(CONFIG_PCI_STUB) += pci-stub.o obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o obj-$(CONFIG_PCI_ECAM) += ecam.o obj-$(CONFIG_PCI_P2PDMA) += p2pdma.o +obj-$(CONFIG_PCI_DOE_DRIVER) += pci-doe.o obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o +pci-doe-y := doe.o + # Endpoint library must be initialized before its users obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c new file mode 100644 index 000000000000..4ff54bade8ec --- /dev/null +++ b/drivers/pci/doe.c @@ -0,0 +1,675 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Data Object Exchange ECN + * https://members.pcisig.com/wg/PCI-SIG/document/14143 + * + * Copyright (C) 2021 Huawei + * Jonathan Cameron + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define PCI_DOE_PROTOCOL_DISCOVERY 0 + +#define PCI_DOE_BUSY_MAX_RETRIES 16 +#define PCI_DOE_POLL_INTERVAL (HZ / 128) + +/* Timeout of 1 second from 6.xx.1 (Operation), ECN - Data Object Exchange */ +#define PCI_DOE_TIMEOUT HZ + +enum pci_doe_state { + DOE_IDLE, + DOE_WAIT_RESP, + DOE_WAIT_ABORT, + DOE_WAIT_ABORT_ON_ERR, +}; + +/** + * struct pci_doe_task - description of a query / response task + * @ex: The details of the task to be done + * @rv: Return value. 
Length of received response or error + * @cb: Callback for completion of task + * @private: Private data passed to callback on completion + */ +struct pci_doe_task { + struct pci_doe_exchange *ex; + int rv; + void (*cb)(void *private); + void *private; +}; + +/** + * struct pci_doe - A single DOE mailbox driver + * + * @doe_dev: The DOE Auxiliary device being driven + * @abort_c: Completion used for initial abort handling + * @irq: Interrupt used for signaling DOE ready or abort + * @irq_name: Name used to identify the irq for a particular DOE + * @prots: Array of identifiers for protocols supported + * @num_prots: Size of prots array + * @cur_task: Current task the state machine is working on + * @wq: Wait queue to wait on if a query is in progress + * @state_lock: Protect the state of cur_task, abort, and dead + * @statemachine: Work item for the DOE state machine + * @state: Current state of this DOE + * @timeout_jiffies: 1 second after GO set + * @busy_retries: Count of retry attempts + * @abort: Request a manual abort (e.g. on init) + * @dead: Used to mark a DOE for which an ABORT has timed out. Further messages + * will immediately be aborted with error + */ +struct pci_doe { + struct pci_doe_dev *doe_dev; + struct completion abort_c; + int irq; + char *irq_name; + struct pci_doe_protocol *prots; + int num_prots; + + struct pci_doe_task *cur_task; + wait_queue_head_t wq; + struct mutex state_lock; + struct delayed_work statemachine; + enum pci_doe_state state; + unsigned long timeout_jiffies; + unsigned int busy_retries; + unsigned int abort:1; + unsigned int dead:1; +}; + +static irqreturn_t pci_doe_irq(int irq, void *data) +{ + struct pci_doe *doe = data; + struct pci_dev *pdev = doe->doe_dev->pdev; + int offset = doe->doe_dev->cap_offset; + u32 val; + + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); + + /* Leave the error case to be handled outside IRQ */ + if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) { + mod_delayed_work(system_wq, &doe->statemachine, 0); + return IRQ_HANDLED; + } + + if (FIELD_GET(PCI_DOE_STATUS_INT_STATUS, val)) { + pci_write_config_dword(pdev, offset + PCI_DOE_STATUS, + PCI_DOE_STATUS_INT_STATUS); + mod_delayed_work(system_wq, &doe->statemachine, 0); + return IRQ_HANDLED; + } + + return IRQ_NONE; +} + +/* + * Only call when safe to directly access the DOE, either because no tasks yet + * queued, or called from doe_statemachine_work() which has exclusive access to + * the DOE config space. + */ +static void pci_doe_abort_start(struct pci_doe *doe) +{ + struct pci_dev *pdev = doe->doe_dev->pdev; + int offset = doe->doe_dev->cap_offset; + u32 val; + + val = PCI_DOE_CTRL_ABORT; + if (doe->irq) + val |= PCI_DOE_CTRL_INT_EN; + pci_write_config_dword(pdev, offset + PCI_DOE_CTRL, val); + + doe->timeout_jiffies = jiffies + HZ; + schedule_delayed_work(&doe->statemachine, HZ); +} + +static int pci_doe_send_req(struct pci_doe *doe, struct pci_doe_exchange *ex) +{ + struct pci_dev *pdev = doe->doe_dev->pdev; + int offset = doe->doe_dev->cap_offset; + u32 val; + int i; + + /* + * Check the DOE busy bit is not set. If it is set, this could indicate + * someone other than Linux (e.g. firmware) is using the mailbox. Note + * it is expected that firmware and OS will negotiate access rights via + * an, as yet to be defined method. 
+ */ + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); + if (FIELD_GET(PCI_DOE_STATUS_BUSY, val)) + return -EBUSY; + + if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) + return -EIO; + + /* Write DOE Header */ + val = FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_VID, ex->prot.vid) | + FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, ex->prot.type); + pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, val); + /* Length is 2 DW of header + length of payload in DW */ + pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, + FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, + 2 + ex->request_pl_sz / + sizeof(u32))); + for (i = 0; i < ex->request_pl_sz / sizeof(u32); i++) + pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, + ex->request_pl[i]); + + val = PCI_DOE_CTRL_GO; + if (doe->irq) + val |= PCI_DOE_CTRL_INT_EN; + + pci_write_config_dword(pdev, offset + PCI_DOE_CTRL, val); + /* Request is sent - now wait for poll or IRQ */ + return 0; +} + +static int pci_doe_recv_resp(struct pci_doe *doe, struct pci_doe_exchange *ex) +{ + struct pci_dev *pdev = doe->doe_dev->pdev; + int offset = doe->doe_dev->cap_offset; + size_t length; + u32 val; + int i; + + /* Read the first dword to get the protocol */ + pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val); + if ((FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val) != ex->prot.vid) || + (FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val) != ex->prot.type)) { + pci_err(pdev, + "Expected [VID, Protocol] = [%#x, %#x], got [%#x, %#x]\n", + ex->prot.vid, ex->prot.type, + FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val), + FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val)); + return -EIO; + } + + pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0); + /* Read the second dword to get the length */ + pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val); + pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0); + + length = FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, val); + if (length > SZ_1M || length < 2) + return -EIO; + + /* First 2 dwords have already been read */ + length -= 2; + /* Read the rest of the response payload */ + for (i = 0; i < min(length, ex->response_pl_sz / sizeof(u32)); i++) { + pci_read_config_dword(pdev, offset + PCI_DOE_READ, + &ex->response_pl[i]); + pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0); + } + + /* Flush excess length */ + for (; i < length; i++) { + pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val); + pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0); + } + /* Final error check to pick up on any since Data Object Ready */ + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); + if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) + return -EIO; + + return min(length, ex->response_pl_sz / sizeof(u32)) * sizeof(u32); +} + +static void doe_statemachine_work(struct work_struct *work) +{ + struct delayed_work *w = to_delayed_work(work); + struct pci_doe *doe = container_of(w, struct pci_doe, statemachine); + struct pci_dev *pdev = doe->doe_dev->pdev; + int offset = doe->doe_dev->cap_offset; + struct pci_doe_task *task; + bool abort; + u32 val; + int rc; + + mutex_lock(&doe->state_lock); + task = doe->cur_task; + abort = doe->abort; + doe->abort = false; + mutex_unlock(&doe->state_lock); + + if (abort) { + /* + * Currently only used during init - care needed if + * pci_doe_abort() is generally exposed as it would impact + * queries in flight. 
+ */ + WARN_ON(task); + doe->state = DOE_WAIT_ABORT; + pci_doe_abort_start(doe); + return; + } + + switch (doe->state) { + case DOE_IDLE: + if (task == NULL) + return; + + /* Nothing currently in flight so queue a task */ + rc = pci_doe_send_req(doe, task->ex); + /* + * The specification does not provide any guidance on how long + * some other entity could keep the DOE busy, so try for 1 + * second then fail. Busy handling is best effort only, because + * there is no way of avoiding racing against another user of + * the DOE. + */ + if (rc == -EBUSY) { + doe->busy_retries++; + if (doe->busy_retries == PCI_DOE_BUSY_MAX_RETRIES) { + /* Long enough, fail this request */ + pci_warn(pdev, + "DOE busy for too long (> 1 sec)\n"); + doe->busy_retries = 0; + goto err_busy; + } + schedule_delayed_work(w, HZ / PCI_DOE_BUSY_MAX_RETRIES); + return; + } + if (rc) + goto err_abort; + doe->busy_retries = 0; + + doe->state = DOE_WAIT_RESP; + doe->timeout_jiffies = jiffies + HZ; + /* Now poll or wait for IRQ with timeout */ + if (doe->irq > 0) + schedule_delayed_work(w, PCI_DOE_TIMEOUT); + else + schedule_delayed_work(w, PCI_DOE_POLL_INTERVAL); + return; + + case DOE_WAIT_RESP: + /* Not possible to get here with NULL task */ + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); + if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) { + rc = -EIO; + goto err_abort; + } + + if (!FIELD_GET(PCI_DOE_STATUS_DATA_OBJECT_READY, val)) { + /* If not yet at timeout reschedule otherwise abort */ + if (time_after(jiffies, doe->timeout_jiffies)) { + rc = -ETIMEDOUT; + goto err_abort; + } + schedule_delayed_work(w, PCI_DOE_POLL_INTERVAL); + return; + } + + rc = pci_doe_recv_resp(doe, task->ex); + if (rc < 0) + goto err_abort; + + doe->state = DOE_IDLE; + + mutex_lock(&doe->state_lock); + doe->cur_task = NULL; + mutex_unlock(&doe->state_lock); + wake_up_interruptible(&doe->wq); + + /* Set the return value to the length of received payload */ + task->rv = rc; + task->cb(task->private); + + return; + + case DOE_WAIT_ABORT: + case DOE_WAIT_ABORT_ON_ERR: + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); + + if (!FIELD_GET(PCI_DOE_STATUS_ERROR, val) && + !FIELD_GET(PCI_DOE_STATUS_BUSY, val)) { + /* Back to normal state - carry on */ + mutex_lock(&doe->state_lock); + doe->cur_task = NULL; + mutex_unlock(&doe->state_lock); + wake_up_interruptible(&doe->wq); + + /* + * For deliberately triggered abort, someone is + * waiting. + */ + if (doe->state == DOE_WAIT_ABORT) + complete(&doe->abort_c); + + doe->state = DOE_IDLE; + return; + } + if (time_after(jiffies, doe->timeout_jiffies)) { + /* Task has timed out and is dead - abort */ + pci_err(pdev, "DOE ABORT timed out\n"); + mutex_lock(&doe->state_lock); + doe->dead = true; + doe->cur_task = NULL; + mutex_unlock(&doe->state_lock); + wake_up_interruptible(&doe->wq); + + if (doe->state == DOE_WAIT_ABORT) + complete(&doe->abort_c); + } + return; + } + +err_abort: + doe->state = DOE_WAIT_ABORT_ON_ERR; + pci_doe_abort_start(doe); +err_busy: + task->rv = rc; + task->cb(task->private); + /* If here via err_busy, signal the task done. 
*/ + if (doe->state == DOE_IDLE) { + mutex_lock(&doe->state_lock); + doe->cur_task = NULL; + mutex_unlock(&doe->state_lock); + wake_up_interruptible(&doe->wq); + } +} + +static void pci_doe_task_complete(void *private) +{ + complete(private); +} + +/** + * pci_doe_exchange_sync() - Send a request, then wait for and receive a + * response + * @doe_dev: DOE mailbox state structure + * @ex: Description of the buffers and Vendor ID + type used in this + * request/response pair + * + * Excess data will be discarded. + * + * RETURNS: payload in bytes on success, < 0 on error + */ +int pci_doe_exchange_sync(struct pci_doe_dev *doe_dev, + struct pci_doe_exchange *ex) +{ + struct pci_doe *doe = dev_get_drvdata(&doe_dev->adev.dev); + struct pci_doe_task task; + DECLARE_COMPLETION_ONSTACK(c); + + if (!doe) + return -EAGAIN; + + /* DOE requests must be a whole number of DW */ + if (ex->request_pl_sz % sizeof(u32)) + return -EINVAL; + + task.ex = ex; + task.cb = pci_doe_task_complete; + task.private = &c; + +again: + mutex_lock(&doe->state_lock); + if (doe->cur_task) { + mutex_unlock(&doe->state_lock); + wait_event_interruptible(doe->wq, doe->cur_task == NULL); + goto again; + } + + if (doe->dead) { + mutex_unlock(&doe->state_lock); + return -EIO; + } + doe->cur_task = &task; + schedule_delayed_work(&doe->statemachine, 0); + mutex_unlock(&doe->state_lock); + + wait_for_completion(&c); + + return task.rv; +} +EXPORT_SYMBOL_GPL(pci_doe_exchange_sync); + +/** + * pci_doe_supports_prot() - Return if the DOE instance supports the given + * protocol + * @pdev: Device on which to find the DOE instance + * @vid: Protocol Vendor ID + * @type: protocol type + * + * This device can then be passed to pci_doe_exchange_sync() to execute a + * mailbox exchange through that DOE mailbox. 
+ * + * RETURNS: True if the DOE device supports the protocol specified + */ +bool pci_doe_supports_prot(struct pci_doe_dev *doe_dev, u16 vid, u8 type) +{ + struct pci_doe *doe = dev_get_drvdata(&doe_dev->adev.dev); + int i; + + if (!doe) + return false; + + for (i = 0; i < doe->num_prots; i++) + if ((doe->prots[i].vid == vid) && + (doe->prots[i].type == type)) + return true; + + return false; +} +EXPORT_SYMBOL_GPL(pci_doe_supports_prot); + +static int pci_doe_discovery(struct pci_doe *doe, u8 *index, u16 *vid, + u8 *protocol) +{ + u32 request_pl = FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX, + *index); + u32 response_pl; + struct pci_doe_exchange ex = { + .prot.vid = PCI_VENDOR_ID_PCI_SIG, + .prot.type = PCI_DOE_PROTOCOL_DISCOVERY, + .request_pl = &request_pl, + .request_pl_sz = sizeof(request_pl), + .response_pl = &response_pl, + .response_pl_sz = sizeof(response_pl), + }; + int ret; + + ret = pci_doe_exchange_sync(doe->doe_dev, &ex); + if (ret < 0) + return ret; + + if (ret != sizeof(response_pl)) + return -EIO; + + *vid = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID, response_pl); + *protocol = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL, + response_pl); + *index = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_NEXT_INDEX, + response_pl); + + return 0; +} + +static int pci_doe_cache_protocols(struct pci_doe *doe) +{ + u8 index = 0; + int num_prots; + int rc; + + /* Discovery protocol must always be supported and must report itself */ + num_prots = 1; + doe->prots = devm_kcalloc(&doe->doe_dev->adev.dev, num_prots, + sizeof(*doe->prots), GFP_KERNEL); + if (doe->prots == NULL) + return -ENOMEM; + + do { + struct pci_doe_protocol *prot; + + prot = &doe->prots[num_prots - 1]; + rc = pci_doe_discovery(doe, &index, &prot->vid, &prot->type); + if (rc) + return rc; + + if (index) { + struct pci_doe_protocol *prot_new; + + num_prots++; + prot_new = devm_krealloc(&doe->doe_dev->adev.dev, + doe->prots, + sizeof(*doe->prots) * + num_prots, + GFP_KERNEL); + if (prot_new == NULL) + return -ENOMEM; + doe->prots = prot_new; + } + } while (index); + + doe->num_prots = num_prots; + return 0; +} + +static int pci_doe_abort(struct pci_doe *doe) +{ + reinit_completion(&doe->abort_c); + mutex_lock(&doe->state_lock); + doe->abort = true; + mutex_unlock(&doe->state_lock); + schedule_delayed_work(&doe->statemachine, 0); + wait_for_completion(&doe->abort_c); + + if (doe->dead) + return -EIO; + + return 0; +} + +static int pci_doe_reg_irq(struct pci_doe *doe) +{ + struct pci_dev *pdev = doe->doe_dev->pdev; + bool poll = !pci_dev_msi_enabled(pdev); + int offset = doe->doe_dev->cap_offset; + int rc, irq; + u32 val; + + pci_read_config_dword(pdev, offset + PCI_DOE_CAP, &val); + + if (!poll && FIELD_GET(PCI_DOE_CAP_INT, val)) { + irq = pci_irq_vector(pdev, FIELD_GET(PCI_DOE_CAP_IRQ, val)); + if (irq < 0) + return irq; + + doe->irq_name = devm_kasprintf(&doe->doe_dev->adev.dev, + GFP_KERNEL, + "DOE[%s]", + doe->doe_dev->adev.name); + if (!doe->irq_name) + return -ENOMEM; + + rc = devm_request_irq(&pdev->dev, irq, pci_doe_irq, 0, + doe->irq_name, doe); + if (rc) + return rc; + + doe->irq = irq; + pci_write_config_dword(pdev, offset + PCI_DOE_CTRL, + PCI_DOE_CTRL_INT_EN); + } + + return 0; +} + +/* + * pci_doe_probe() - Set up the Mailbox + * @aux_dev: Auxiliary Device + * @id: Auxiliary device ID + * + * Probe the mailbox found for all protocols and set up the Mailbox + * + * RETURNS: 0 on success, < 0 on error + */ +static int pci_doe_probe(struct auxiliary_device *aux_dev, + const struct auxiliary_device_id *id) 
+{ + struct pci_doe_dev *doe_dev = container_of(aux_dev, + struct pci_doe_dev, + adev); + struct pci_doe *doe; + int rc; + + doe = devm_kzalloc(&aux_dev->dev, sizeof(*doe), GFP_KERNEL); + if (!doe) + return -ENOMEM; + + mutex_init(&doe->state_lock); + init_completion(&doe->abort_c); + doe->doe_dev = doe_dev; + init_waitqueue_head(&doe->wq); + INIT_DELAYED_WORK(&doe->statemachine, doe_statemachine_work); + dev_set_drvdata(&aux_dev->dev, doe); + + rc = pci_doe_reg_irq(doe); + if (rc) + return rc; + + /* Reset the mailbox by issuing an abort */ + rc = pci_doe_abort(doe); + if (rc) + return rc; + + rc = pci_doe_cache_protocols(doe); + if (rc) + return rc; + + return 0; +} + +static void pci_doe_remove(struct auxiliary_device *aux_dev) +{ + struct pci_doe *doe = dev_get_drvdata(&aux_dev->dev); + + /* First halt the state machine */ + cancel_delayed_work_sync(&doe->statemachine); +} + +static const struct auxiliary_device_id pci_doe_auxiliary_id_table[] = { + {}, +}; + +MODULE_DEVICE_TABLE(auxiliary, pci_doe_auxiliary_id_table); + +struct auxiliary_driver pci_doe_auxiliary_drv = { + .name = "pci_doe", + .id_table = pci_doe_auxiliary_id_table, + .probe = pci_doe_probe, + .remove = pci_doe_remove +}; + +static int __init pci_doe_init_module(void) +{ + int ret; + + ret = auxiliary_driver_register(&pci_doe_auxiliary_drv); + if (ret) { + pr_err("Failed pci_doe auxiliary_driver_register() ret=%d\n", + ret); + return ret; + } + + return 0; +} + +static void __exit pci_doe_exit_module(void) +{ + auxiliary_driver_unregister(&pci_doe_auxiliary_drv); +} + +module_init(pci_doe_init_module); +module_exit(pci_doe_exit_module); +MODULE_LICENSE("GPL v2"); diff --git a/include/linux/pci-doe.h b/include/linux/pci-doe.h new file mode 100644 index 000000000000..2f52b31c6f32 --- /dev/null +++ b/include/linux/pci-doe.h @@ -0,0 +1,60 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Data Object Exchange was added as an ECN to the PCIe r5.0 spec. + * + * Copyright (C) 2021 Huawei + * Jonathan Cameron + */ + +#include +#include +#include + +#ifndef LINUX_PCI_DOE_H +#define LINUX_PCI_DOE_H + +struct pci_doe_protocol { + u16 vid; + u8 type; +}; + +/** + * struct pci_doe_exchange - represents a single query/response + * + * @prot: DOE Protocol + * @request_pl: The request payload + * @request_pl_sz: Size of the request payload + * @response_pl: The response payload + * @response_pl_sz: Size of the response payload + */ +struct pci_doe_exchange { + struct pci_doe_protocol prot; + u32 *request_pl; + size_t request_pl_sz; + u32 *response_pl; + size_t response_pl_sz; +}; + +/** + * struct pci_doe_dev - DOE mailbox device + * + * @adrv: Auxiliary Driver data + * @pdev: PCI device this belongs to + * @offset: Capability offset + * + * This represents a single DOE mailbox device. Devices should create this + * device and register it on the Auxiliary bus for the DOE driver to maintain. 
+ * + */ +struct pci_doe_dev { + struct auxiliary_device adev; + struct pci_dev *pdev; + int cap_offset; +}; + +/* Library operations */ +int pci_doe_exchange_sync(struct pci_doe_dev *doe_dev, + struct pci_doe_exchange *ex); +bool pci_doe_supports_prot(struct pci_doe_dev *doe_dev, u16 vid, u8 type); + +#endif diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h index bee1a9ed6e66..4e96b45ee36d 100644 --- a/include/uapi/linux/pci_regs.h +++ b/include/uapi/linux/pci_regs.h @@ -736,7 +736,8 @@ #define PCI_EXT_CAP_ID_DVSEC 0x23 /* Designated Vendor-Specific */ #define PCI_EXT_CAP_ID_DLF 0x25 /* Data Link Feature */ #define PCI_EXT_CAP_ID_PL_16GT 0x26 /* Physical Layer 16.0 GT/s */ -#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PL_16GT +#define PCI_EXT_CAP_ID_DOE 0x2E /* Data Object Exchange */ +#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_DOE #define PCI_EXT_CAP_DSN_SIZEOF 12 #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40 @@ -1102,4 +1103,30 @@ #define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK 0x000000F0 #define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT 4 +/* Data Object Exchange */ +#define PCI_DOE_CAP 0x04 /* DOE Capabilities Register */ +#define PCI_DOE_CAP_INT 0x00000001 /* Interrupt Support */ +#define PCI_DOE_CAP_IRQ 0x00000ffe /* Interrupt Message Number */ +#define PCI_DOE_CTRL 0x08 /* DOE Control Register */ +#define PCI_DOE_CTRL_ABORT 0x00000001 /* DOE Abort */ +#define PCI_DOE_CTRL_INT_EN 0x00000002 /* DOE Interrupt Enable */ +#define PCI_DOE_CTRL_GO 0x80000000 /* DOE Go */ +#define PCI_DOE_STATUS 0x0c /* DOE Status Register */ +#define PCI_DOE_STATUS_BUSY 0x00000001 /* DOE Busy */ +#define PCI_DOE_STATUS_INT_STATUS 0x00000002 /* DOE Interrupt Status */ +#define PCI_DOE_STATUS_ERROR 0x00000004 /* DOE Error */ +#define PCI_DOE_STATUS_DATA_OBJECT_READY 0x80000000 /* Data Object Ready */ +#define PCI_DOE_WRITE 0x10 /* DOE Write Data Mailbox Register */ +#define PCI_DOE_READ 0x14 /* DOE Read Data Mailbox Register */ + +/* DOE Data Object - note not actually registers */ +#define PCI_DOE_DATA_OBJECT_HEADER_1_VID 0x0000ffff +#define PCI_DOE_DATA_OBJECT_HEADER_1_TYPE 0x00ff0000 +#define PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH 0x0003ffff + +#define PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX 0x000000ff +#define PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID 0x0000ffff +#define PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL 0x00ff0000 +#define PCI_DOE_DATA_OBJECT_DISC_RSP_3_NEXT_INDEX 0xff000000 + #endif /* LINUX_PCI_REGS_H */ From patchwork Thu Mar 3 13:58:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767511 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 936C3C433EF for ; Thu, 3 Mar 2022 14:01:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233871AbiCCOBs (ORCPT ); Thu, 3 Mar 2022 09:01:48 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54502 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229881AbiCCOBr (ORCPT ); Thu, 3 Mar 2022 09:01:47 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 29710DB48B; Thu, 3 Mar 2022 06:00:59 -0800 (PST) Received: from fraeml737-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com 
(SkyGuard) with ESMTP id 4K8Xfl3sdCz67xgR; Thu, 3 Mar 2022 21:59:43 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml737-chm.china.huawei.com (10.206.15.218) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:00:58 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:00:57 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 04/14] PCI/DOE: Introduce pci_doe_create_doe_devices Date: Thu, 3 Mar 2022 13:58:55 +0000 Message-ID: <20220303135905.10420-5-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org From: Ira Weiny CXL and/or PCI devices can define DOE mailboxes. Normally the kernel will want to maintain control of all of these mailboxes. However, under a limited number of use cases users may want to allow user space access to some of these mailboxes while the kernel retains control of the rest. An example of this is for CXL Compliance Testing (see CXL 2.0 14.16.4 Compliance Mode DOE) which offers a mechanism to set different test modes for a device. Rather than re-invent the wheel the architecture creates auxiliary devices for each DOE mailbox which can then be driven by a generic DOE mailbox driver. If access to an individual mailbox is required by user space the driver for that mailbox can be unloaded and access handed to user space. Create the helper pci_doe_create_doe_devices() which iterates each DOE mailbox found in the device and creates a DOE auxiliary device on the auxiliary bus. While doing so ensure that the auxiliary DOE driver loads to drive that device. Co-developed-by: Jonathan Cameron Signed-off-by: Jonathan Cameron Signed-off-by: Ira Weiny --- drivers/pci/doe.c | 123 ++++++++++++++++++++++++++++++++++++++++ include/linux/pci-doe.h | 3 + 2 files changed, 126 insertions(+) diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c index 4ff54bade8ec..1b2e69774ccf 100644 --- a/drivers/pci/doe.c +++ b/drivers/pci/doe.c @@ -383,6 +383,128 @@ static void pci_doe_task_complete(void *private) complete(private); } +static void pci_doe_free_irq_vectors(void *data) +{ + pci_free_irq_vectors(data); +} + +static DEFINE_IDA(pci_doe_adev_ida); + +static void pci_doe_dev_release(struct device *dev) +{ + struct auxiliary_device *adev = container_of(dev, + struct auxiliary_device, + dev); + struct pci_doe_dev *doe_dev = container_of(adev, struct pci_doe_dev, + adev); + + ida_free(&pci_doe_adev_ida, adev->id); + kfree(doe_dev); +} + +static void pci_doe_destroy_device(void *ad) +{ + auxiliary_device_delete(ad); + auxiliary_device_uninit(ad); +} + +/** + * pci_doe_create_doe_devices - Create auxiliary DOE devices for all DOE + * mailboxes found + * @pci_dev: The PCI device to scan for DOE mailboxes + * + * There is no coresponding destroy of these devices. 
This function associates + * the DOE auxiliary devices created with the pci_dev passed in. That + * association is device managed (devm_*) such that the DOE auxiliary device + * lifetime is always greater than or equal to the lifetime of the pci_dev. + * + * RETURNS: 0 on success -ERRNO on failure. + */ +int pci_doe_create_doe_devices(struct pci_dev *pdev) +{ + struct device *dev = &pdev->dev; + int irqs, rc; + u16 pos = 0; + + /* + * An implementation may support an unknown number of interrupts. + * Assume that number is not that large and request them all. + */ + irqs = pci_msix_vec_count(pdev); + rc = pci_alloc_irq_vectors(pdev, irqs, irqs, PCI_IRQ_MSIX); + if (rc != irqs) { + /* No interrupt available - carry on */ + pci_dbg(pdev, "No interrupts available for DOE\n"); + } else { + /* + * Enabling bus mastering is require for MSI/MSIx. It could be + * done later within the DOE initialization, but as it + * potentially has other impacts keep it here when setting up + * the IRQ's. + */ + pci_set_master(pdev); + rc = devm_add_action_or_reset(dev, + pci_doe_free_irq_vectors, + pdev); + if (rc) + return rc; + } + + pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_DOE); + + while (pos > 0) { + struct auxiliary_device *adev; + struct pci_doe_dev *new_dev; + int id; + + new_dev = kzalloc(sizeof(*new_dev), GFP_KERNEL); + if (!new_dev) + return -ENOMEM; + + new_dev->pdev = pdev; + new_dev->cap_offset = pos; + + /* Set up struct auxiliary_device */ + adev = &new_dev->adev; + id = ida_alloc(&pci_doe_adev_ida, GFP_KERNEL); + if (id < 0) { + kfree(new_dev); + return -ENOMEM; + } + + adev->id = id; + adev->name = DOE_DEV_NAME; + adev->dev.release = pci_doe_dev_release; + adev->dev.parent = dev; + + if (auxiliary_device_init(adev)) { + pci_doe_dev_release(&adev->dev); + return -EIO; + } + + if (auxiliary_device_add(adev)) { + auxiliary_device_uninit(adev); + return -EIO; + } + + rc = devm_add_action_or_reset(dev, pci_doe_destroy_device, adev); + if (rc) + return rc; + + if (device_attach(&adev->dev) != 1) { + dev_err(&adev->dev, + "Failed to attach a driver to DOE device %d\n", + adev->id); + return -ENODEV; + } + + pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_DOE); + } + + return 0; +} +EXPORT_SYMBOL_GPL(pci_doe_create_doe_devices); + /** * pci_doe_exchange_sync() - Send a request, then wait for and receive a * response @@ -639,6 +761,7 @@ static void pci_doe_remove(struct auxiliary_device *aux_dev) } static const struct auxiliary_device_id pci_doe_auxiliary_id_table[] = { + {.name = "pci_doe.doe", }, {}, }; diff --git a/include/linux/pci-doe.h b/include/linux/pci-doe.h index 2f52b31c6f32..9ae2e96a0211 100644 --- a/include/linux/pci-doe.h +++ b/include/linux/pci-doe.h @@ -13,6 +13,8 @@ #ifndef LINUX_PCI_DOE_H #define LINUX_PCI_DOE_H +#define DOE_DEV_NAME "doe" + struct pci_doe_protocol { u16 vid; u8 type; @@ -53,6 +55,7 @@ struct pci_doe_dev { }; /* Library operations */ +int pci_doe_create_doe_devices(struct pci_dev *pdev); int pci_doe_exchange_sync(struct pci_doe_dev *doe_dev, struct pci_doe_exchange *ex); bool pci_doe_supports_prot(struct pci_doe_dev *doe_dev, u16 vid, u8 type); From patchwork Thu Mar 3 13:58:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767512 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id 62776C433EF for ; Thu, 3 Mar 2022 14:01:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233875AbiCCOCT (ORCPT ); Thu, 3 Mar 2022 09:02:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54690 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229881AbiCCOCS (ORCPT ); Thu, 3 Mar 2022 09:02:18 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 56F3764BC9; Thu, 3 Mar 2022 06:01:30 -0800 (PST) Received: from fraeml734-chm.china.huawei.com (unknown [172.18.147.207]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8XgK6tnzz67xLh; Thu, 3 Mar 2022 22:00:13 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml734-chm.china.huawei.com (10.206.15.215) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:01:28 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:01:27 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 05/14] cxl/pci: Create DOE auxiliary devices Date: Thu, 3 Mar 2022 13:58:56 +0000 Message-ID: <20220303135905.10420-6-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org From: Ira Weiny CXL devices will need DOE mailbox access to read things like CDAT. Call the PCI core helper to find all DOE mailboxes on the device and create the auxiliary devices for those mailboxes. sysfs shows this relationship. Starting with a qemu system with 2 memory devices mem0 and mem1. 
$ ls -l /sys/bus/cxl/devices/mem* lrwxrwxrwx 1 root root 0 Jan 25 16:15 /sys/bus/cxl/devices/mem0 -> ../../../devices/pci0000:34/0000:34:00.0/0000:35:00.0/mem0 lrwxrwxrwx 1 root root 0 Jan 25 16:15 /sys/bus/cxl/devices/mem1 -> ../../../devices/pci0000:34/0000:34:01.0/0000:36:00.0/mem1 $ ls -l /sys/bus/auxiliary/devices/ total 0 lrwxrwxrwx 1 root root 0 Jan 25 16:16 pci_doe.doe.0 -> ../../../devices/pci0000:34/0000:34:00.0/0000:35:00.0/pci_doe.doe.0 lrwxrwxrwx 1 root root 0 Jan 25 16:16 pci_doe.doe.1 -> ../../../devices/pci0000:34/0000:34:01.0/0000:36:00.0/pci_doe.doe.1 lrwxrwxrwx 1 root root 0 Jan 25 16:16 pci_doe.doe.2 -> ../../../devices/pci0000:34/0000:34:01.0/0000:36:00.0/pci_doe.doe.2 lrwxrwxrwx 1 root root 0 Jan 25 16:16 pci_doe.doe.3 -> ../../../devices/pci0000:34/0000:34:00.0/0000:35:00.0/pci_doe.doe.3 $ ls -l /sys/bus/auxiliary/drivers total 0 drwxr-xr-x 2 root root 0 Jan 25 16:15 pci_doe.pci_doe Signed-off-by: Ira Weiny Signed-off-by: Jonathan Cameron --- drivers/cxl/Kconfig | 1 + drivers/cxl/pci.c | 13 +++++++++++++ 2 files changed, 14 insertions(+) diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 67c91378f2dd..9d53720bea07 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -16,6 +16,7 @@ if CXL_BUS config CXL_MEM tristate "CXL.mem: Memory Devices" default CXL_BUS + select PCI_DOE_DRIVER help The CXL.mem protocol allows a device to act as a provider of "System RAM" and/or "Persistent Memory" that is fully coherent diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 8dc91fd3396a..0adc7798e0cf 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include "cxlmem.h" #include "pci.h" @@ -471,6 +472,14 @@ static int cxl_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type, return rc; } +static int cxl_setup_doe_devices(struct cxl_dev_state *cxlds) +{ + struct device *dev = cxlds->dev; + struct pci_dev *pdev = to_pci_dev(dev); + + return pci_doe_create_doe_devices(pdev); +} + static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct cxl_register_map map; @@ -497,6 +506,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + rc = cxl_setup_doe_devices(cxlds); + if (rc) + return rc; + rc = cxl_map_regs(cxlds, &map); if (rc) return rc; From patchwork Thu Mar 3 13:58:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767513 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36DB0C433EF for ; Thu, 3 Mar 2022 14:02:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232100AbiCCODF (ORCPT ); Thu, 3 Mar 2022 09:03:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55260 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233897AbiCCOCs (ORCPT ); Thu, 3 Mar 2022 09:02:48 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6F1B188A08; Thu, 3 Mar 2022 06:02:00 -0800 (PST) Received: from fraeml713-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xgz1fv0z67btc; Thu, 3 Mar 2022 22:00:47 +0800 (CST) Received: 
from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml713-chm.china.huawei.com (10.206.15.32) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:01:58 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:01:58 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 06/14] cxl/pci: Find the DOE mailbox which supports CDAT Date: Thu, 3 Mar 2022 13:58:57 +0000 Message-ID: <20220303135905.10420-7-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org From: Ira Weiny Memory devices need the CDAT data from the device. This data is read from a DOE mailbox which supports the CDAT protocol. Search the DOE auxiliary devices for the one which supports the CDAT protocol. Cache that device to be used for future queries. Signed-off-by: Ira Weiny Signed-off-by: Jonathan Cameron --- drivers/cxl/cxl.h | 3 +++ drivers/cxl/cxlmem.h | 2 ++ drivers/cxl/pci.c | 43 ++++++++++++++++++++++++++++++++++++++++++- 3 files changed, 47 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index a5a0be3f088b..a5a071056dac 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -75,6 +75,9 @@ static inline int cxl_hdm_decoder_count(u32 cap_hdr) #define CXLDEV_MBOX_BG_CMD_STATUS_OFFSET 0x18 #define CXLDEV_MBOX_PAYLOAD_OFFSET 0x20 +#define CXL_DOE_PROTOCOL_COMPLIANCE 0 +#define CXL_DOE_PROTOCOL_TABLE_ACCESS 2 + /* * Using struct_group() allows for per register-block-type helper routines, * without requiring block-type agnostic code to include the prefix. diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 8d96d009ad90..176228d8c66d 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -97,6 +97,7 @@ struct cxl_mbox_cmd { * Currently only memory devices are represented. 
* * @dev: The device associated with this CXL state + * @cdat_doe: Auxiliary DOE device capabile of reading CDAT * @regs: Parsed register blocks * @payload_size: Size of space for payload * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) @@ -124,6 +125,7 @@ struct cxl_mbox_cmd { struct cxl_dev_state { struct device *dev; + struct pci_doe_dev *cdat_doe; struct cxl_regs regs; size_t payload_size; diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 0adc7798e0cf..adcabc0bcb38 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -472,12 +472,53 @@ static int cxl_setup_regs(struct pci_dev *pdev, enum cxl_regloc_type type, return rc; } +static int cxl_match_cdat_doe_device(struct device *dev, const void *data) +{ + const struct cxl_dev_state *cxlds = data; + struct auxiliary_device *adev; + struct pci_doe_dev *doe_dev; + + /* First determine if this auxiliary device belongs to the cxlds */ + if (cxlds->dev != dev->parent) + return 0; + + adev = to_auxiliary_dev(dev); + doe_dev = container_of(adev, struct pci_doe_dev, adev); + + /* If it is one of ours check for the CDAT protocol */ + if (pci_doe_supports_prot(doe_dev, PCI_DVSEC_VENDOR_ID_CXL, + CXL_DOE_PROTOCOL_TABLE_ACCESS)) + return 1; + + return 0; +} + static int cxl_setup_doe_devices(struct cxl_dev_state *cxlds) { struct device *dev = cxlds->dev; struct pci_dev *pdev = to_pci_dev(dev); + struct auxiliary_device *adev; + int rc; - return pci_doe_create_doe_devices(pdev); + rc = pci_doe_create_doe_devices(pdev); + if (rc) + return rc; + + adev = auxiliary_find_device(NULL, cxlds, &cxl_match_cdat_doe_device); + + if (adev) { + struct pci_doe_dev *doe_dev = container_of(adev, + struct pci_doe_dev, + adev); + + /* + * No reference need be taken. The DOE device lifetime is + * longer that the CXL device state lifetime + */ + cxlds->cdat_doe = doe_dev; + } + + return 0; } static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) From patchwork Thu Mar 3 13:58:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767514 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D51FC433F5 for ; Thu, 3 Mar 2022 14:02:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233897AbiCCODU (ORCPT ); Thu, 3 Mar 2022 09:03:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55886 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233902AbiCCODT (ORCPT ); Thu, 3 Mar 2022 09:03:19 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 79CE7320; Thu, 3 Mar 2022 06:02:31 -0800 (PST) Received: from fraeml715-chm.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xjk4yNqz67XhV; Thu, 3 Mar 2022 22:02:18 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml715-chm.china.huawei.com (10.206.15.34) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:02:29 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:02:28 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 07/14] cxl/mem: Read CDAT table Date: Thu, 3 Mar 2022 13:58:58 +0000 Message-ID: <20220303135905.10420-8-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org The OS will need CDAT data from the CXL devices to properly set up interleave sets. Search the DOE driver/devices attached to the CXL device for one which supports the CDAT protocol. If found, read the CDAT data from that mailbox. Currently this is only supported by a PCI CXL object through a DOE mailbox which supports CDAT. But any cxl_mem type object can provide this data later if need be. For example for testing. Cache this data for later parsing. Provide a sysfs binary attribute to allow dumping of the CDAT. Binary dumping is modeled on /sys/firmware/ACPI/tables/ The ability to dump this table will be very useful for emulation of real devices once they become available as QEMU CXL type 3 device emulation will be able to load this file in. This does not support table updates at runtime. It will always provide whatever was there when first cached. Handling of table updates can be implemented later. Once there are more users, this code can move out to driver/cxl/cdat.c or similar. Finally create a complete list of DOE defines within cdat.h for anyone wishing to decode the CDAT table. Co-developed-by: Ira Weiny Signed-off-by: Ira Weiny Signed-off-by: Jonathan Cameron --- drivers/cxl/cdat.h | 97 +++++++++++++++++++++++++++++++++++++++ drivers/cxl/core/memdev.c | 56 ++++++++++++++++++++++ drivers/cxl/cxlmem.h | 25 ++++++++++ drivers/cxl/pci.c | 87 +++++++++++++++++++++++++++++++++++ 4 files changed, 265 insertions(+) diff --git a/drivers/cxl/cdat.h b/drivers/cxl/cdat.h new file mode 100644 index 000000000000..4722b6bbbaf0 --- /dev/null +++ b/drivers/cxl/cdat.h @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __CXL_CDAT_H__ +#define __CXL_CDAT_H__ + +/* + * Coherent Device Attribute table (CDAT) + * + * Specification available from UEFI.org + * + * Whilst CDAT is defined as a single table, the access via DOE maiboxes is + * done one entry at a time, where the first entry is the header. + */ + +#define CXL_DOE_TABLE_ACCESS_REQ_CODE 0x000000ff +#define CXL_DOE_TABLE_ACCESS_REQ_CODE_READ 0 +#define CXL_DOE_TABLE_ACCESS_TABLE_TYPE 0x0000ff00 +#define CXL_DOE_TABLE_ACCESS_TABLE_TYPE_CDATA 0 +#define CXL_DOE_TABLE_ACCESS_ENTRY_HANDLE 0xffff0000 + +/* + * CDAT entries are little endian and are read from PCI config space which + * is also little endian. + * As such, on a big endian system these will have been reversed. + * This prevents us from making easy use of packed structures. 
+ * Style form pci_regs.h + */ + +#define CDAT_HEADER_LENGTH_DW 4 +#define CDAT_HEADER_LENGTH_BYTES (CDAT_HEADER_LENGTH_DW * sizeof(u32)) +#define CDAT_HEADER_DW0_LENGTH 0xffffffff +#define CDAT_HEADER_DW1_REVISION 0x000000ff +#define CDAT_HEADER_DW1_CHECKSUM 0x0000ff00 +/* CDAT_HEADER_DW2_RESERVED */ +#define CDAT_HEADER_DW3_SEQUENCE 0xffffffff + +/* All structures have a common first DW */ +#define CDAT_STRUCTURE_DW0_TYPE 0x000000ff +#define CDAT_STRUCTURE_DW0_TYPE_DSMAS 0 +#define CDAT_STRUCTURE_DW0_TYPE_DSLBIS 1 +#define CDAT_STRUCTURE_DW0_TYPE_DSMSCIS 2 +#define CDAT_STRUCTURE_DW0_TYPE_DSIS 3 +#define CDAT_STRUCTURE_DW0_TYPE_DSEMTS 4 +#define CDAT_STRUCTURE_DW0_TYPE_SSLBIS 5 + +#define CDAT_STRUCTURE_DW0_LENGTH 0xffff0000 + +/* Device Scoped Memory Affinity Structure */ +#define CDAT_DSMAS_DW1_DSMAD_HANDLE 0x000000ff +#define CDAT_DSMAS_DW1_FLAGS 0x0000ff00 +#define CDAT_DSMAS_DPA_OFFSET(entry) ((u64)((entry)[3]) << 32 | (entry)[2]) +#define CDAT_DSMAS_DPA_LEN(entry) ((u64)((entry)[5]) << 32 | (entry)[4]) +#define CDAT_DSMAS_NON_VOLATILE(flags) ((flags & 0x04) >> 2) + +/* Device Scoped Latency and Bandwidth Information Structure */ +#define CDAT_DSLBIS_DW1_HANDLE 0x000000ff +#define CDAT_DSLBIS_DW1_FLAGS 0x0000ff00 +#define CDAT_DSLBIS_DW1_DATA_TYPE 0x00ff0000 +#define CDAT_DSLBIS_BASE_UNIT(entry) ((u64)((entry)[3]) << 32 | (entry)[2]) +#define CDAT_DSLBIS_DW4_ENTRY_0 0x0000ffff +#define CDAT_DSLBIS_DW4_ENTRY_1 0xffff0000 +#define CDAT_DSLBIS_DW5_ENTRY_2 0x0000ffff + +/* Device Scoped Memory Side Cache Information Structure */ +#define CDAT_DSMSCIS_DW1_HANDLE 0x000000ff +#define CDAT_DSMSCIS_MEMORY_SIDE_CACHE_SIZE(entry) \ + ((u64)((entry)[3]) << 32 | (entry)[2]) +#define CDAT_DSMSCIS_DW4_MEMORY_SIDE_CACHE_ATTRS 0xffffffff + +/* Device Scoped Initiator Structure */ +#define CDAT_DSIS_DW1_FLAGS 0x000000ff +#define CDAT_DSIS_DW1_HANDLE 0x0000ff00 + +/* Device Scoped EFI Memory Type Structure */ +#define CDAT_DSEMTS_DW1_HANDLE 0x000000ff +#define CDAT_DSEMTS_DW1_EFI_MEMORY_TYPE_ATTR 0x0000ff00 +#define CDAT_DSEMTS_DPA_OFFSET(entry) ((u64)((entry)[3]) << 32 | (entry)[2]) +#define CDAT_DSEMTS_DPA_LENGTH(entry) ((u64)((entry)[5]) << 32 | (entry)[4]) + +/* Switch Scoped Latency and Bandwidth Information Structure */ +#define CDAT_SSLBIS_DW1_DATA_TYPE 0x000000ff +#define CDAT_SSLBIS_BASE_UNIT(entry) ((u64)((entry)[3]) << 32 | (entry)[2]) +#define CDAT_SSLBIS_ENTRY_PORT_X(entry, i) ((entry)[4 + (i) * 2] & 0x0000ffff) +#define CDAT_SSLBIS_ENTRY_PORT_Y(entry, i) (((entry)[4 + (i) * 2] & 0xffff0000) >> 16) +#define CDAT_SSLBIS_ENTRY_LAT_OR_BW(entry, i) ((entry)[4 + (i) * 2 + 1] & 0x0000ffff) + +/** + * struct cxl_cdat - CXL CDAT data + * + * @table: cache of CDAT table + * @length: length of cached CDAT table + */ +struct cxl_cdat { + void *table; + size_t length; +}; + +#endif /* !__CXL_CDAT_H__ */ diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 61029cb7ac62..82e84fac2eca 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -86,6 +86,35 @@ static ssize_t pmem_size_show(struct device *dev, struct device_attribute *attr, return sysfs_emit(buf, "%#llx\n", len); } +static ssize_t CDAT_read(struct file *filp, struct kobject *kobj, + struct bin_attribute *bin_attr, char *buf, + loff_t offset, size_t count) +{ + struct device *dev = kobj_to_dev(kobj); + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + + if (!cxlmd->cdat.table) + return 0; + + return memory_read_from_buffer(buf, count, &offset, + cxlmd->cdat.table, + cxlmd->cdat.length); +} + +static 
BIN_ATTR_RO(CDAT, 0); + +static umode_t cxl_memdev_bin_attr_is_visible(struct kobject *kobj, + struct bin_attribute *attr, int i) +{ + struct device *dev = kobj_to_dev(kobj); + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + + if ((attr == &bin_attr_CDAT) && cxlmd->cdat.table) + return 0400; + + return 0; +} + static struct device_attribute dev_attr_pmem_size = __ATTR(size, 0444, pmem_size_show, NULL); @@ -96,6 +125,11 @@ static struct attribute *cxl_memdev_attributes[] = { NULL, }; +static struct bin_attribute *cxl_memdev_bin_attributes[] = { + &bin_attr_CDAT, + NULL, +}; + static struct attribute *cxl_memdev_pmem_attributes[] = { &dev_attr_pmem_size.attr, NULL, @@ -108,6 +142,8 @@ static struct attribute *cxl_memdev_ram_attributes[] = { static struct attribute_group cxl_memdev_attribute_group = { .attrs = cxl_memdev_attributes, + .bin_attrs = cxl_memdev_bin_attributes, + .is_bin_visible = cxl_memdev_bin_attr_is_visible, }; static struct attribute_group cxl_memdev_ram_attribute_group = { @@ -276,6 +312,21 @@ static const struct file_operations cxl_memdev_fops = { .llseek = noop_llseek, }; +static int read_cdat_data(struct cxl_memdev *cxlmd, struct cxl_dev_state *cxlds) +{ + struct device *dev = &cxlmd->dev; + size_t cdat_length; + + if (cxl_mem_cdat_get_length(cxlds, &cdat_length)) + return 0; + + cxlmd->cdat.table = devm_kzalloc(dev, cdat_length, GFP_KERNEL); + if (!cxlmd->cdat.table) + return -ENOMEM; + cxlmd->cdat.length = cdat_length; + return cxl_mem_cdat_read_table(cxlds, &cxlmd->cdat); +} + struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) { struct cxl_memdev *cxlmd; @@ -292,6 +343,11 @@ struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) if (rc) goto err; + /* Cache the data early to ensure is_visible() works */ + rc = read_cdat_data(cxlmd, cxlds); + if (rc) + goto err; + /* * Activate ioctl operations, no cxl_memdev_rwsem manipulation * needed as this is ordered with cdev_add() publishing the device. diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 176228d8c66d..b4209170f4ac 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -5,6 +5,7 @@ #include #include #include "cxl.h" +#include "cdat.h" /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ #define CXLMDEV_STATUS_OFFSET 0x0 @@ -40,6 +41,7 @@ struct cxl_memdev { struct device dev; struct cdev cdev; struct cxl_dev_state *cxlds; + struct cxl_cdat cdat; int id; }; @@ -118,6 +120,10 @@ struct cxl_mbox_cmd { * @next_volatile_bytes: volatile capacity change pending device reset * @next_persistent_bytes: persistent capacity change pending device reset * @mbox_send: @dev specific transport for transmitting mailbox commands + * @cdat_get_length: @dev specific function for reading the CDAT table length + * returns -errno if CDAT not supported on this device + * @cdat_read_table: @dev specific function for reading the table + * returns -errno if CDAT not supported on this device * * See section 8.2.9.5.2 Capacity Configuration and Label Storage for * details on capacity parameters. 
@@ -148,6 +154,9 @@ struct cxl_dev_state { u64 next_persistent_bytes; int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); + int (*cdat_get_length)(struct cxl_dev_state *cxlds, size_t *length); + int (*cdat_read_table)(struct cxl_dev_state *cxlds, + struct cxl_cdat *cdat); }; enum cxl_opcode { @@ -266,4 +275,20 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds); struct cxl_dev_state *cxl_dev_state_create(struct device *dev); void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); + +static inline int cxl_mem_cdat_get_length(struct cxl_dev_state *cxlds, size_t *length) +{ + if (cxlds->cdat_get_length) + return cxlds->cdat_get_length(cxlds, length); + return -EOPNOTSUPP; +} + +static inline int cxl_mem_cdat_read_table(struct cxl_dev_state *cxlds, + struct cxl_cdat *cdat) +{ + if (cxlds->cdat_read_table) + return cxlds->cdat_read_table(cxlds, cdat); + return -EOPNOTSUPP; +} + #endif /* __CXL_MEM_H__ */ diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index adcabc0bcb38..ebd98a8a310f 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -11,6 +11,7 @@ #include "cxlmem.h" #include "pci.h" #include "cxl.h" +#include "cdat.h" /** * DOC: cxl pci @@ -521,6 +522,90 @@ static int cxl_setup_doe_devices(struct cxl_dev_state *cxlds) return 0; } +#define CDAT_DOE_REQ(entry_handle) \ + (FIELD_PREP(CXL_DOE_TABLE_ACCESS_REQ_CODE, \ + CXL_DOE_TABLE_ACCESS_REQ_CODE_READ) | \ + FIELD_PREP(CXL_DOE_TABLE_ACCESS_TABLE_TYPE, \ + CXL_DOE_TABLE_ACCESS_TABLE_TYPE_CDATA) | \ + FIELD_PREP(CXL_DOE_TABLE_ACCESS_ENTRY_HANDLE, (entry_handle))) + +static int cxl_cdat_get_length(struct cxl_dev_state *cxlds, size_t *length) +{ + struct pci_doe_dev *doe_dev = cxlds->cdat_doe; + u32 cdat_request_pl = CDAT_DOE_REQ(0); + u32 cdat_response_pl[32]; + struct pci_doe_exchange ex = { + .prot.vid = PCI_DVSEC_VENDOR_ID_CXL, + .prot.type = CXL_DOE_PROTOCOL_TABLE_ACCESS, + .request_pl = &cdat_request_pl, + .request_pl_sz = sizeof(cdat_request_pl), + .response_pl = cdat_response_pl, + .response_pl_sz = sizeof(cdat_response_pl), + }; + + ssize_t rc; + + rc = pci_doe_exchange_sync(doe_dev, &ex); + if (rc < 0) + return rc; + if (rc < 1) + return -EIO; + + *length = cdat_response_pl[1]; + return 0; +} + +static int cxl_cdat_read_table(struct cxl_dev_state *cxlds, + struct cxl_cdat *cdat) +{ + struct pci_doe_dev *doe_dev = cxlds->cdat_doe; + size_t length = cdat->length; + u32 *data = cdat->table; + int entry_handle = 0; + int rc; + + do { + u32 cdat_request_pl = CDAT_DOE_REQ(entry_handle); + u32 cdat_response_pl[32]; + struct pci_doe_exchange ex = { + .prot.vid = PCI_DVSEC_VENDOR_ID_CXL, + .prot.type = CXL_DOE_PROTOCOL_TABLE_ACCESS, + .request_pl = &cdat_request_pl, + .request_pl_sz = sizeof(cdat_request_pl), + .response_pl = cdat_response_pl, + .response_pl_sz = sizeof(cdat_response_pl), + }; + size_t entry_dw; + u32 *entry; + + rc = pci_doe_exchange_sync(doe_dev, &ex); + if (rc < 0) + return rc; + + entry = cdat_response_pl + 1; + entry_dw = rc / sizeof(u32); + /* Skip Header */ + entry_dw -= 1; + entry_dw = min(length / 4, entry_dw); + memcpy(data, entry, entry_dw * sizeof(u32)); + length -= entry_dw * sizeof(u32); + data += entry_dw; + entry_handle = FIELD_GET(CXL_DOE_TABLE_ACCESS_ENTRY_HANDLE, cdat_response_pl[0]); + + } while (entry_handle != 0xFFFF); + + return 0; +} + +static void cxl_initialize_cdat_callbacks(struct cxl_dev_state *cxlds) +{ + if (!cxlds->cdat_doe) + return; + + 
cxlds->cdat_get_length = cxl_cdat_get_length; + cxlds->cdat_read_table = cxl_cdat_read_table; +} + static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct cxl_register_map map; @@ -551,6 +636,8 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + cxl_initialize_cdat_callbacks(cxlds); + rc = cxl_map_regs(cxlds, &map); if (rc) return rc; From patchwork Thu Mar 3 13:58:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767515 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0DE6BC433F5 for ; Thu, 3 Mar 2022 14:03:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232974AbiCCODs (ORCPT ); Thu, 3 Mar 2022 09:03:48 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:57856 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233798AbiCCODs (ORCPT ); Thu, 3 Mar 2022 09:03:48 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A51AF427CF; Thu, 3 Mar 2022 06:03:02 -0800 (PST) Received: from fraeml712-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xj55DSmz67x9g; Thu, 3 Mar 2022 22:01:45 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml712-chm.china.huawei.com (10.206.15.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:03:00 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:02:59 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 08/14] cxl/cdat: Introduce cdat_hdr_valid() Date: Thu, 3 Mar 2022 13:58:59 +0000 Message-ID: <20220303135905.10420-9-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org From: Ira Weiny The CDAT data is protected by a checksum which should be checked when the CDAT is read to ensure it is valid. In addition the lengths specified should be checked. Introduce cdat_hdr_valid() to check the checksum. While at it check and store the sequence number. 
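For reference, the checksum rule being enforced is the usual table checksum convention (as for ACPI tables): every byte of the table, including the checksum byte itself, must sum to zero modulo 256. A minimal sketch of just that rule follows (illustrative only, not part of this patch; the name cdat_checksum_ok() is made up here, and the real check below, cxl_cdat_hdr_valid(), additionally validates the length field and tracks the sequence number):

static bool cdat_checksum_ok(const u8 *table, size_t length)
{
	u8 sum = 0;
	size_t i;

	/* Sum of all bytes, including the checksum byte, must be 0 (mod 256) */
	for (i = 0; i < length; i++)
		sum += table[i];

	return sum == 0;
}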
Signed-off-by: Ira Weiny Signed-off-by: Jonathan Cameron --- drivers/cxl/cdat.h | 2 ++ drivers/cxl/pci.c | 32 ++++++++++++++++++++++++++++++++ 2 files changed, 34 insertions(+) diff --git a/drivers/cxl/cdat.h b/drivers/cxl/cdat.h index 4722b6bbbaf0..a7725d26f2d2 100644 --- a/drivers/cxl/cdat.h +++ b/drivers/cxl/cdat.h @@ -88,10 +88,12 @@ * * @table: cache of CDAT table * @length: length of cached CDAT table + * @seq: Last read Sequence number of the CDAT table */ struct cxl_cdat { void *table; size_t length; + u32 seq; }; #endif /* !__CXL_CDAT_H__ */ diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index ebd98a8a310f..ed94a6bef2de 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -522,6 +522,35 @@ static int cxl_setup_doe_devices(struct cxl_dev_state *cxlds) return 0; } +static bool cxl_cdat_hdr_valid(struct device *dev, struct cxl_cdat *cdat) +{ + u32 *table = cdat->table; + u8 *data8 = cdat->table; + u32 length, seq; + u8 check; + int i; + + length = FIELD_GET(CDAT_HEADER_DW0_LENGTH, table[0]); + if (length < CDAT_HEADER_LENGTH_BYTES) + return false; + + if (length > cdat->length) + return false; + + seq = FIELD_GET(CDAT_HEADER_DW3_SEQUENCE, table[3]); + + /* Store the sequence for now. */ + if (cdat->seq != seq) { + dev_info(dev, "CDAT seq change %x -> %x\n", cdat->seq, seq); + cdat->seq = seq; + } + + for (check = 0, i = 0; i < length; i++) + check += data8[i]; + + return check == 0; +} + #define CDAT_DOE_REQ(entry_handle) \ (FIELD_PREP(CXL_DOE_TABLE_ACCESS_REQ_CODE, \ CXL_DOE_TABLE_ACCESS_REQ_CODE_READ) | \ @@ -594,6 +623,9 @@ static int cxl_cdat_read_table(struct cxl_dev_state *cxlds, } while (entry_handle != 0xFFFF); + if (!cxl_cdat_hdr_valid(cxlds->dev, cdat)) + return -EIO; + return 0; } From patchwork Thu Mar 3 13:59:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767516 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A280BC433FE for ; Thu, 3 Mar 2022 14:03:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233930AbiCCOET (ORCPT ); Thu, 3 Mar 2022 09:04:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59972 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232569AbiCCOES (ORCPT ); Thu, 3 Mar 2022 09:04:18 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CB9751A380; Thu, 3 Mar 2022 06:03:32 -0800 (PST) Received: from fraeml709-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xjh37kdz67s7H; Thu, 3 Mar 2022 22:02:16 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml709-chm.china.huawei.com (10.206.15.37) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:03:30 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:03:30 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . 
Box" , Subject: [RFC PATCH v2 09/14] cxl/mem: Retry reading CDAT on failure Date: Thu, 3 Mar 2022 13:59:00 +0000 Message-ID: <20220303135905.10420-10-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org From: Ira Weiny The CDAT read may fail for a number of reasons but mainly it is possible to get different parts of a valid state. The checksum in the CDAT table protects against this. Now that the checksum is validated issue a retry if the CDAT read fails. For now 2 retries are implemented. Signed-off-by: Ira Weiny Signed-off-by: Jonathan Cameron --- drivers/cxl/core/memdev.c | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 82e84fac2eca..66229ba2a0f8 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -312,7 +312,8 @@ static const struct file_operations cxl_memdev_fops = { .llseek = noop_llseek, }; -static int read_cdat_data(struct cxl_memdev *cxlmd, struct cxl_dev_state *cxlds) +static int __read_cdat_data(struct cxl_memdev *cxlmd, + struct cxl_dev_state *cxlds) { struct device *dev = &cxlmd->dev; size_t cdat_length; @@ -327,6 +328,20 @@ static int read_cdat_data(struct cxl_memdev *cxlmd, struct cxl_dev_state *cxlds) return cxl_mem_cdat_read_table(cxlds, &cxlmd->cdat); } +static int read_cdat_data(struct cxl_memdev *cxlmd, + struct cxl_dev_state *cxlds) +{ + int retries = 2; + int rc; + + while (--retries) { + rc = __read_cdat_data(cxlmd, cxlds); + if (!rc) + break; + } + return rc; +} + struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) { struct cxl_memdev *cxlmd; From patchwork Thu Mar 3 13:59:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767517 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E160BC433F5 for ; Thu, 3 Mar 2022 14:04:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232183AbiCCOEt (ORCPT ); Thu, 3 Mar 2022 09:04:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33526 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231287AbiCCOEs (ORCPT ); Thu, 3 Mar 2022 09:04:48 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 02A04419B9; Thu, 3 Mar 2022 06:04:03 -0800 (PST) Received: from fraeml708-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8XkK29FXz67Nm9; Thu, 3 Mar 2022 22:02:49 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml708-chm.china.huawei.com (10.206.15.36) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:04:01 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by 
lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:04:00 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 10/14] cxl/cdat: Parse out DSMAS data from CDAT table Date: Thu, 3 Mar 2022 13:59:01 +0000 Message-ID: <20220303135905.10420-11-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org From: Ira Weiny CXL memory devices need the information in the Device Scoped Memory Affinity Structure (DSMAS). This information is contained within the CDAT table buffer which is already read and cached. Parse and cache DSMAS data from the CDAT table. Store this data in unmarshaled struct dsmas data structures for ease of use. Signed-off-by: Ira Weiny Signed-off-by: Jonathan Cameron --- drivers/cxl/cdat.h | 21 ++++++++++++ drivers/cxl/core/memdev.c | 70 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 91 insertions(+) diff --git a/drivers/cxl/cdat.h b/drivers/cxl/cdat.h index a7725d26f2d2..f8c126190d18 100644 --- a/drivers/cxl/cdat.h +++ b/drivers/cxl/cdat.h @@ -83,17 +83,38 @@ #define CDAT_SSLBIS_ENTRY_PORT_Y(entry, i) (((entry)[4 + (i) * 2] & 0xffff0000) >> 16) #define CDAT_SSLBIS_ENTRY_LAT_OR_BW(entry, i) ((entry)[4 + (i) * 2 + 1] & 0x0000ffff) +/** + * struct cxl_dsmas - host unmarshaled version of DSMAS data + * + * As defined in the Coherent Device Attribute Table (CDAT) specification this + * represents a single DSMAS entry in that table. 
+ * + * @dpa_base: The lowest DPA address associated with this DSMAD + * @dpa_length: Length in bytes of this DSMAD + * @non_volatile: If set, the memory region represents Non-Volatile memory + */ +struct cxl_dsmas { + u64 dpa_base; + u64 dpa_length; + /* Flags */ + u8 non_volatile:1; +}; + /** * struct cxl_cdat - CXL CDAT data * * @table: cache of CDAT table * @length: length of cached CDAT table * @seq: Last read Sequence number of the CDAT table + * @dsmas_ary: Array of DSMAS entries as parsed from the CDAT table + * @nr_dsmas: Number of entries in dsmas_ary */ struct cxl_cdat { void *table; size_t length; u32 seq; + struct cxl_dsmas *dsmas_ary; + int nr_dsmas; }; #endif /* !__CXL_CDAT_H__ */ diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 66229ba2a0f8..2d909830918f 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -6,6 +6,7 @@ #include #include #include +#include "cdat.h" #include "core.h" static DECLARE_RWSEM(cxl_memdev_rwsem); @@ -342,6 +343,71 @@ static int read_cdat_data(struct cxl_memdev *cxlmd, return rc; } +static int parse_dsmas(struct cxl_memdev *cxlmd) +{ + struct cxl_dsmas *dsmas_ary = NULL; + u32 *data = cxlmd->cdat.table; + int bytes_left = cxlmd->cdat.length; + int nr_dsmas = 0; + + if (!data) + return -ENXIO; + + /* Skip header */ + data += CDAT_HEADER_LENGTH_DW; + bytes_left -= CDAT_HEADER_LENGTH_BYTES; + + while (bytes_left > 0) { + u32 *cur_rec = data; + u8 type = FIELD_GET(CDAT_STRUCTURE_DW0_TYPE, cur_rec[0]); + u16 length = FIELD_GET(CDAT_STRUCTURE_DW0_LENGTH, cur_rec[0]); + + if (type == CDAT_STRUCTURE_DW0_TYPE_DSMAS) { + struct cxl_dsmas *new_ary; + u8 flags; + + new_ary = devm_krealloc(&cxlmd->dev, dsmas_ary, + sizeof(*dsmas_ary) * (nr_dsmas + 1), + GFP_KERNEL); + if (!new_ary) { + dev_err(&cxlmd->dev, + "Failed to allocate memory for DSMAS data\n"); + return -ENOMEM; + } + dsmas_ary = new_ary; + + flags = FIELD_GET(CDAT_DSMAS_DW1_FLAGS, cur_rec[1]); + + dsmas_ary[nr_dsmas].dpa_base = CDAT_DSMAS_DPA_OFFSET(cur_rec); + dsmas_ary[nr_dsmas].dpa_length = CDAT_DSMAS_DPA_LEN(cur_rec); + dsmas_ary[nr_dsmas].non_volatile = CDAT_DSMAS_NON_VOLATILE(flags); + + dev_dbg(&cxlmd->dev, "DSMAS %d: %llx:%llx %s\n", + nr_dsmas, + dsmas_ary[nr_dsmas].dpa_base, + dsmas_ary[nr_dsmas].dpa_base + + dsmas_ary[nr_dsmas].dpa_length, + (dsmas_ary[nr_dsmas].non_volatile ? + "Persistent" : "Volatile") + ); + + nr_dsmas++; + } + + data += (length / sizeof(u32)); + bytes_left -= length; + } + + if (nr_dsmas == 0) + return -ENXIO; + + dev_dbg(&cxlmd->dev, "Found %d DSMAS entries\n", nr_dsmas); + cxlmd->cdat.dsmas_ary = dsmas_ary; + cxlmd->cdat.nr_dsmas = nr_dsmas; + + return 0; +} + struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) { struct cxl_memdev *cxlmd; @@ -363,6 +429,10 @@ struct cxl_memdev *devm_cxl_add_memdev(struct cxl_dev_state *cxlds) if (rc) goto err; + rc = parse_dsmas(cxlmd); + if (rc) + dev_warn(dev, "No DSMAS data found: %d\n", rc); + /* * Activate ioctl operations, no cxl_memdev_rwsem manipulation * needed as this is ordered with cdev_add() publishing the device. 
From patchwork Thu Mar 3 13:59:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767518 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9DDE3C433EF for ; Thu, 3 Mar 2022 14:04:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231287AbiCCOFU (ORCPT ); Thu, 3 Mar 2022 09:05:20 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35264 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232127AbiCCOFT (ORCPT ); Thu, 3 Mar 2022 09:05:19 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9C8B9424BE; Thu, 3 Mar 2022 06:04:33 -0800 (PST) Received: from fraeml706-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xks1mSMz67Zts; Thu, 3 Mar 2022 22:03:17 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml706-chm.china.huawei.com (10.206.15.55) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.21; Thu, 3 Mar 2022 15:04:31 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:04:31 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 11/14] lib/asn1_encoder: Add a function to encode many byte integer values. Date: Thu, 3 Mar 2022 13:59:02 +0000 Message-ID: <20220303135905.10420-12-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org An example is the encoding of ECC signatures used by the ECDSA signature verification code. A user is the new SPDM support where raw signatures are returned by the responder. 
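For illustration only (not part of this patch), wrapping a raw 64 byte NIST P-256 signature (r followed by s) for verify_signature() might look roughly as below. The helper name is made up, raw_sig stands in for the buffer returned by the responder, and the encoding/hash_algo choices simply follow what the CHALLENGE_AUTH code later in this series does:

static int example_verify_p256_raw_sig(struct key *key, const u8 *raw_sig,
					const u8 *digest, unsigned int digest_size)
{
	unsigned char ints[128] = {}, seq[128] = {};
	struct public_key_signature sig = {
		.digest = (u8 *)digest,
		.digest_size = digest_size,
		.encoding = "raw",
		.hash_algo = "sha256",
	};
	unsigned char *p, *end;

	/* Encode r then s as two large positive ASN.1 INTEGERs (32 bytes each) */
	p = asn1_encode_integer_large_positive(ints, ints + sizeof(ints),
					       ASN1_INT, raw_sig, 32);
	p = asn1_encode_integer_large_positive(p, ints + sizeof(ints),
					       ASN1_INT, raw_sig + 32, 32);
	if (IS_ERR(p))
		return PTR_ERR(p);

	/* Wrap both INTEGERs in a SEQUENCE to form the encoded signature */
	end = asn1_encode_sequence(seq, seq + sizeof(seq), ints, p - ints);
	if (IS_ERR(end))
		return PTR_ERR(end);

	sig.s = seq;
	sig.s_size = end - seq;

	return verify_signature(key, &sig);
}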
These can then be encoded so that we can pass them to signature_verify() Signed-off-by: Jonathan Cameron --- include/linux/asn1_encoder.h | 3 ++ lib/asn1_encoder.c | 54 ++++++++++++++++++++++++++++++++++++ 2 files changed, 57 insertions(+) diff --git a/include/linux/asn1_encoder.h b/include/linux/asn1_encoder.h index 08cd0c2ad34f..30c3ebacd46c 100644 --- a/include/linux/asn1_encoder.h +++ b/include/linux/asn1_encoder.h @@ -19,6 +19,9 @@ unsigned char * asn1_encode_tag(unsigned char *data, const unsigned char *end_data, u32 tag, const unsigned char *string, int len); unsigned char * +asn1_encode_integer_large_positive(unsigned char *data, const unsigned char *end_data, + u32 tag, const unsigned char *integer, int len); +unsigned char * asn1_encode_octet_string(unsigned char *data, const unsigned char *end_data, const unsigned char *string, u32 len); diff --git a/lib/asn1_encoder.c b/lib/asn1_encoder.c index 0fd3c454a468..3aaffe6a376d 100644 --- a/lib/asn1_encoder.c +++ b/lib/asn1_encoder.c @@ -315,6 +315,60 @@ asn1_encode_tag(unsigned char *data, const unsigned char *end_data, } EXPORT_SYMBOL_GPL(asn1_encode_tag); +unsigned char * +asn1_encode_integer_large_positive(unsigned char *data, const unsigned char *end_data, + u32 tag, const unsigned char *integer, int len) +{ + int data_len = end_data - data; + unsigned char *d = &data[2]; + bool found = false; + int i; + + if (WARN(tag > 30, "ASN.1 tag can't be > 30")) + return ERR_PTR(-EINVAL); + + if (!integer && WARN(len > 127, + "BUG: recode tag is too big (>127)")) + return ERR_PTR(-EINVAL); + + if (IS_ERR(data)) + return data; + + if (data_len < 3) + return ERR_PTR(-EINVAL); + + + data[0] = _tagn(UNIV, PRIM, tag); + /* Leave space for length */ + data_len -= 2; + + for (i = 0; i < len; i++) { + int byte = integer[i]; + + if (!found && byte == 0) + continue; + + /* + * as per encode_integer + */ + if (!found && (byte & 0x80)) { + *d++ = 0; + data_len--; + } + found = true; + if (data_len == 0) + return ERR_PTR(-EINVAL); + + *d++ = byte; + data_len--; + } + + data[1] = d - data - 2; + + return d; +} +EXPORT_SYMBOL_GPL(asn1_encode_integer_large_positive); + /** * asn1_encode_octet_string() - encode an ASN.1 OCTET STRING * @data: pointer to encode at From patchwork Thu Mar 3 13:59:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767528 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 16AFBC433F5 for ; Thu, 3 Mar 2022 14:05:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232381AbiCCOFx (ORCPT ); Thu, 3 Mar 2022 09:05:53 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37296 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230456AbiCCOFw (ORCPT ); Thu, 3 Mar 2022 09:05:52 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4989338BF1; Thu, 3 Mar 2022 06:05:04 -0800 (PST) Received: from fraeml704-chm.china.huawei.com (unknown [172.18.147.201]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8XlR64L3z67xmM; Thu, 3 Mar 2022 22:03:47 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml704-chm.china.huawei.com (10.206.15.53) with Microsoft SMTP 
Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.21; Thu, 3 Mar 2022 15:05:01 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:05:01 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 12/14] spdm: Introduce a library for DMTF SPDM Date: Thu, 3 Mar 2022 13:59:03 +0000 Message-ID: <20220303135905.10420-13-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org
The Security Protocol and Data Model (SPDM) defines messages, data objects and sequences for performing message exchanges between devices over various transports and physical media. As the kernel supports several possible transports (mctp, PCI DOE), introduce a library that can in turn be used with all of those transports.
There are a large number of open questions around how we do this that need to be resolved. These include:
* Keyring management
  - The current approach is to use a keyring provided as part of per-transport initialization for the root certificates, which are assumed to have been loaded into that keyring, perhaps in an initrd script.
  - Each SPDM instance then has its own keyring to manage its certificates. It may make sense to drop this, but that looks like it will make a lot of the standard infrastructure harder to use.
* ECC algorithms needing ASN1 encoded signatures. I'm struggling to find any specification that actually 'requires' that choice vs raw data, so my guess is that this is a question of existing use cases (x509 certs seem to use this form, but CHALLENGE_AUTH SPDM seems to use raw data). I'm not sure whether we are better off just encoding the signature in ASN1 as currently done in this series, or if it is worth tweaking things in the crypto layers.
* Lots of options in the actual implementation to look at.
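To make the intended use concrete, a rough sketch of how a transport might hook itself up to the library is given here. Only the fields and functions declared in include/linux/spdm.h below are real; the helper names, the allocation and the absent cleanup are illustrative assumptions, not part of this series:

/* Transport glue: perform one request/response exchange, returning the
 * number of response payload bytes received or -errno.
 */
static int example_transport_ex(void *priv, struct spdm_exchange *ex)
{
	return -EOPNOTSUPP;	/* a real transport (e.g. PCI DOE) does the exchange here */
}

static int example_spdm_setup(struct device *dev, struct key *root_keyring)
{
	struct spdm_state *spdm_state;

	spdm_state = kzalloc(sizeof(*spdm_state), GFP_KERNEL);
	if (!spdm_state)
		return -ENOMEM;

	spdm_state->dev = dev;				/* error reporting only */
	spdm_state->root_keyring = root_keyring;	/* root certs, e.g. a _cma keyring */
	spdm_state->transport_priv = NULL;		/* transport specific cookie */
	spdm_state->transport_ex = example_transport_ex;

	/* Runs the GET_VERSION ... CHALLENGE_AUTH flow implemented in lib/spdm.c */
	return spdm_authenticate(spdm_state);
}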
Signed-off-by: Jonathan Cameron --- include/linux/spdm.h | 104 ++++ lib/Kconfig | 3 + lib/Makefile | 2 + lib/spdm.c | 1196 ++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 1305 insertions(+) diff --git a/include/linux/spdm.h b/include/linux/spdm.h new file mode 100644 index 000000000000..bb58b85315d6 --- /dev/null +++ b/include/linux/spdm.h @@ -0,0 +1,104 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * DMTF Security Protocol and Data Model + * + * Copyright (C) 2021 Huawei + * Jonathan Cameron + */ + +#ifndef _SPDM_H_ +#define _SPDM_H_ + +#include + +enum spdm_base_hash_algo { + spdm_base_hash_sha_256 = 0, + spdm_base_hash_sha_384 = 1, + spdm_base_hash_sha_512 = 2, + spdm_base_hash_sha3_256 = 3, + spdm_base_hash_sha3_384 = 4, + spdm_base_hash_sha3_512 = 5, +}; + +enum spdm_meas_hash_algo { + spdm_meas_hash_raw = 0, + spdm_meas_hash_sha_256 = 1, + spdm_meas_hash_sha_384 = 2, + spdm_meas_hash_sha_512 = 3, + spdm_meas_hash_sha3_256 = 4, + spdm_meas_hash_sha3_384 = 5, + spdm_meas_hash_sha3_512 = 6, +}; + +enum spdm_base_asym_algo { + spdm_asym_rsassa_2048 = 0, + spdm_asym_rsapss_2048 = 1, + spdm_asym_rsassa_3072 = 2, + spdm_asym_rsapss_3072 = 3, + spdm_asym_ecdsa_ecc_nist_p256 = 4, + spdm_asym_rsassa_4096 = 5, + spdm_asym_rsapss_4096 = 6, + spdm_asym_ecdsa_ecc_nist_p384 = 7, + spdm_asym_ecdsa_ecc_nist_p521 = 8 +}; + +struct crypto_shash; +struct shash_desc; +struct key; +struct device; + +/* + * The SPDM specification does not actually define this as the header + * but all messages have the same first 4 named fields. + * Note however that the meaning of param1 and param2 is message dependent. + */ +struct spdm_header { + u8 version; + u8 code; /* requestresponsecode */ + u8 param1; + u8 param2; +}; + +struct spdm_exchange { + struct spdm_header *request_pl; + size_t request_pl_sz; + struct spdm_header *response_pl; + size_t response_pl_sz; + u8 code; /* Overwrites the request code */ +}; + +struct spdm_state { + enum spdm_meas_hash_algo measurement_hash_alg; + enum spdm_base_asym_algo base_asym_alg; + enum spdm_base_hash_algo base_hash_alg; + struct key *leaf_key; + size_t h; /* base hash length - H in specification */ + size_t s; /* base asymmetric signature length - S in specification */ + u32 responder_caps; + + /* + * The base hash algorithm is not know until we reach the + * NEGOTIATE_ALGORITHMS response. + * The CHALLENGE_AUTH response handling requires a digest of all + * prior messages in the sequence. As such cache the messages in @a until + * they can be used as the first input to the hash function. + * Once the hash is known it can be updated as each additional req/rsp + * becomes available. 
+ */ + void *a; + size_t a_length; + struct crypto_shash *shash; + struct shash_desc *desc; + + struct key *root_keyring; /* Keyring against which to check the root */ + struct key *keyring; /* used to store certs from device */ + + /* Transport specific */ + struct device *dev; /* For error reporting only */ + void *transport_priv; + int (*transport_ex)(void *priv, struct spdm_exchange *spdm_ex); +}; + +int spdm_authenticate(struct spdm_state *spdm_state); +int spdm_measurements_get(struct spdm_state *spdm_state); +#endif diff --git a/lib/Kconfig b/lib/Kconfig index c80fde816a7e..aa14b0c423fa 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -729,3 +729,6 @@ config PLDMFW config ASN1_ENCODER tristate + +config SPDM + tristate diff --git a/lib/Makefile b/lib/Makefile index 300f569c626b..3693978ed18d 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -289,6 +289,8 @@ obj-$(CONFIG_PERCPU_TEST) += percpu_test.o obj-$(CONFIG_ASN1) += asn1_decoder.o obj-$(CONFIG_ASN1_ENCODER) += asn1_encoder.o +obj-$(CONFIG_SPDM) += spdm.o + obj-$(CONFIG_FONT_SUPPORT) += fonts/ hostprogs := gen_crc32table diff --git a/lib/spdm.c b/lib/spdm.c new file mode 100644 index 000000000000..3ce2341647f8 --- /dev/null +++ b/lib/spdm.c @@ -0,0 +1,1196 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMTF Security Protocol and Data Model + * + * Copyright (C) 2021 Huawei + * Jonathan Cameron + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include + +/* + * Todo + * - Secure channel setup. + * - Multiple slot support. + * - Measurement support (over secure channel or within CHALLENGE_AUTH. + * - Support more core algorithms (not CMA does not require them, but may use + * them if present. + * - Extended algorithm, support. + */ +/* + * Discussions points + * 1. Worth adding an SPDM layer around a transport layer? + * 2. Pad all SPDM request response to DWORD so we don't have to bounce for CMA /DOE. + * 3. Currently only implement one flow - so ignore whether we have certs cached. + * Could implement the alternative flows, but at cost of complexity. + * 4. Keyring management. How to ensure we can easily check root key against + * keys in appropriate keyring, but ensure we can't cross check keys + * from different devices. Current solution of one keyring per SPDM has issues + * around cleanup when an error occurs. + * 5. Several corners of the SPDM specification were not totally clear to me, so + * where unsure I verified options against openSPDM. + * Detailed stuff + * - SPDM spec doesn't define a header, but all requests and responses have same + * first 4 bytes. Either could define that as a header, or givem the better names + * to reflect how param1 and param2 are actually used. 
+ */ + +static int spdm_append_buffer_a(struct spdm_state *spdm_state, void *data, + size_t data_size, bool reset) +{ + u8 *a_new; + + if (reset) { + kfree(spdm_state->a); + spdm_state->a = NULL; + spdm_state->a_length = 0; + } + + a_new = krealloc(spdm_state->a, spdm_state->a_length + data_size, GFP_KERNEL); + if (!a_new) + return -ENOMEM; + + spdm_state->a = a_new; + memcpy(spdm_state->a + spdm_state->a_length, data, data_size); + spdm_state->a_length += data_size; + + return 0; +} + +#define SPDM_REQ 0x80 +#define SPDM_GET_VERSION 0x04 + +struct spdm_get_version_req { + u8 version; + u8 code; + u8 param1; + u8 param2; +}; + +struct spdm_get_version_rsp { + u8 version; + u8 code; + u8 param1; + u8 param2; + + u8 reserved; + u8 version_number_entry_count; + __le16 version_number_entries[]; +}; + +#define SPDM_GET_CAPABILITIES 0x61 + +/* For this exchange the request and response messages have the same form. */ +struct spdm_get_capabilities_reqrsp { + u8 version; + u8 code; + u8 param1; + u8 param2; + + u8 reserved; + u8 ctexponent; + u16 reserved2; + +/* CACHE_CAP is only valid for response */ +#define SPDM_GET_CAP_FLAG_CACHE_CAP BIT(0) +#define SPDM_GET_CAP_FLAG_CERT_CAP BIT(1) +#define SPDM_GET_CAP_FLAG_CHAL_CAP BIT(2) + +/* MEAS_CAP values other than 0 only for response */ +#define SPDM_GET_CAP_FLAG_MEAS_CAP_MSK GENMASK(4, 3) +#define SPDM_GET_CAP_FLAG_MEAS_CAP_NO 0 +#define SPDM_GET_CAP_FLAG_MEAS_CAP_MEAS 1 +#define SPDM_GET_CAP_FLAG_MEAS_CAP_MEAS_SIG 2 + +/* MEAS_FRESH_CAP == 0 for request */ +#define SPDM_GET_CAP_FLAG_MEAS_FRESH_CAP BIT(5) +#define SPDM_GET_CAP_FLAG_ENCRYPT_CAP BIT(6) +#define SPDM_GET_CAP_FLAG_MAC_CAP BIT(7) +#define SPDM_GET_CAP_FLAG_MUT_AUTH_CAP BIT(8) +#define SPDM_GET_CAP_FLAG_KEY_EX_CAP BIT(9) + +#define SPDM_GET_CAP_FLAG_PSK_CAP_MSK GENMASK(11, 10) +#define SPDM_GET_CAP_FLAG_PSK_CAP_NO_PRESHARE 0 +#define SPDM_GET_CAP_FLAG_PSK_CAP_PRESHARE 1 + +#define SPDM_GET_CAP_FLAG_ENCAP_CAP BIT(12) +#define SPDM_GET_CAP_FLAG_HBEAT_CAP BIT(13) +#define SPDM_GET_CAP_FLAG_KEY_UPD_CAP BIT(14) +#define SPDM_GET_CAP_FLAG_HANDSHAKE_ITC_CAP BIT(15) +#define SPDM_GET_CAP_FLAG_PUB_KEY_ID_CAP BIT(16) + __le32 flags; +}; + +#define SPDM_NEGOTIATE_ALGS 0x63 + +struct spdm_negotiate_algs_req { + u8 version; + u8 code; + u8 param1; /* Numer of algorithm structure tables */ + u8 param2; + + __le16 length; /* <= 128 bytes */ + u8 measurement_specification; /* Only one bit set, BIT 0 == DMTF */ + u8 reserved; + __le32 base_asym_algo; + + /* Bit mask, entries form spdm_base_hash_algo */ + __le32 base_hash_algo; + + u8 reserved2[12]; + u8 ext_asm_count; + u8 ext_hash_count; + u8 reserved3[2]; + + /* + * Additional fields at end of this structure + * - ExtAsym 4 * ext_asm_count + * - ExtHash 4 * ext_hash_count + * - ReqAlgStruct size * param1 + */ +}; + +struct spdm_negotiate_algs_rsp { + u8 version; + u8 code; + u8 param1; /* Numer of algorithm structure tables */ + u8 param2; + + __le16 length; /* <= 128 bytes */ + u8 measurement_specification; /* Only one bit set, BIT 0 == DMTF */ + u8 reserved; + + /* Exactly one bit must be set if GET_MEASUREMENTS is supported */ + __le32 measurement_hash_algo; + /* At most one bit set to reflect negotiated alg */ + __le32 base_asym_sel; + __le32 base_hash_sel; + u8 reserved2[12]; + u8 ext_asym_sel_count; /* Either 0 or 1 */ + u8 ext_hash_sel_count; /* Either 0 or 1 */ + u8 reserved3[2]; + + /* + * Additional fields at end of this structure + * - ExtAsym 4 * ext_asm_count + * - ExtHash 4 * ext_hash_count + * - ReqAlgStruct size * param1 + */ +}; + 
+#define SPDM_REQ_ALG_STRUCT_TYPE_DHE 0x2 +#define SPDM_DHE_ALGO_FFDHE_2048 BIT(0) +#define SPDM_DHE_ALGO_FFDHE_3072 BIT(1) +#define SPDM_DHE_ALGO_FFDHE_4096 BIT(2) +#define SPDM_DHE_ALGO_SECP_256R1 BIT(3) +#define SPDM_DHE_ALGO_SECP_384R1 BIT(4) +#define SPDM_DHE_ALGO_SECP_521R1 BIT(5) + +#define SPDM_REQ_ALG_STRUCT_TYPE_AEAD_CIPHER_SUITE 0x3 +#define SPDM_AEAD_ALGO_AES_128_GCM BIT(0) +#define SPDM_AEAD_ALGO_AES_256_GCM BIT(1) +#define SPDM_AEAD_ALGO_CHACHA20_POLY1305 BIT(2) +#define SPDM_REQ_ALG_STRUCT_TYPE_REQ_BASE_ASYM_ALG 0x4 +/* As for base_asym_algo above */ + +#define SPDM_REQ_ALG_STRUCT_TYPE_KEY_SCHEDULE 0x5 +#define SPDM_KEY_SCHEDULE_SPDM BIT(0) + +struct spdm_req_alg_struct { + u8 alg_type; + u8 alg_count; /* 0x2K where K is number of alg_external entries */ + /* This field is sized based on alg_count[7:5] - currently always 2 */ + __le16 alg_supported; + __le32 alg_external[]; +}; + +#define SPDM_GET_DIGESTS 0x01 +struct spdm_digest_req { + u8 version; + u8 code; + u8 param1; /* Reserved */ + u8 param2; /* Reserved */ +}; + +struct spdm_cert_chain { + __le16 length; + u8 reserved[2]; + /* + * Additional fields: + * - Hash of the root + * - Certs: ASN.1 Der-encoded X.509 v3 - First cert signed by root or is the root. + */ +}; + +struct spdm_digest_rsp { + u8 version; + u8 code; + u8 param1; /* Reserved */ + u8 param2; /* Slot mask */ + /* Hash of spdm_cert_chain for each slot */ + u8 digests[]; +}; + +#define SPDM_GET_CERTIFICATE 0x02 +struct spdm_certificate_req { + u8 version; + u8 code; + u8 param1; /* Slot number 0..7 */ + u8 param2; /* Reserved */ + __le16 offset; + __le16 length; /* Note 0xFFFF and offset 0 is special value meaning whole chain */ +}; + +struct spdm_certificate_rsp { + u8 version; + u8 code; + u8 param1; /* Slot number 0..7 */ + u8 param2; /* Reserved */ + __le16 portion_length; + __le16 remainder_length; + u8 cert_chain[]; /* Portion Length Long */ +}; + + +#define SPDM_CHALLENGE 0x03 +struct spdm_challenge_req { + u8 version; + u8 code; + u8 param1; /* Slot number 0..7 */ + u8 param2; /* Measurement summary hash type */ + u8 nonce[32]; +}; + +struct spdm_challenge_rsp { + u8 version; + u8 code; + u8 param1; /* response attribute field, slot id from challenge, bit 7 is mutual auth */ + u8 param2; /* slot mask */ + /* Hash length cert chain */ + /* Nonce, 32 bytes */ + /* Measurement Summary Hash - if present */ + /* 2 byte opaque length */ + /* opaque data if length non 0 */ + /* Signature */ +}; + +static size_t spdm_challenge_rsp_signature_offset(struct spdm_state *spdm_state, + struct spdm_challenge_req *req, + struct spdm_challenge_rsp *rsp) +{ + u16 opaque_length; + size_t offset; + + offset = sizeof(*rsp); /* Header offset */ + offset += spdm_state->h; /* CertChain hash */ + offset += 32; /* Nonce */ + + /* Measurement summary hash */ + if (req->param2 && + (spdm_state->responder_caps & SPDM_GET_CAP_FLAG_MAC_CAP)) + offset += spdm_state->h; + /* + * This is almost certainly aligned, but that's not obvious from nearby code + * so play safe. 
+ */ + opaque_length = get_unaligned_le16((u8 *)rsp + offset); + offset += sizeof(__le16); + offset += opaque_length; + + return offset; +} + +#define SPDM_ERROR 0x7f +enum spdm_error_code { + spdm_invalid_request = 0x01, + spdm_invalid_session = 0x02, + spdm_busy = 0x03, + spdm_unexpected_request = 0x04, + spdm_unspecified = 0x05, + spdm_decrypt_error = 0x06, + spdm_unsupported_request = 0x07, + spdm_request_in_flight = 0x08, + spdm_invalid_response_code = 0x09, + spdm_session_limit_exceeded = 0x0a, + spdm_major_version_missmatch = 0x41, + spdm_response_not_ready = 0x42, + spdm_request_resync = 0x43, + spdm_vendor_defined_error = 0xff, +}; + +struct spdm_error_rsp { + u8 version; + u8 code; + u8 param1; /* Error code */ + u8 param2; /* Error data */ + u8 extended_error_data[]; +}; +#define SPDM_ERROR_MIN_SIZE sizeof(struct spdm_error_rsp) + +static void spdm_err(struct device *dev, enum spdm_error_code error_code, + u8 error_data) +{ + switch (error_code) { + case spdm_invalid_request: + dev_err(dev, "Invalid Request\n"); + break; + case spdm_invalid_session: + dev_err(dev, "Invalid Session %#x\n", error_data); + break; + case spdm_busy: + dev_err(dev, "Busy\n"); + break; + case spdm_unexpected_request: + dev_err(dev, "Unexpected request\n"); + break; + case spdm_unspecified: + dev_err(dev, "Unspecified\n"); + break; + case spdm_decrypt_error: + dev_err(dev, "Decrypt Error\n"); + break; + case spdm_unsupported_request: + dev_err(dev, "Unsupported Request %#x\n", error_data); + break; + case spdm_request_in_flight: + dev_err(dev, "Request in flight\n"); + break; + case spdm_invalid_response_code: + dev_err(dev, "Invalid response code\n"); + break; + case spdm_session_limit_exceeded: + dev_err(dev, "Session limit exceeded\n"); + break; + case spdm_major_version_missmatch: + dev_err(dev, "Major version mismatch\n"); + break; + case spdm_response_not_ready: + dev_err(dev, "Response not ready\n"); + break; + case spdm_request_resync: + dev_err(dev, "Request resynchronization\n"); + break; + case spdm_vendor_defined_error: + dev_err(dev, "Vendor defined error\n"); + break; + } +} + +static int __spdm_exchange(struct spdm_state *spdm_state, struct spdm_exchange *ex, u8 version) +{ + int length; + int rc; + + if (ex->request_pl_sz < sizeof(*ex->request_pl) || + ex->response_pl_sz < sizeof(*ex->response_pl)) + return -EINVAL; + + ex->request_pl->version = version; + ex->request_pl->code = SPDM_REQ | ex->code; + + /* Will become an op pointer if we have a second transport */ + rc = spdm_state->transport_ex(spdm_state->transport_priv, ex); + if (rc < 0) + return rc; + + length = rc; + if (length < SPDM_ERROR_MIN_SIZE) + return -EIO; + + if (ex->response_pl->code == SPDM_ERROR) { + spdm_err(spdm_state->dev, ex->response_pl->param1, + ex->response_pl->param2); + return -EIO; + } + + if (ex->response_pl->code != ex->code) { + dev_err(spdm_state->dev, + "Invalid SPDM response received - does not match request code\n"); + return -EIO; + } + + return length; +} + +static int spdm1p0_exchange(struct spdm_state *spdm_state, struct spdm_exchange *ex) +{ + return __spdm_exchange(spdm_state, ex, 0x10); +} + +static int spdm1p1_exchange(struct spdm_state *spdm_state, struct spdm_exchange *ex) +{ + return __spdm_exchange(spdm_state, ex, 0x11); +} + +static int spdm_get_version_1p1(struct spdm_state *spdm_state) +{ + struct spdm_get_version_req req = {}; + struct spdm_get_version_rsp *rsp; + struct spdm_exchange spdm_ex; + ssize_t rc, rsp_sz, length; + int numversions = 2; + +retry: + rsp_sz = struct_size(rsp, 
version_number_entries, numversions); + rsp = kzalloc(rsp_sz, GFP_KERNEL); + + spdm_ex = (struct spdm_exchange) { + .request_pl = (struct spdm_header *)&req, + .request_pl_sz = sizeof(req), + .response_pl = (struct spdm_header *)rsp, + .response_pl_sz = rsp_sz, + .code = SPDM_GET_VERSION, + }; + rc = spdm1p0_exchange(spdm_state, &spdm_ex); + if (rc < 0) + goto err; + + length = rc; + + if (length < struct_size(rsp, version_number_entries, 1)) { + dev_err(spdm_state->dev, "SPDM must support at least one version\n"); + rc = -EIO; + goto err; + } + + /* If we didn't allocate enough space the first time, go around again */ + if (rsp->version_number_entry_count > numversions) { + numversions = rsp->version_number_entry_count; + kfree(rsp); + goto retry; + } + + /* + * Cache the request and response to use later to compute a message digest, + * as the algorithm to use is not yet known. + * + * Response may be padded, so we need to compute the length rather than relying + * on the length returned by the transport. + */ + length = struct_size(rsp, version_number_entries, rsp->version_number_entry_count); + rc = spdm_append_buffer_a(spdm_state, &req, sizeof(req), true); + if (rc) + goto err; + + rc = spdm_append_buffer_a(spdm_state, rsp, length, false); + +err: + kfree(rsp); + + return rc; +} + + +static int spdm_negotiate_caps(struct spdm_state *spdm_state, u32 req_caps, + u32 *rsp_caps) +{ + struct spdm_get_capabilities_reqrsp req = { + .ctexponent = 2, /* FIXME: Chose sensible value */ + .flags = cpu_to_le32(req_caps), + }; + struct spdm_get_capabilities_reqrsp rsp; + struct spdm_exchange spdm_ex = { + .request_pl = (struct spdm_header *)&req, + .request_pl_sz = sizeof(req), + .response_pl = (struct spdm_header *)&rsp, + .response_pl_sz = sizeof(rsp), + .code = SPDM_GET_CAPABILITIES, + }; + int rc, length; + + rc = spdm1p1_exchange(spdm_state, &spdm_ex); + if (rc < 0) + return rc; + length = rc; + + if (length < sizeof(rsp)) { + dev_err(spdm_state->dev, "NEGOTIATE_CAPS response short\n"); + return -EIO; + } + /* Cache capability as can affect data layout for other messages */ + spdm_state->responder_caps = le32_to_cpu(rsp.flags); + + if (rsp_caps) + *rsp_caps = spdm_state->responder_caps; + + rc = spdm_append_buffer_a(spdm_state, &req, sizeof(req), false); + if (rc) + return rc; + + return spdm_append_buffer_a(spdm_state, &rsp, sizeof(rsp), false); +} + +static int spdm_start_digest(struct spdm_state *spdm_state, + void *req, size_t req_sz, void *rsp, size_t rsp_sz) +{ + int rc; + + /* Build first part of challenge hash */ + switch (spdm_state->base_hash_alg) { + case spdm_base_hash_sha_384: + spdm_state->shash = crypto_alloc_shash("sha384", 0, CRYPTO_ALG_ASYNC); + break; + case spdm_base_hash_sha_256: + spdm_state->shash = crypto_alloc_shash("sha256", 0, CRYPTO_ALG_ASYNC); + break; + default: + /* Given device must support one of the above, lets stick to them for now */ + return -EINVAL; + } + + if (!spdm_state->shash) + return -ENOMEM; + + /* Used frequently to compute offsets, so cache H */ + spdm_state->h = crypto_shash_digestsize(spdm_state->shash); + + spdm_state->desc = kzalloc(struct_size(spdm_state->desc, __ctx, + crypto_shash_descsize(spdm_state->shash)), + GFP_KERNEL); + if (!spdm_state->desc) { + rc = -ENOMEM; + goto err_free_shash; + } + spdm_state->desc->tfm = spdm_state->shash; + + rc = crypto_shash_init(spdm_state->desc); + if (rc) + goto err_free_desc; + + rc = crypto_shash_update(spdm_state->desc, spdm_state->a, + spdm_state->a_length); + if (rc) + goto err_free_desc; + + rc = 
crypto_shash_update(spdm_state->desc, (u8 *)req, req_sz); + + if (rc) + goto err_free_desc; + + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, rsp_sz); + if (rc) + goto err_free_desc; + + kfree(spdm_state->a); + spdm_state->a = NULL; + spdm_state->a_length = 0; + + return 0; + +err_free_desc: + kfree(spdm_state->desc); +err_free_shash: + crypto_free_shash(spdm_state->shash); + return rc; +} + +static int spdm_negotiate_algs(struct spdm_state *spdm_state) +{ + struct spdm_req_alg_struct *req_alg_struct; + /* + * Current support is focused on enabling PCI/CMA + * + * CMA requires that device must support + * One or more of: + * - RSASSA_3072 + * - ECDSA_ECC_NIST_P256 + * - ECDSA_ECC_NIST_P384 + * One or more of: + * - SHA_256 + * - SHA_384 + */ + struct spdm_negotiate_algs_req *req; + struct spdm_negotiate_algs_rsp *rsp; + struct spdm_exchange spdm_ex; + /* + * There are several variable length elements at the end of these structures. + * Allow for 0 Ext Asym, 0 Ext Hash, 4 ReqAlgStructs + */ + size_t req_sz = sizeof(*req) + 16; + /* + * Response length may be less than this if not all algorithm types supported, + * resulting in fewer AlgStructure fields than in the request. + */ + size_t rsp_sz = sizeof(*rsp) + 16; + size_t length; + int rc; + + req = kzalloc(req_sz, GFP_KERNEL); + if (!req) + return -ENOMEM; + + req->param1 = 4; + req->code = SPDM_NEGOTIATE_ALGS | SPDM_REQ; + req->length = cpu_to_le16(req_sz); + req->measurement_specification = BIT(0); + req->base_asym_algo = cpu_to_le16(BIT(spdm_asym_rsassa_3072) | + BIT(spdm_asym_ecdsa_ecc_nist_p256) | + BIT(spdm_asym_ecdsa_ecc_nist_p384)); + req->base_hash_algo = cpu_to_le16(BIT(spdm_base_hash_sha_256) | + BIT(spdm_base_hash_sha_384)); + + req->ext_asm_count = 0; + req->ext_hash_count = 0; + req_alg_struct = (struct spdm_req_alg_struct *)(req + 1); + /* + * For CMA only we don't need to support DHE, AEAD or Key Schedule - but the + * responder may reply with them anyway. + * TODO: Identify minimum set needed. + */ + req_alg_struct[0] = (struct spdm_req_alg_struct) { + .alg_type = SPDM_REQ_ALG_STRUCT_TYPE_DHE, + .alg_count = 0x20, + .alg_supported = cpu_to_le16(SPDM_DHE_ALGO_FFDHE_2048 | + SPDM_DHE_ALGO_FFDHE_3072 | + SPDM_DHE_ALGO_SECP_256R1 | + SPDM_DHE_ALGO_SECP_384R1) + }; + req_alg_struct[1] = (struct spdm_req_alg_struct) { + .alg_type = SPDM_REQ_ALG_STRUCT_TYPE_AEAD_CIPHER_SUITE, + .alg_count = 0x20, + .alg_supported = cpu_to_le16(SPDM_AEAD_ALGO_AES_256_GCM | + SPDM_AEAD_ALGO_CHACHA20_POLY1305) + }; + req_alg_struct[2] = (struct spdm_req_alg_struct) { + .alg_type = SPDM_REQ_ALG_STRUCT_TYPE_REQ_BASE_ASYM_ALG, + .alg_count = 0x20, + .alg_supported = cpu_to_le16(BIT(spdm_asym_rsassa_3072) | + BIT(spdm_asym_ecdsa_ecc_nist_p256) | + BIT(spdm_asym_ecdsa_ecc_nist_p384)), + }; + req_alg_struct[3] = (struct spdm_req_alg_struct) { + .alg_type = SPDM_REQ_ALG_STRUCT_TYPE_KEY_SCHEDULE, + .alg_count = 0x20, + .alg_supported = cpu_to_le16(SPDM_KEY_SCHEDULE_SPDM) + }; + + rsp = kzalloc(rsp_sz, GFP_KERNEL); + if (!rsp) { + rc = -ENOMEM; + goto err_free_req; + } + + spdm_ex = (struct spdm_exchange) { + .request_pl = (struct spdm_header *)req, + .request_pl_sz = req_sz, + .response_pl = (struct spdm_header *)rsp, + .response_pl_sz = rsp_sz, + .code = SPDM_NEGOTIATE_ALGS, + }; + + rc = spdm1p1_exchange(spdm_state, &spdm_ex); + if (rc < 0) + goto err_free_rsp; + length = rc; + + //TODO: Check this cannot return short. 
+ if (length < rsp_sz) { + dev_err(spdm_state->dev, "Response too short\n"); + rc = -EIO; + goto err_free_rsp; + } + + spdm_state->measurement_hash_alg = __ffs(le16_to_cpu(rsp->measurement_hash_algo)); + spdm_state->base_asym_alg = __ffs(le16_to_cpu(rsp->base_asym_sel)); + spdm_state->base_hash_alg = __ffs(le16_to_cpu(rsp->base_hash_sel)); + + switch (spdm_state->base_asym_alg) { + case spdm_asym_rsassa_3072: + spdm_state->s = 384; + break; + case spdm_asym_ecdsa_ecc_nist_p256: + spdm_state->s = 64; + break; + case spdm_asym_ecdsa_ecc_nist_p384: + spdm_state->s = 96; + break; + default: + dev_err(spdm_state->dev, "Unknown async base algorithm\n"); + rc = -EINVAL; + goto err_free_rsp; + } + + rsp_sz = sizeof(*rsp) + rsp->param1 * sizeof(struct spdm_req_alg_struct); + rc = spdm_start_digest(spdm_state, req, req_sz, rsp, rsp_sz); + +err_free_rsp: + kfree(rsp); +err_free_req: + kfree(req); + + return rc; +} + +static int spdm_get_digests(struct spdm_state *spdm_state) +{ + struct spdm_digest_req req = {}; + struct spdm_digest_rsp *rsp; + struct spdm_exchange spdm_ex; + unsigned long slot_bm; + int rc; + /* Assume all slots may be present */ + size_t rsp_sz; + + rsp_sz = struct_size(rsp, digests, spdm_state->h * 8); + rsp = kzalloc(rsp_sz, GFP_KERNEL); + if (!rsp) + return -ENOMEM; + + spdm_ex = (struct spdm_exchange) { + .request_pl = (struct spdm_header *)&req, + .request_pl_sz = sizeof(req), + .response_pl = (struct spdm_header *)rsp, + .response_pl_sz = rsp_sz, + .code = SPDM_GET_DIGESTS, + }; + + rc = spdm1p1_exchange(spdm_state, &spdm_ex); + if (rc < 0) + return rc; + + rc = crypto_shash_update(spdm_state->desc, (u8 *)&req, sizeof(req)); + if (rc) + goto err_free_rsp; + + slot_bm = rsp->param2; + rsp_sz = struct_size(rsp, digests, + spdm_state->h * + bitmap_weight(&slot_bm, 8 * sizeof(rsp->param2))); + + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, rsp_sz); + +err_free_rsp: + kfree(rsp); + + return rc; +} + +/* Used to give a unique name for per device keychains */ +static DEFINE_IDA(spdm_ida); + +static int spdm_get_certificate(struct spdm_state *spdm_state) +{ + //TODO: Test this. + /* This should ensure the limitation is rarely at the requester */ + u16 bufsize = 0x8000; + struct spdm_certificate_req req = { + .param1 = 0, /* Slot 0 */ + }; + struct spdm_certificate_rsp *rsp; + size_t rsp_sz; + struct spdm_exchange spdm_ex; + u16 remainder_length = bufsize; + u16 offset = 0; + char *keyring_name; + int keyring_id; + u8 *certs = NULL; + u16 certs_length = 0; + u16 next_cert; + int rc; + + rsp_sz = struct_size(rsp, cert_chain, bufsize); + rsp = kzalloc(rsp_sz, GFP_KERNEL); + if (!rsp) + return -ENOMEM; + + spdm_ex = (struct spdm_exchange) { + .request_pl = (struct spdm_header *)&req, + .request_pl_sz = sizeof(req), + .response_pl = (struct spdm_header *)rsp, + .response_pl_sz = rsp_sz, + .code = SPDM_GET_CERTIFICATE, + }; + + while (remainder_length > 0) { + u8 *newcerts; + + req.length = min(remainder_length, bufsize); + req.offset = offset; + + rc = spdm1p1_exchange(spdm_state, &spdm_ex); + if (rc < 0) + goto err_free_certs; + + /* + * Taking the hash of each message was established by looking at + * what openSPDM does, rather than it being clear from the SPDM + * specification. 
+ */ + rc = crypto_shash_update(spdm_state->desc, (u8 *)&req, sizeof(req)); + if (rc) + goto err_free_certs; + + /* Care needed - each message might not be full length due to padding */ + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, + sizeof(rsp) + rsp->portion_length); + if (rc) + goto err_free_certs; + + certs_length += rsp->portion_length; + newcerts = krealloc(certs, certs_length, GFP_KERNEL); + if (!newcerts) { + rc = -ENOMEM; + goto err_free_certs; + } + certs = newcerts; + memcpy(certs + offset, rsp->cert_chain, rsp->portion_length); + offset += rsp->portion_length; + remainder_length = rsp->remainder_length; + } + + keyring_id = ida_alloc(&spdm_ida, GFP_KERNEL); + if (keyring_id < 0) { + rc = keyring_id; + goto err_free_certs; + + } + + keyring_name = kasprintf(GFP_KERNEL, "_spdm%02d", keyring_id); + if (!keyring_name) { + rc = -ENOMEM; + goto err_free_ida; + } + + /* + * Create a spdm instance specific keyring to avoid mixing certs, + * Not a child of _cma keyring, because the search below should + * not find a self signed cert in here. + * + * Not sure how to release a keyring, so currently if this fails we leak. + * That might be fine but an ida could get reused. + */ + spdm_state->keyring = keyring_alloc(keyring_name, + KUIDT_INIT(0), KGIDT_INIT(0), + current_cred(), + (KEY_POS_ALL & ~KEY_POS_SETATTR) | + KEY_USR_VIEW | KEY_USR_READ, + KEY_ALLOC_NOT_IN_QUOTA | + KEY_ALLOC_SET_KEEP, + NULL, NULL); + kfree(keyring_name); + if (IS_ERR(spdm_state->keyring)) { + dev_err(spdm_state->dev, + "Failed to allocate per spdm keyring\n"); + rc = PTR_ERR(spdm_state->keyring); + goto err_free_ida; + } + + next_cert = sizeof(struct spdm_cert_chain) + spdm_state->h; + + /* + * Store the certificate chain on the per SPDM instance keyring. + * Allow for up to 3 bytes padding as transport sends multiples of 4 bytes. + */ + while (next_cert < offset) { + struct key *key; + key_ref_t key2; + + key2 = key_create_or_update(make_key_ref(spdm_state->keyring, 1), + "asymmetric", NULL, + certs + next_cert, offset - next_cert, + (KEY_POS_ALL & ~KEY_POS_SETATTR) | + KEY_USR_VIEW | KEY_USR_READ, + KEY_ALLOC_NOT_IN_QUOTA); + + if (IS_ERR(key2)) { + /* FIXME: Any additional cleanup to do here? */ + rc = PTR_ERR(key2); + goto err_free_ida; + } + + if (!spdm_state->leaf_key) { + /* First key in chain, so check against keys on _cma keyring */ + struct public_key_signature *sig = + key_ref_to_ptr(key2)->payload.data[asym_auth]; + + key = find_asymmetric_key(spdm_state->root_keyring, sig->auth_ids[0], + sig->auth_ids[1], false); + if (IS_ERR(key)) { + dev_err(spdm_state->dev, + "Unable to retrieve signing certificate from _cma keyring\n"); + rc = PTR_ERR(key); + goto err_free_ida; + } + + rc = verify_signature(key, sig); + if (rc) { + dev_err(spdm_state->dev, + "Unable to check SPDM cert against _cma keyring\n"); + goto err_free_ida; + } + + spdm_state->leaf_key = key_ref_to_ptr(key2); + } else { + /* Not the first key in chain, so check it against previous one */ + struct public_key_signature *sig = + key_ref_to_ptr(key2)->payload.data[asym_auth]; + + rc = verify_signature(spdm_state->leaf_key, sig); + if (rc) { + dev_err(spdm_state->dev, + "Unable to verify SPDM cert against previous cert in chain\n"); + goto err_free_ida; + } + spdm_state->leaf_key = key_ref_to_ptr(key2); + } + /* + * Horrible but need to pull this directly from the ASN1 stream as the cert + * chain is a concatentation of multiple cerificates. 
+ */ + next_cert += get_unaligned_be16(certs + next_cert + 2) + 4; + } + + kfree(certs); + kfree(rsp); + + return 0; + +err_free_ida: + ida_free(&spdm_ida, keyring_id); +err_free_certs: + kfree(certs); + kfree(rsp); + + return rc; +} + + +static int spdm_verify_signature(struct spdm_state *spdm_state, u8 *sig_ptr, + u8 *digest, unsigned int digest_size) +{ + const struct asymmetric_key_ids *ids; + struct public_key_signature sig = {}; + /* Large enough for an ASN1 enocding of supported ECC signatures */ + unsigned char buffer2[128] = {}; + int rc; + + /* + * The ecdsa signatures are raw concatentation of the two values. + * In order to use verify_signature we need to reformat them into ASN1. + */ + switch (spdm_state->base_asym_alg) { + case spdm_asym_ecdsa_ecc_nist_p256: + case spdm_asym_ecdsa_ecc_nist_p384: + { + unsigned char buffer[128] = {}; + unsigned char *p = buffer; + unsigned char *p2; + + //TODO: test the ASN1 function rather more extensively. + /* First pack the two large integer values */ + p = asn1_encode_integer_large_positive(p, buffer + sizeof(buffer), + ASN1_INT, sig_ptr, + spdm_state->s / 2); + p = asn1_encode_integer_large_positive(p, buffer + sizeof(buffer), + ASN1_INT, + sig_ptr + spdm_state->s / 2, + spdm_state->s / 2); + + /* In turn pack those two large integer values into a sequence */ + p2 = asn1_encode_sequence(buffer2, buffer2 + sizeof(buffer2), + buffer, p - buffer); + + sig.s = buffer2; + sig.s_size = p2 - buffer2; + sig.encoding = "raw"; + break; + } + + case spdm_asym_rsassa_3072: + sig.s = sig_ptr; + sig.s_size = spdm_state->s; + sig.encoding = "pkcs1"; + break; + default: + dev_err(spdm_state->dev, + "Signature algorithm not yet supported\n"); + return -EINVAL; + } + sig.digest = digest; + sig.digest_size = digest_size; + ids = asymmetric_key_ids(spdm_state->leaf_key); + sig.auth_ids[0] = ids->id[0]; + sig.auth_ids[1] = ids->id[1]; + + switch (spdm_state->base_hash_alg) { + case spdm_base_hash_sha_384: + sig.hash_algo = "sha384"; + break; + case spdm_base_hash_sha_256: + sig.hash_algo = "sha256"; + break; + default: + return -EINVAL; + } + + rc = verify_signature(spdm_state->leaf_key, &sig); + if (rc) { + dev_err(spdm_state->dev, + "Failed to verify challenge_auth signature %d\n", rc); + return rc; + } + + return 0; +} + +static int spdm_challenge(struct spdm_state *spdm_state) +{ + struct spdm_challenge_req req = { + .param1 = 0, /* slot 0 for now */ + .param2 = 0, /* no measurement summary hash */ + }; + struct spdm_challenge_rsp *rsp; + struct spdm_exchange spdm_ex; + size_t sig_offset, rsp_max_size; + int length, rc; + u8 *digest; + + /* + * The response length is up to: + * 4 byte header + * H byte CertChainHash + * 32 byte nonce + * (H byte Measurement Summary Hash - not currently requested) + * 2 byte Opaque Length + * <= 1024 bytes Opaque Data + * S byte signature + */ + rsp_max_size = 4 + spdm_state->h + 32 + 2 + 1024 + spdm_state->s; + rsp = kzalloc(rsp_max_size, GFP_KERNEL); + if (!rsp) + return -ENOMEM; + + get_random_bytes(&req.nonce, sizeof(req.nonce)); + + spdm_ex = (struct spdm_exchange) { + .request_pl = (struct spdm_header *)&req, + .request_pl_sz = sizeof(req), + .response_pl = (struct spdm_header *)rsp, + .response_pl_sz = rsp_max_size, + .code = SPDM_CHALLENGE, + }; + + rc = spdm1p1_exchange(spdm_state, &spdm_ex); + if (rc < 0) + goto err_free_rsp; + length = rc; + + /* Last step of building the digest */ + rc = crypto_shash_update(spdm_state->desc, (u8 *)&req, sizeof(req)); + if (rc) + goto err_free_rsp; + + /* The hash is complete + 
signature received; verify against leaf key */ + sig_offset = spdm_challenge_rsp_signature_offset(spdm_state, &req, rsp); + if (sig_offset >= length) { + rc = -EIO; + goto err_free_rsp; + } + + rc = crypto_shash_update(spdm_state->desc, (u8 *)rsp, sig_offset); + if (rc) + goto err_free_rsp; + + digest = kmalloc(spdm_state->h, GFP_KERNEL); + if (!digest) { + rc = -ENOMEM; + goto err_free_rsp; + } + + crypto_shash_final(spdm_state->desc, digest); + + rc = spdm_verify_signature(spdm_state, (u8 *)rsp + sig_offset, digest, + spdm_state->h); + if (rc) { + dev_err(spdm_state->dev, "Failed to verify SPDM challenge auth signature\n"); + goto err_free_digest; + } + + kfree(spdm_state->desc); + + /* Clear to give a simple way to detect out of order */ + spdm_state->desc = NULL; + +err_free_digest: + kfree(digest); + +err_free_rsp: + kfree(rsp); + return rc; +} + +int spdm_authenticate(struct spdm_state *spdm_state) +{ + u32 req_caps, rsp_caps; + int rc; + + rc = spdm_get_version_1p1(spdm_state); + if (rc) + return rc; + + /* TODO: work out if a subset of these is fine for CMA */ + req_caps = SPDM_GET_CAP_FLAG_CERT_CAP | + SPDM_GET_CAP_FLAG_CHAL_CAP | + SPDM_GET_CAP_FLAG_ENCRYPT_CAP | + SPDM_GET_CAP_FLAG_MAC_CAP | + SPDM_GET_CAP_FLAG_MUT_AUTH_CAP | + SPDM_GET_CAP_FLAG_KEY_EX_CAP | + FIELD_PREP(SPDM_GET_CAP_FLAG_PSK_CAP_MSK, SPDM_GET_CAP_FLAG_PSK_CAP_PRESHARE) | + SPDM_GET_CAP_FLAG_ENCAP_CAP | + SPDM_GET_CAP_FLAG_HBEAT_CAP | + SPDM_GET_CAP_FLAG_KEY_UPD_CAP | + SPDM_GET_CAP_FLAG_HANDSHAKE_ITC_CAP; + + rc = spdm_negotiate_caps(spdm_state, req_caps, &rsp_caps); + if (rc) + return rc; + + rc = spdm_negotiate_algs(spdm_state); + if (rc) + return rc; + + /* At this point we know the hash so can start calculating it as we go */ + rc = spdm_get_digests(spdm_state); + if (rc) + goto err_free_hash; + + rc = spdm_get_certificate(spdm_state); + if (rc) + goto err_free_hash; + + rc = spdm_challenge(spdm_state); + if (rc) + goto err_free_hash; + + /* + * If we get to here, we have successfully verified the device is one we are happy + * with using. 
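+	 * Note that only certificate based challenge / authentication has been
+	 * performed at this point; measurement retrieval and secure session
+	 * establishment are not yet implemented in this library.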
+ */ + return 0; + +err_free_hash: + kfree(spdm_state->desc); + crypto_free_shash(spdm_state->shash); + + return rc; +} +EXPORT_SYMBOL_GPL(spdm_authenticate); + +MODULE_LICENSE("GPL v2"); From patchwork Thu Mar 3 13:59:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767529 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A257DC433EF for ; Thu, 3 Mar 2022 14:05:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230456AbiCCOGW (ORCPT ); Thu, 3 Mar 2022 09:06:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39010 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230351AbiCCOGU (ORCPT ); Thu, 3 Mar 2022 09:06:20 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0FF6038BF1; Thu, 3 Mar 2022 06:05:35 -0800 (PST) Received: from fraeml702-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xm24fGFz67xg5; Thu, 3 Mar 2022 22:04:18 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml702-chm.china.huawei.com (10.206.15.51) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2308.21; Thu, 3 Mar 2022 15:05:33 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:05:32 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 13/14] PCI/CMA: Initial support for Component Measurement and Authentication ECN Date: Thu, 3 Mar 2022 13:59:04 +0000 Message-ID: <20220303135905.10420-14-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org This is very much a PoC at this stage. Currently the SPDM library only provides a single function to allow a challenge / authentication of the PCI EP. SPDM exchanges must occur in one of a small set of valid sequences, over which the message digest used in authentication is built up. Placing that complexity in the SPDM library seems like a good way to enforce that logic without having to do it for each transport. Signed-off-by: Jonathan Cameron --- drivers/pci/Kconfig | 9 ++++ drivers/pci/Makefile | 2 + drivers/pci/cma.c | 105 ++++++++++++++++++++++++++++++++++++++++ include/linux/pci-cma.h | 19 ++++++++ 4 files changed, 135 insertions(+) diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig index 8b9b3341b553..3a6c4affe81f 100644 --- a/drivers/pci/Kconfig +++ b/drivers/pci/Kconfig @@ -128,6 +128,15 @@ config PCI_DOE_DRIVER number of different protocols. DOE is defined in the Data Object Exchange ECN to the PCIe r5.0 spec.
+config PCI_CMA + tristate "PCI Component Measurement and Authentication" + select PCI_DOE + select ASN1_ENCODER + select SPDM + help + This enables library support for the PCI Component Measurement and + Authentication ECN. This uses DMTF SPDM 1.1 + config PCI_ATS bool diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile index 0a15f36e97f4..991d4186a01b 100644 --- a/drivers/pci/Makefile +++ b/drivers/pci/Makefile @@ -30,9 +30,11 @@ obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o obj-$(CONFIG_PCI_ECAM) += ecam.o obj-$(CONFIG_PCI_P2PDMA) += p2pdma.o obj-$(CONFIG_PCI_DOE_DRIVER) += pci-doe.o +obj-$(CONFIG_PCI_CMA) += pci-cma.o obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o pci-doe-y := doe.o +pci-cma-y := cma.o # Endpoint library must be initialized before its users obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ diff --git a/drivers/pci/cma.c b/drivers/pci/cma.c new file mode 100644 index 000000000000..7a14a3365300 --- /dev/null +++ b/drivers/pci/cma.c @@ -0,0 +1,105 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Component Measurement and Authentication was added as an ECN to the + * PCIe r5.0 spec. + * + * Copyright (C) 2021 Huawei + * Jonathan Cameron + */ + +#include +#include +#include +#include +#include + +#define PCI_DOE_PROTOCOL_CMA 1 +/* Keyring that userspace can poke certs into */ +static struct key *cma_keyring; + +static int cma_spdm_ex(void *priv, struct spdm_exchange *spdm_ex) +{ + size_t request_padded_sz, response_padded_sz; + struct pci_doe_exchange ex = { + .prot = { + .vid = PCI_VENDOR_ID_PCI_SIG, + .type = PCI_DOE_PROTOCOL_CMA, + }, + }; + struct pci_doe_dev *doe_dev = priv; + int rc; + + /* DOE requires that response and request are padded to a multiple of 4 bytes */ + request_padded_sz = ALIGN(spdm_ex->request_pl_sz, sizeof(u32)); + if (request_padded_sz != spdm_ex->request_pl_sz) { + ex.request_pl = kzalloc(request_padded_sz, GFP_KERNEL); + if (!ex.request_pl) + return -ENOMEM; + memcpy(ex.request_pl, spdm_ex->request_pl, spdm_ex->request_pl_sz); + ex.request_pl_sz = request_padded_sz; + } else { + ex.request_pl = (u32 *)spdm_ex->request_pl; + ex.request_pl_sz = spdm_ex->request_pl_sz; + } + + response_padded_sz = ALIGN(spdm_ex->response_pl_sz, sizeof(u32)); + if (response_padded_sz != spdm_ex->response_pl_sz) { + ex.response_pl = kzalloc(response_padded_sz, GFP_KERNEL); + if (!ex.response_pl) { + rc = -ENOMEM; + goto err_free_req; + } + ex.response_pl_sz = response_padded_sz; + } else { + ex.response_pl = (u32 *)spdm_ex->response_pl; + ex.response_pl_sz = spdm_ex->response_pl_sz; + } + + rc = pci_doe_exchange_sync(doe_dev, &ex); + if (rc < 0) + goto err_free_rsp; + + if (response_padded_sz != spdm_ex->response_pl_sz) + memcpy(spdm_ex->response_pl, ex.response_pl, spdm_ex->response_pl_sz); + +err_free_rsp: + if (response_padded_sz != spdm_ex->response_pl_sz) + kfree(ex.response_pl); +err_free_req: + if (request_padded_sz != spdm_ex->request_pl_sz) + kfree(ex.request_pl); + + return rc; +} + +void pci_cma_init(struct pci_doe_dev *doe_dev, struct spdm_state *spdm_state) +{ + memset(spdm_state, 0, sizeof(*spdm_state)); + spdm_state->transport_ex = cma_spdm_ex; + spdm_state->transport_priv = doe_dev; + spdm_state->dev = &doe_dev->pdev->dev; + spdm_state->root_keyring = cma_keyring; +} +EXPORT_SYMBOL_GPL(pci_cma_init); + +int pci_cma_authenticate(struct spdm_state *spdm_state) +{ + return spdm_authenticate(spdm_state); +} +EXPORT_SYMBOL_GPL(pci_cma_authenticate); + +__init static int cma_keyring_init(void) +{ + cma_keyring = keyring_alloc("_cma", + KUIDT_INIT(0), 
KGIDT_INIT(0), + current_cred(), + (KEY_POS_ALL & ~KEY_POS_SETATTR) | + KEY_USR_VIEW | KEY_USR_READ | KEY_USR_WRITE | KEY_USR_SEARCH, + KEY_ALLOC_NOT_IN_QUOTA | KEY_ALLOC_SET_KEEP, NULL, NULL); + if (IS_ERR(cma_keyring)) + pr_err("Could not allocate cma keyring\n"); + + return 0; +} +device_initcall(cma_keyring_init); +MODULE_LICENSE("GPL v2"); diff --git a/include/linux/pci-cma.h b/include/linux/pci-cma.h new file mode 100644 index 000000000000..3790e532cd38 --- /dev/null +++ b/include/linux/pci-cma.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Component Measurement and Authentication was added as an ECN to the + * PCIe r5.0 spec. + * + * Copyright (C) 2021 Huawei + * Jonathan Cameron + */ + +#ifndef _PCI_CMA_H_ +#define _PCI_CMA_H_ +struct pci_doe_dev; +struct spdm_state; + +void pci_cma_init(struct pci_doe_dev *doe_dev, struct spdm_state *spdm_state); + +int pci_cma_authenticate(struct spdm_state *spdm_state); + +#endif From patchwork Thu Mar 3 13:59:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jonathan Cameron X-Patchwork-Id: 12767530 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 81876C433EF for ; Thu, 3 Mar 2022 14:06:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233589AbiCCOHK (ORCPT ); Thu, 3 Mar 2022 09:07:10 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233718AbiCCOHJ (ORCPT ); Thu, 3 Mar 2022 09:07:09 -0500 Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 97EB118C7BC; Thu, 3 Mar 2022 06:06:22 -0800 (PST) Received: from fraeml744-chm.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4K8Xmg5fZjz67lrx; Thu, 3 Mar 2022 22:04:51 +0800 (CST) Received: from lhreml710-chm.china.huawei.com (10.201.108.61) by fraeml744-chm.china.huawei.com (10.206.15.225) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 15:06:03 +0100 Received: from SecurePC-101-06.china.huawei.com (10.122.247.231) by lhreml710-chm.china.huawei.com (10.201.108.61) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2308.21; Thu, 3 Mar 2022 14:06:02 +0000 From: Jonathan Cameron To: , CC: , Lorenzo Pieralisi , Chris Browy , , "Bjorn Helgaas" , "David E . Box" , Subject: [RFC PATCH v2 14/14] cxl/pci: Add really basic CMA authentication support. Date: Thu, 3 Mar 2022 13:59:05 +0000 Message-ID: <20220303135905.10420-15-Jonathan.Cameron@huawei.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> References: <20220303135905.10420-1-Jonathan.Cameron@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.122.247.231] X-ClientProxiedBy: lhreml706-chm.china.huawei.com (10.201.108.55) To lhreml710-chm.china.huawei.com (10.201.108.61) X-CFilter-Loop: Reflected Precedence: bulk List-ID: X-Mailing-List: keyrings@vger.kernel.org This is just for purposes of poking the CMA / SPDM code. What exactly the model in the driver looks like is still to be worked out. 
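To make the intended flow concrete, the calling convention used in the cxl_pci_probe() change below boils down to the following sketch. Only pci_cma_init(), pci_cma_authenticate() and the pci_doe_dev / spdm_state types come from this series; the function name and error handling are purely illustrative, and the header paths are assumed from earlier patches in the series.

#include <linux/pci.h>
#include <linux/pci-cma.h>
#include <linux/spdm.h>

/*
 * Illustrative only: authenticate one device via a DOE mailbox that
 * advertises the CMA protocol. spdm_state can live on the stack as this
 * PoC retains nothing once authentication has completed.
 */
static int example_cma_authenticate(struct pci_doe_dev *doe_dev)
{
	struct spdm_state spdm_state;
	int rc;

	/* Bind the SPDM state machine to the CMA/DOE transport for this device */
	pci_cma_init(doe_dev, &spdm_state);

	/* Runs GET_VERSION through CHALLENGE, verifying certs against the _cma keyring */
	rc = pci_cma_authenticate(&spdm_state);
	if (rc)
		dev_err(&doe_dev->pdev->dev, "CMA authentication failed: %d\n", rc);

	return rc;
}

Keeping spdm_state transient like this matches the current PoC, where nothing from the exchange is cached for later policy decisions.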
Note that PROBE_FORCE_SYNCHRONOUS is a workaround to avoid warnings about trying to load an additional crypto module whilst doing an asynchronous probe. Signed-off-by: Jonathan Cameron --- drivers/cxl/Kconfig | 1 + drivers/cxl/cxlmem.h | 2 ++ drivers/cxl/pci.c | 40 +++++++++++++++++++++++++++++++++++++++- lib/spdm.c | 2 +- 4 files changed, 43 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 9d53720bea07..4dfd2c19b285 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -17,6 +17,7 @@ config CXL_MEM tristate "CXL.mem: Memory Devices" default CXL_BUS select PCI_DOE_DRIVER + select PCI_CMA help The CXL.mem protocol allows a device to act as a provider of "System RAM" and/or "Persistent Memory" that is fully coherent diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index b4209170f4ac..b69328118632 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -100,6 +100,7 @@ struct cxl_mbox_cmd { * * @dev: The device associated with this CXL state * @cdat_doe: Auxiliary DOE device capabile of reading CDAT + * @cma_doe: Component measurement and authentication mailbox * @regs: Parsed register blocks * @payload_size: Size of space for payload * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) @@ -132,6 +133,7 @@ struct cxl_dev_state { struct device *dev; struct pci_doe_dev *cdat_doe; + struct pci_doe_dev *cma_doe; struct cxl_regs regs; size_t payload_size; diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index ed94a6bef2de..ad823eeaafd9 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -1,9 +1,14 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2020 Intel Corporation. All rights reserved. */ #include +//#include +#include +#include +//#include #include #include #include +#include #include #include #include @@ -494,6 +499,26 @@ static int cxl_match_cdat_doe_device(struct device *dev, const void *data) return 0; } +static int cxl_match_cma_doe_device(struct device *dev, const void *data) +{ + const struct cxl_dev_state *cxlds = data; + struct auxiliary_device *adev; + struct pci_doe_dev *doe_dev; + + /* First determine if this auxiliary device belongs to the cxlds */ + if (cxlds->dev != dev->parent) + return 0; + + adev = to_auxiliary_dev(dev); + doe_dev = container_of(adev, struct pci_doe_dev, adev); + + /* If it is one of ours, check for the CMA protocol */ + if (pci_doe_supports_prot(doe_dev, PCI_VENDOR_ID_PCI_SIG, 1)) //hack + return 1; + + return 0; +} + static int cxl_setup_doe_devices(struct cxl_dev_state *cxlds) { struct device *dev = cxlds->dev; @@ -519,6 +544,10 @@ static int cxl_setup_doe_devices(struct cxl_dev_state *cxlds) cxlds->cdat_doe = doe_dev; } + adev = auxiliary_find_device(NULL, cxlds, &cxl_match_cma_doe_device); + if (adev) + cxlds->cma_doe = container_of(adev, struct pci_doe_dev, adev); + return 0; } @@ -643,6 +672,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) struct cxl_register_map map; struct cxl_memdev *cxlmd; struct cxl_dev_state *cxlds; + struct spdm_state spdm_state; int rc; /* @@ -670,6 +700,14 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) cxl_initialize_cdat_callbacks(cxlds); + /* CMA is optional - policy control will be needed */ + if (cxlds->cma_doe) { + pci_cma_init(cxlds->cma_doe, &spdm_state); + rc = pci_cma_authenticate(&spdm_state); + if (rc) + return rc; + } + rc = cxl_map_regs(cxlds, &map); if (rc) return rc; @@ -712,7 +750,7 @@ static struct pci_driver cxl_pci_driver = { .id_table = cxl_mem_pci_tbl,
.probe = cxl_pci_probe, .driver = { - .probe_type = PROBE_PREFER_ASYNCHRONOUS, + .probe_type = PROBE_FORCE_SYNCHRONOUS, }, }; diff --git a/lib/spdm.c b/lib/spdm.c index 3ce2341647f8..84a2d7f3989e 100644 --- a/lib/spdm.c +++ b/lib/spdm.c @@ -921,7 +921,7 @@ static int spdm_get_certificate(struct spdm_state *spdm_state) key_ref_to_ptr(key2)->payload.data[asym_auth]; key = find_asymmetric_key(spdm_state->root_keyring, sig->auth_ids[0], - sig->auth_ids[1], false); + sig->auth_ids[1], NULL, false); if (IS_ERR(key)) { dev_err(spdm_state->dev, "Unable to retrieve signing certificate from _cma keyring\n");