From patchwork Wed Feb 17 04:09:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188239 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 92385C43381 for ; Wed, 17 Feb 2021 04:11:04 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6E3ED64E2E for ; Wed, 17 Feb 2021 04:11:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231182AbhBQEKt (ORCPT ); Tue, 16 Feb 2021 23:10:49 -0500 Received: from mga07.intel.com ([134.134.136.100]:21744 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229808AbhBQEKr (ORCPT ); Tue, 16 Feb 2021 23:10:47 -0500 IronPort-SDR: pfOs4FOktMolimMC9zKl1DeukP1D/JcucTbfz7Mbki+lcgPo+hDEyCyHEH5m7+/F/GlWs81pPi yfjfQIEbsy6w== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165906" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165906" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:06 -0800 IronPort-SDR: zO4Z8qPBlkmIKZO3HlhH2sv5GjP0KiztRhX9Ssn3v4rN9F7ZdneGa+cysa6+kMNMFJ/NvOAP6i kKfjtSlLuh/w== X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948775" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:04 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" , Jonathan Corbet , Jonathan Cameron Subject: [PATCH v5 1/9] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints Date: Tue, 16 Feb 2021 20:09:50 -0800 Message-Id: <20210217040958.1354670-2-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Dan Williams The CXL.mem protocol allows a device to act as a provider of "System RAM" and/or "Persistent Memory" that is fully coherent as if the memory was attached to the typical CPU memory controller. With the CXL-2.0 specification a PCI endpoint can implement a "Type-3" device interface and give the operating system control over "Host Managed Device Memory". See section 2.3 Type 3 CXL Device. The memory range exported by the device may optionally be described by the platform firmware memory map, or by infrastructure like LIBNVDIMM to provision persistent memory capacity from one, or more, CXL.mem devices. 
A pre-requisite for Linux-managed memory-capacity provisioning is this cxl_mem driver that can speak the mailbox protocol defined in section 8.2.8.4 Mailbox Registers. For now just land the initial driver boiler-plate and Documentation/ infrastructure. Link: https://www.computeexpresslink.org/download-the-specification Cc: Jonathan Corbet Signed-off-by: Dan Williams Signed-off-by: Ben Widawsky Acked-by: David Rientjes (v1) Reviewed-by: Jonathan Cameron Reviewed-by: Konrad Rzeszutek Wilk --- Documentation/driver-api/cxl/index.rst | 12 ++++ .../driver-api/cxl/memory-devices.rst | 15 +++++ Documentation/driver-api/index.rst | 1 + drivers/Kconfig | 1 + drivers/Makefile | 1 + drivers/cxl/Kconfig | 35 +++++++++++ drivers/cxl/Makefile | 4 ++ drivers/cxl/mem.c | 62 +++++++++++++++++++ drivers/cxl/pci.h | 17 +++++ include/linux/pci_ids.h | 1 + 10 files changed, 149 insertions(+) create mode 100644 Documentation/driver-api/cxl/index.rst create mode 100644 Documentation/driver-api/cxl/memory-devices.rst create mode 100644 drivers/cxl/Kconfig create mode 100644 drivers/cxl/Makefile create mode 100644 drivers/cxl/mem.c create mode 100644 drivers/cxl/pci.h diff --git a/Documentation/driver-api/cxl/index.rst b/Documentation/driver-api/cxl/index.rst new file mode 100644 index 000000000000..036e49553542 --- /dev/null +++ b/Documentation/driver-api/cxl/index.rst @@ -0,0 +1,12 @@ +.. SPDX-License-Identifier: GPL-2.0 + +==================== +Compute Express Link +==================== + +.. toctree:: + :maxdepth: 1 + + memory-devices + +.. only:: subproject and html diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst new file mode 100644 index 000000000000..71617e9002f1 --- /dev/null +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -0,0 +1,15 @@ +.. SPDX-License-Identifier: GPL-2.0 +.. include:: + +=================================== +Compute Express Link Memory Devices +=================================== + +A Compute Express Link Memory Device is a CXL component that implements the +CXL.mem protocol. It contains some amount of volatile memory, persistent memory, +or both. It is enumerated as a PCI device for configuration and passing +messages over an MMIO mailbox. Its contribution to the System Physical +Address space is handled via HDM (Host Managed Device Memory) decoders +that optionally define a device's contribution to an interleaved address +range across multiple devices underneath a host-bridge or interleaved +across host-bridges. diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst index 2456d0a97ed8..d246a18fd78f 100644 --- a/Documentation/driver-api/index.rst +++ b/Documentation/driver-api/index.rst @@ -35,6 +35,7 @@ available subsections can be seen below. 
usb/index firewire pci/index + cxl/index spi i2c ipmb diff --git a/drivers/Kconfig b/drivers/Kconfig index dcecc9f6e33f..62c753a73651 100644 --- a/drivers/Kconfig +++ b/drivers/Kconfig @@ -6,6 +6,7 @@ menu "Device Drivers" source "drivers/amba/Kconfig" source "drivers/eisa/Kconfig" source "drivers/pci/Kconfig" +source "drivers/cxl/Kconfig" source "drivers/pcmcia/Kconfig" source "drivers/rapidio/Kconfig" diff --git a/drivers/Makefile b/drivers/Makefile index fd11b9ac4cc3..678ea810410f 100644 --- a/drivers/Makefile +++ b/drivers/Makefile @@ -73,6 +73,7 @@ obj-$(CONFIG_NVM) += lightnvm/ obj-y += base/ block/ misc/ mfd/ nfc/ obj-$(CONFIG_LIBNVDIMM) += nvdimm/ obj-$(CONFIG_DAX) += dax/ +obj-$(CONFIG_CXL_BUS) += cxl/ obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/ obj-$(CONFIG_NUBUS) += nubus/ obj-y += macintosh/ diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig new file mode 100644 index 000000000000..9e80b311e928 --- /dev/null +++ b/drivers/cxl/Kconfig @@ -0,0 +1,35 @@ +# SPDX-License-Identifier: GPL-2.0-only +menuconfig CXL_BUS + tristate "CXL (Compute Express Link) Devices Support" + depends on PCI + help + CXL is a bus that is electrically compatible with PCI Express, but + layers three protocols on that signalling (CXL.io, CXL.cache, and + CXL.mem). The CXL.cache protocol allows devices to hold cachelines + locally, the CXL.mem protocol allows devices to be fully coherent + memory targets, the CXL.io protocol is equivalent to PCI Express. + Say 'y' to enable support for the configuration and management of + devices supporting these protocols. + +if CXL_BUS + +config CXL_MEM + tristate "CXL.mem: Memory Devices" + help + The CXL.mem protocol allows a device to act as a provider of + "System RAM" and/or "Persistent Memory" that is fully coherent + as if the memory was attached to the typical CPU memory + controller. + + Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as + a module) that will attach to CXL.mem devices for + configuration, provisioning, and health monitoring. This + driver is required for dynamic provisioning of CXL.mem + attached memory which is a prerequisite for persistent memory + support. Typically volatile memory is mapped by platform + firmware and included in the platform memory map, but in some + cases the OS is responsible for mapping that memory. See + Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification. + + If unsure say 'm'. +endif diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile new file mode 100644 index 000000000000..4a30f7c3fc4a --- /dev/null +++ b/drivers/cxl/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_CXL_MEM) += cxl_mem.o + +cxl_mem-y := mem.o diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c new file mode 100644 index 000000000000..ce33c5ee77c9 --- /dev/null +++ b/drivers/cxl/mem.c @@ -0,0 +1,62 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2020 Intel Corporation. All rights reserved. 
*/ +#include +#include +#include +#include "pci.h" + +static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) +{ + int pos; + + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC); + if (!pos) + return 0; + + while (pos) { + u16 vendor, id; + + pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vendor); + pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, &id); + if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id) + return pos; + + pos = pci_find_next_ext_capability(pdev, pos, + PCI_EXT_CAP_ID_DVSEC); + } + + return 0; +} + +static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) +{ + struct device *dev = &pdev->dev; + int regloc; + + regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET); + if (!regloc) { + dev_err(dev, "register location dvsec not found\n"); + return -ENXIO; + } + + return 0; +} + +static const struct pci_device_id cxl_mem_pci_tbl[] = { + /* PCI class code for CXL.mem Type-3 Devices */ + { PCI_DEVICE_CLASS((PCI_CLASS_MEMORY_CXL << 8 | CXL_MEMORY_PROGIF), ~0)}, + { /* terminate list */ }, +}; +MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl); + +static struct pci_driver cxl_mem_driver = { + .name = KBUILD_MODNAME, + .id_table = cxl_mem_pci_tbl, + .probe = cxl_mem_probe, + .driver = { + .probe_type = PROBE_PREFER_ASYNCHRONOUS, + }, +}; + +MODULE_LICENSE("GPL v2"); +module_pci_driver(cxl_mem_driver); diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h new file mode 100644 index 000000000000..e464bea3f4d3 --- /dev/null +++ b/drivers/cxl/pci.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */ +#ifndef __CXL_PCI_H__ +#define __CXL_PCI_H__ + +#define CXL_MEMORY_PROGIF 0x10 + +/* + * See section 8.1 Configuration Space Registers in the CXL 2.0 + * Specification + */ +#define PCI_DVSEC_VENDOR_ID_CXL 0x1E98 +#define PCI_DVSEC_ID_CXL 0x0 + +#define PCI_DVSEC_ID_CXL_REGLOC_OFFSET 0x8 + +#endif /* __CXL_PCI_H__ */ diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h index d8156a5dbee8..766260a9b247 100644 --- a/include/linux/pci_ids.h +++ b/include/linux/pci_ids.h @@ -51,6 +51,7 @@ #define PCI_BASE_CLASS_MEMORY 0x05 #define PCI_CLASS_MEMORY_RAM 0x0500 #define PCI_CLASS_MEMORY_FLASH 0x0501 +#define PCI_CLASS_MEMORY_CXL 0x0502 #define PCI_CLASS_MEMORY_OTHER 0x0580 #define PCI_BASE_CLASS_BRIDGE 0x06 From patchwork Wed Feb 17 04:09:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188243 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D89CBC4332E for ; Wed, 17 Feb 2021 04:11:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B028764E33 for ; Wed, 17 Feb 2021 04:11:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231217AbhBQELE (ORCPT ); Tue, 16 Feb 2021 23:11:04 -0500 Received: from mga07.intel.com ([134.134.136.100]:21746 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S231185AbhBQEKv (ORCPT ); Tue, 16 Feb 2021 23:10:51 -0500 IronPort-SDR: Jsf8zKMrwZUWthjA0e7pkjHZUWy0jPLg+uumtrUca/8EDfqu9XRrwRpcSapmLtiGVS7n3kVgCa NUGixwl1UVEw== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165908" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165908" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:07 -0800 IronPort-SDR: 99YNcOG0Sxr7/JUoO815rxvknFjilV/wzshNOSvmm5GLCzSkNC40K/adZO4eNoJijdwAgMZ25i gUL1qzY2P8VQ== X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948800" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:06 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" , Colin Ian King , Dan Carpenter Subject: [PATCH v5 2/9] cxl/mem: Find device capabilities Date: Tue, 16 Feb 2021 20:09:51 -0800 Message-Id: <20210217040958.1354670-3-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Provide enough functionality to utilize the mailbox of a memory device. The mailbox is used to interact with the firmware running on the memory device. The flow is proven with one implemented command, "identify", chosen because the class code has already told the driver this is a memory device and the identify command is mandatory. CXL devices contain an array of capabilities that describe the interactions software can have with the device or firmware running on the device. A CXL compliant device must implement the device status and the mailbox capability. Additionally, a CXL compliant memory device must implement the memory device capability. Each of the capabilities can [will] provide an offset within the MMIO region for interacting with the CXL device. The capabilities tell the driver how to find and map the register space for CXL Memory Devices. The registers are required to utilize the CXL spec defined mailbox interface. The spec outlines two mailboxes, primary and secondary. The secondary mailbox is earmarked for system firmware, and not handled in this driver. Primary mailboxes are capable of generating an interrupt when submitting a background command. That implementation is saved for a later time.
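
For reference, the doorbell handshake described above reduces to roughly the following condensed sketch. This is not the driver code itself; locking, timeouts, and payload copies are elided, and it assumes the struct cxl_mem and CXLDEV_MBOX_* definitions that cxl.h adds in this patch, plus FIELD_PREP()/FIELD_GET() from linux/bitfield.h:

/*
 * Minimal sketch of the CXL 2.0 8.2.8.4 mailbox handshake: verify the
 * doorbell is clear, write the command register, ring the doorbell, poll
 * for completion, then read the return code from the status register.
 */
static int cxl_mbox_sketch(struct cxl_mem *cxlm, u16 opcode)
{
	u64 cmd, status;

	/* 1. a new command may only be written while the doorbell is clear */
	if (readl(cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET) &
	    CXLDEV_MBOX_CTRL_DOORBELL)
		return -EBUSY;

	/* 2/3. write the command register (payload registers elided) */
	cmd = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK, opcode);
	writeq(cmd, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET);

	/* 4. ring the doorbell ... */
	writel(CXLDEV_MBOX_CTRL_DOORBELL,
	       cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET);

	/* 5. ... and poll for the device to clear it (no timeout here) */
	while (readl(cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET) &
	       CXLDEV_MBOX_CTRL_DOORBELL)
		cpu_relax();

	/* 6. the hardware return code lives in the status register */
	status = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET);
	if (FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status))
		return -ENXIO;

	return 0;
}

The real __cxl_mem_mbox_send_cmd() below adds the payload register copies, a bounded timeout on the doorbell poll, and serialization via mbox_mutex.
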
Reported-by: Colin Ian King (coverity) Reported-by: Dan Carpenter (smatch) Link: https://www.computeexpresslink.org/download-the-specification Signed-off-by: Ben Widawsky Reviewed-by: Dan Williams (v2) Reviewed-by: Jonathan Cameron --- .../driver-api/cxl/memory-devices.rst | 14 + drivers/cxl/cxl.h | 90 +++ drivers/cxl/mem.c | 577 +++++++++++++++++- drivers/cxl/pci.h | 14 + 4 files changed, 693 insertions(+), 2 deletions(-) create mode 100644 drivers/cxl/cxl.h diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst index 71617e9002f1..43177e700d62 100644 --- a/Documentation/driver-api/cxl/memory-devices.rst +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -13,3 +13,17 @@ Address space is handled via HDM (Host Managed Device Memory) decoders that optionally define a device's contribution to an interleaved address range across multiple devices underneath a host-bridge or interleaved across host-bridges. + +Driver Infrastructure +===================== + +This section covers the driver infrastructure for a CXL memory device. + +CXL Memory Device +----------------- + +.. kernel-doc:: drivers/cxl/mem.c + :doc: cxl mem + +.. kernel-doc:: drivers/cxl/mem.c + :internal: diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h new file mode 100644 index 000000000000..baac26d9e63b --- /dev/null +++ b/drivers/cxl/cxl.h @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2020 Intel Corporation. */ + +#ifndef __CXL_H__ +#define __CXL_H__ + +#include +#include +#include + +/* CXL 2.0 8.2.8.1 Device Capabilities Array Register */ +#define CXLDEV_CAP_ARRAY_OFFSET 0x0 +#define CXLDEV_CAP_ARRAY_CAP_ID 0 +#define CXLDEV_CAP_ARRAY_ID_MASK GENMASK_ULL(15, 0) +#define CXLDEV_CAP_ARRAY_COUNT_MASK GENMASK_ULL(47, 32) +/* CXL 2.0 8.2.8.2 CXL Device Capability Header Register */ +#define CXLDEV_CAP_HDR_CAP_ID_MASK GENMASK(15, 0) +/* CXL 2.0 8.2.8.2.1 CXL Device Capabilities */ +#define CXLDEV_CAP_CAP_ID_DEVICE_STATUS 0x1 +#define CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX 0x2 +#define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3 +#define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000 + +/* CXL 2.0 8.2.8.4 Mailbox Registers */ +#define CXLDEV_MBOX_CAPS_OFFSET 0x00 +#define CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0) +#define CXLDEV_MBOX_CTRL_OFFSET 0x04 +#define CXLDEV_MBOX_CTRL_DOORBELL BIT(0) +#define CXLDEV_MBOX_CMD_OFFSET 0x08 +#define CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK GENMASK_ULL(15, 0) +#define CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK GENMASK_ULL(36, 16) +#define CXLDEV_MBOX_STATUS_OFFSET 0x10 +#define CXLDEV_MBOX_STATUS_RET_CODE_MASK GENMASK_ULL(47, 32) +#define CXLDEV_MBOX_BG_CMD_STATUS_OFFSET 0x18 +#define CXLDEV_MBOX_PAYLOAD_OFFSET 0x20 + +/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ +#define CXLMDEV_STATUS_OFFSET 0x0 +#define CXLMDEV_DEV_FATAL BIT(0) +#define CXLMDEV_FW_HALT BIT(1) +#define CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2) +#define CXLMDEV_MS_NOT_READY 0 +#define CXLMDEV_MS_READY 1 +#define CXLMDEV_MS_ERROR 2 +#define CXLMDEV_MS_DISABLED 3 +#define CXLMDEV_READY(status) \ + (FIELD_GET(CXLMDEV_STATUS_MEDIA_STATUS_MASK, status) == \ + CXLMDEV_MS_READY) +#define CXLMDEV_MBOX_IF_READY BIT(4) +#define CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5) +#define CXLMDEV_RESET_NEEDED_NOT 0 +#define CXLMDEV_RESET_NEEDED_COLD 1 +#define CXLMDEV_RESET_NEEDED_WARM 2 +#define CXLMDEV_RESET_NEEDED_HOT 3 +#define CXLMDEV_RESET_NEEDED_CXL 4 +#define CXLMDEV_RESET_NEEDED(status) \ + (FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) != \ + 
CXLMDEV_RESET_NEEDED_NOT) + +/** + * struct cxl_mem - A CXL memory device + * @pdev: The PCI device associated with this CXL device. + * @regs: IO mappings to the device's MMIO + * @status_regs: CXL 2.0 8.2.8.3 Device Status Registers + * @mbox_regs: CXL 2.0 8.2.8.4 Mailbox Registers + * @memdev_regs: CXL 2.0 8.2.8.5 Memory Device Registers + * @payload_size: Size of space for payload + * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) + * @mbox_mutex: Mutex to synchronize mailbox access. + * @firmware_version: Firmware version for the memory device. + * @pmem_range: Persistent memory capacity information. + * @ram_range: Volatile memory capacity information. + */ +struct cxl_mem { + struct pci_dev *pdev; + void __iomem *regs; + + void __iomem *status_regs; + void __iomem *mbox_regs; + void __iomem *memdev_regs; + + size_t payload_size; + struct mutex mbox_mutex; /* Protects device mailbox and firmware */ + char firmware_version[0x10]; + + struct range pmem_range; + struct range ram_range; +}; + +#endif /* __CXL_H__ */ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index ce33c5ee77c9..04e15759bc9b 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -3,7 +3,491 @@ #include #include #include +#include #include "pci.h" +#include "cxl.h" + +/** + * DOC: cxl mem + * + * This implements a CXL memory device ("type-3") as it is defined by the + * Compute Express Link specification. + * + * The driver has several responsibilities, mainly: + * - Create the memX device and register on the CXL bus. + * - Enumerate device's register interface and map them. + * - Probe the device attributes to establish sysfs interface. + * - Provide an IOCTL interface to userspace to communicate with the device for + * things like firmware update. + * - Support management of interleave sets. + * - Handle and manage error conditions. + */ + +#define cxl_doorbell_busy(cxlm) \ + (readl((cxlm)->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET) & \ + CXLDEV_MBOX_CTRL_DOORBELL) + +/* CXL 2.0 - 8.2.8.4 */ +#define CXL_MAILBOX_TIMEOUT_MS (2 * HZ) + +enum opcode { + CXL_MBOX_OP_IDENTIFY = 0x4000, + CXL_MBOX_OP_MAX = 0x10000 +}; + +/** + * struct mbox_cmd - A command to be submitted to hardware. + * @opcode: (input) The command set and command submitted to hardware. + * @payload_in: (input) Pointer to the input payload. + * @payload_out: (output) Pointer to the output payload. Must be allocated by + * the caller. + * @size_in: (input) Number of bytes to load from @payload_in. + * @size_out: (input) Max number of bytes loaded into @payload_out. + * (output) Number of bytes generated by the device. For fixed size + * outputs commands this is always expected to be deterministic. For + * variable sized output commands, it tells the exact number of bytes + * written. + * @return_code: (output) Error code returned from hardware. + * + * This is the primary mechanism used to send commands to the hardware. + * All the fields except @payload_* correspond exactly to the fields described in + * Command Register section of the CXL 2.0 8.2.8.4.5. @payload_in and + * @payload_out are written to, and read from the Command Payload Registers + * defined in CXL 2.0 8.2.8.4.8. 
+ */ +struct mbox_cmd { + u16 opcode; + void *payload_in; + void *payload_out; + size_t size_in; + size_t size_out; + u16 return_code; +#define CXL_MBOX_SUCCESS 0 +}; + +static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm) +{ + const unsigned long start = jiffies; + unsigned long end = start; + + while (cxl_doorbell_busy(cxlm)) { + end = jiffies; + + if (time_after(end, start + CXL_MAILBOX_TIMEOUT_MS)) { + /* Check again in case preempted before timeout test */ + if (!cxl_doorbell_busy(cxlm)) + break; + return -ETIMEDOUT; + } + cpu_relax(); + } + + dev_dbg(&cxlm->pdev->dev, "Doorbell wait took %dms", + jiffies_to_msecs(end) - jiffies_to_msecs(start)); + return 0; +} + +static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm, + struct mbox_cmd *mbox_cmd) +{ + struct device *dev = &cxlm->pdev->dev; + + dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n", + mbox_cmd->opcode, mbox_cmd->size_in); +} + +/** + * __cxl_mem_mbox_send_cmd() - Execute a mailbox command + * @cxlm: The CXL memory device to communicate with. + * @mbox_cmd: Command to send to the memory device. + * + * Context: Any context. Expects mbox_mutex to be held. + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success. + * Caller should check the return code in @mbox_cmd to make sure it + * succeeded. + * + * This is a generic form of the CXL mailbox send command thus only using the + * registers defined by the mailbox capability ID - CXL 2.0 8.2.8.4. Memory + * devices, and perhaps other types of CXL devices may have further information + * available upon error conditions. Driver facilities wishing to send mailbox + * commands should use the wrapper command. + * + * The CXL spec allows for up to two mailboxes. The intention is for the primary + * mailbox to be OS controlled and the secondary mailbox to be used by system + * firmware. This allows the OS and firmware to communicate with the device and + * not need to coordinate with each other. The driver only uses the primary + * mailbox. + */ +static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, + struct mbox_cmd *mbox_cmd) +{ + void __iomem *payload = cxlm->mbox_regs + CXLDEV_MBOX_PAYLOAD_OFFSET; + u64 cmd_reg, status_reg; + size_t out_len; + int rc; + + lockdep_assert_held(&cxlm->mbox_mutex); + + /* + * Here are the steps from 8.2.8.4 of the CXL 2.0 spec. + * 1. Caller reads MB Control Register to verify doorbell is clear + * 2. Caller writes Command Register + * 3. Caller writes Command Payload Registers if input payload is non-empty + * 4. Caller writes MB Control Register to set doorbell + * 5. Caller either polls for doorbell to be clear or waits for interrupt if configured + * 6. Caller reads MB Status Register to fetch Return code + * 7. If command successful, Caller reads Command Register to get Payload Length + * 8. If output payload is non-empty, host reads Command Payload Registers + * + * Hardware is free to do whatever it wants before the doorbell is rung, + * and isn't allowed to change anything after it clears the doorbell. As + * such, steps 2 and 3 can happen in any order, and steps 6, 7, 8 can + * also happen in any order (though some orders might not make sense). 
+ */ + + /* #1 */ + if (cxl_doorbell_busy(cxlm)) { + dev_err_ratelimited(&cxlm->pdev->dev, + "Mailbox re-busy after acquiring\n"); + return -EBUSY; + } + + cmd_reg = FIELD_PREP(CXLDEV_MBOX_CMD_COMMAND_OPCODE_MASK, + mbox_cmd->opcode); + if (mbox_cmd->size_in) { + if (WARN_ON(!mbox_cmd->payload_in)) + return -EINVAL; + + cmd_reg |= FIELD_PREP(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, + mbox_cmd->size_in); + memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in); + } + + /* #2, #3 */ + writeq(cmd_reg, cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET); + + /* #4 */ + dev_dbg(&cxlm->pdev->dev, "Sending command\n"); + writel(CXLDEV_MBOX_CTRL_DOORBELL, + cxlm->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET); + + /* #5 */ + rc = cxl_mem_wait_for_doorbell(cxlm); + if (rc == -ETIMEDOUT) { + cxl_mem_mbox_timeout(cxlm, mbox_cmd); + return rc; + } + + /* #6 */ + status_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_STATUS_OFFSET); + mbox_cmd->return_code = + FIELD_GET(CXLDEV_MBOX_STATUS_RET_CODE_MASK, status_reg); + + if (mbox_cmd->return_code != 0) { + dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n"); + return 0; + } + + /* #7 */ + cmd_reg = readq(cxlm->mbox_regs + CXLDEV_MBOX_CMD_OFFSET); + out_len = FIELD_GET(CXLDEV_MBOX_CMD_PAYLOAD_LENGTH_MASK, cmd_reg); + + /* #8 */ + if (out_len && mbox_cmd->payload_out) { + /* + * Sanitize the copy. If hardware misbehaves, out_len per the + * spec can actually be greater than the max allowed size (21 + * bits available but spec defined 1M max). The caller also may + * have requested less data than the hardware supplied even + * within spec. + */ + size_t n = min3(mbox_cmd->size_out, cxlm->payload_size, out_len); + + memcpy_fromio(mbox_cmd->payload_out, payload, n); + mbox_cmd->size_out = n; + } else { + mbox_cmd->size_out = 0; + } + + return 0; +} + +/** + * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox. + * @cxlm: The memory device to gain access to. + * + * Context: Any context. Takes the mbox_mutex. + * Return: 0 if exclusive access was acquired. + */ +static int cxl_mem_mbox_get(struct cxl_mem *cxlm) +{ + struct device *dev = &cxlm->pdev->dev; + u64 md_status; + int rc; + + mutex_lock_io(&cxlm->mbox_mutex); + + /* + * XXX: There is some amount of ambiguity in the 2.0 version of the spec + * around the mailbox interface ready (8.2.8.5.1.1). The purpose of the + * bit is to allow firmware running on the device to notify the driver + * that it's ready to receive commands. It is unclear if the bit needs + * to be read for each transaction mailbox, ie. the firmware can switch + * it on and off as needed. Second, there is no defined timeout for + * mailbox ready, like there is for the doorbell interface. + * + * Assumptions: + * 1. The firmware might toggle the Mailbox Interface Ready bit, check + * it for every command. + * + * 2. If the doorbell is clear, the firmware should have first set the + * Mailbox Interface Ready bit. Therefore, waiting for the doorbell + * to be ready is sufficient. + */ + rc = cxl_mem_wait_for_doorbell(cxlm); + if (rc) { + dev_warn(dev, "Mailbox interface not ready\n"); + goto out; + } + + md_status = readq(cxlm->memdev_regs + CXLMDEV_STATUS_OFFSET); + if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) { + dev_err(dev, "mbox: reported doorbell ready, but not mbox ready\n"); + rc = -EBUSY; + goto out; + } + + /* + * Hardware shouldn't allow a ready status but also have failure bits + * set. 
Spit out an error, this should be a bug report + */ + rc = -EFAULT; + if (md_status & CXLMDEV_DEV_FATAL) { + dev_err(dev, "mbox: reported ready, but fatal\n"); + goto out; + } + if (md_status & CXLMDEV_FW_HALT) { + dev_err(dev, "mbox: reported ready, but halted\n"); + goto out; + } + if (CXLMDEV_RESET_NEEDED(md_status)) { + dev_err(dev, "mbox: reported ready, but reset needed\n"); + goto out; + } + + /* with lock held */ + return 0; + +out: + mutex_unlock(&cxlm->mbox_mutex); + return rc; +} + +/** + * cxl_mem_mbox_put() - Release exclusive access to the mailbox. + * @cxlm: The CXL memory device to communicate with. + * + * Context: Any context. Expects mbox_mutex to be held. + */ +static void cxl_mem_mbox_put(struct cxl_mem *cxlm) +{ + mutex_unlock(&cxlm->mbox_mutex); +} + +/** + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device. + * @cxlm: The CXL memory device to communicate with. + * @opcode: Opcode for the mailbox command. + * @in: The input payload for the mailbox command. + * @in_size: The length of the input payload + * @out: Caller allocated buffer for the output. + * @out_size: Expected size of output. + * + * Context: Any context. Will acquire and release mbox_mutex. + * Return: + * * %>=0 - Number of bytes returned in @out. + * * %-E2BIG - Payload is too large for hardware. + * * %-EBUSY - Couldn't acquire exclusive mailbox access. + * * %-EFAULT - Hardware error occurred. + * * %-ENXIO - Command completed, but device reported an error. + * * %-EIO - Unexpected output size. + * + * Mailbox commands may execute successfully yet the device itself reported an + * error. While this distinction can be useful for commands from userspace, the + * kernel will only be able to use results when both are successful. It's + * expected that all callers of this function know exactly the size of the data + * they will consume from the hardware. + * + * See __cxl_mem_mbox_send_cmd() + */ +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, + void *in, size_t in_size, + void *out, size_t out_size) +{ + struct mbox_cmd mbox_cmd = { + .opcode = opcode, + .payload_in = in, + .size_in = in_size, + .size_out = out_size, + .payload_out = out, + }; + int rc; + + if (out_size > cxlm->payload_size) + return -E2BIG; + + rc = cxl_mem_mbox_get(cxlm); + if (rc) + return rc; + + rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd); + cxl_mem_mbox_put(cxlm); + if (rc) + return rc; + + /* TODO: Map return code to proper kernel style errno */ + if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) + return -ENXIO; + + if (mbox_cmd.size_out != out_size) + return -EIO; + + return 0; +} + +/** + * cxl_mem_setup_regs() - Setup necessary MMIO. + * @cxlm: The CXL memory device to communicate with. + * + * Return: 0 if all necessary registers mapped. + * + * A memory device is required by spec to implement a certain set of MMIO + * regions. The purpose of this function is to enumerate and map those + * registers. 
+ */ +static int cxl_mem_setup_regs(struct cxl_mem *cxlm) +{ + struct device *dev = &cxlm->pdev->dev; + int cap, cap_count; + u64 cap_array; + + cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET); + if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, cap_array) != + CXLDEV_CAP_ARRAY_CAP_ID) + return -ENODEV; + + cap_count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, cap_array); + + for (cap = 1; cap <= cap_count; cap++) { + void __iomem *register_block; + u32 offset; + u16 cap_id; + + cap_id = FIELD_GET(CXLDEV_CAP_HDR_CAP_ID_MASK, + readl(cxlm->regs + cap * 0x10)); + offset = readl(cxlm->regs + cap * 0x10 + 0x4); + register_block = cxlm->regs + offset; + + switch (cap_id) { + case CXLDEV_CAP_CAP_ID_DEVICE_STATUS: + dev_dbg(dev, "found Status capability (0x%x)\n", offset); + cxlm->status_regs = register_block; + break; + case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX: + dev_dbg(dev, "found Mailbox capability (0x%x)\n", offset); + cxlm->mbox_regs = register_block; + break; + case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX: + dev_dbg(dev, "found Secondary Mailbox capability (0x%x)\n", offset); + break; + case CXLDEV_CAP_CAP_ID_MEMDEV: + dev_dbg(dev, "found Memory Device capability (0x%x)\n", offset); + cxlm->memdev_regs = register_block; + break; + default: + dev_dbg(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, offset); + break; + } + } + + if (!cxlm->status_regs || !cxlm->mbox_regs || !cxlm->memdev_regs) { + dev_err(dev, "registers not found: %s%s%s\n", + !cxlm->status_regs ? "status " : "", + !cxlm->mbox_regs ? "mbox " : "", + !cxlm->memdev_regs ? "memdev" : ""); + return -ENXIO; + } + + return 0; +} + +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm) +{ + const int cap = readl(cxlm->mbox_regs + CXLDEV_MBOX_CAPS_OFFSET); + + cxlm->payload_size = + 1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap); + + /* + * CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register + * + * If the size is too small, mandatory commands will not work and so + * there's no point in going forward. If the size is too large, there's + * no harm is soft limiting it. 
+ */ + cxlm->payload_size = min_t(size_t, cxlm->payload_size, SZ_1M); + if (cxlm->payload_size < 256) { + dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)", + cxlm->payload_size); + return -ENXIO; + } + + dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu", + cxlm->payload_size); + + return 0; +} + +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo, + u32 reg_hi) +{ + struct device *dev = &pdev->dev; + struct cxl_mem *cxlm; + void __iomem *regs; + u64 offset; + u8 bar; + int rc; + + cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL); + if (!cxlm) { + dev_err(dev, "No memory available\n"); + return NULL; + } + + offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo); + bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo); + + /* Basic sanity check that BAR is big enough */ + if (pci_resource_len(pdev, bar) < offset) { + dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar, + &pdev->resource[bar], (unsigned long long)offset); + return NULL; + } + + rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev)); + if (rc) { + dev_err(dev, "failed to map registers\n"); + return NULL; + } + regs = pcim_iomap_table(pdev)[bar]; + + mutex_init(&cxlm->mbox_mutex); + cxlm->pdev = pdev; + cxlm->regs = regs + offset; + + dev_dbg(dev, "Mapped CXL Memory Device resource\n"); + return cxlm; +} static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) { @@ -28,10 +512,65 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) return 0; } +/** + * cxl_mem_identify() - Send the IDENTIFY command to the device. + * @cxlm: The device to identify. + * + * Return: 0 if identify was executed successfully. + * + * This will dispatch the identify command to the device and on success populate + * structures to be exported to sysfs. + */ +static int cxl_mem_identify(struct cxl_mem *cxlm) +{ + struct cxl_mbox_identify { + char fw_revision[0x10]; + __le64 total_capacity; + __le64 volatile_capacity; + __le64 persistent_capacity; + __le64 partition_align; + __le16 info_event_log_size; + __le16 warning_event_log_size; + __le16 failure_event_log_size; + __le16 fatal_event_log_size; + __le32 lsa_size; + u8 poison_list_max_mer[3]; + __le16 inject_poison_limit; + u8 poison_caps; + u8 qos_telemetry_caps; + } __packed id; + int rc; + + rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_IDENTIFY, NULL, 0, &id, + sizeof(id)); + if (rc < 0) + return rc; + + /* + * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias. 
+ * For now, only the capacity is exported in sysfs + */ + cxlm->ram_range.start = 0; + cxlm->ram_range.end = le64_to_cpu(id.volatile_capacity) - 1; + + cxlm->pmem_range.start = 0; + cxlm->pmem_range.end = le64_to_cpu(id.persistent_capacity) - 1; + + memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision)); + + return 0; +} + static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct device *dev = &pdev->dev; - int regloc; + struct cxl_mem *cxlm = NULL; + u32 regloc_size, regblocks; + int rc, regloc, i; + + rc = pcim_enable_device(pdev); + if (rc) + return rc; regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC_OFFSET); if (!regloc) { @@ -39,7 +578,41 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) return -ENXIO; } - return 0; + /* Get the size of the Register Locator DVSEC */ + pci_read_config_dword(pdev, regloc + PCI_DVSEC_HEADER1, ®loc_size); + regloc_size = FIELD_GET(PCI_DVSEC_HEADER1_LENGTH_MASK, regloc_size); + + regloc += PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET; + regblocks = (regloc_size - PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET) / 8; + + for (i = 0; i < regblocks; i++, regloc += 8) { + u32 reg_lo, reg_hi; + u8 reg_type; + + /* "register low and high" contain other bits */ + pci_read_config_dword(pdev, regloc, ®_lo); + pci_read_config_dword(pdev, regloc + 4, ®_hi); + + reg_type = FIELD_GET(CXL_REGLOC_RBI_MASK, reg_lo); + + if (reg_type == CXL_REGLOC_RBI_MEMDEV) { + cxlm = cxl_mem_create(pdev, reg_lo, reg_hi); + break; + } + } + + if (!cxlm) + return -ENODEV; + + rc = cxl_mem_setup_regs(cxlm); + if (rc) + return rc; + + rc = cxl_mem_setup_mailbox(cxlm); + if (rc) + return rc; + + return cxl_mem_identify(cxlm); } static const struct pci_device_id cxl_mem_pci_tbl[] = { diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h index e464bea3f4d3..af3ec078cf6c 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -9,9 +9,23 @@ * See section 8.1 Configuration Space Registers in the CXL 2.0 * Specification */ +#define PCI_DVSEC_HEADER1_LENGTH_MASK GENMASK(31, 20) #define PCI_DVSEC_VENDOR_ID_CXL 0x1E98 #define PCI_DVSEC_ID_CXL 0x0 #define PCI_DVSEC_ID_CXL_REGLOC_OFFSET 0x8 +#define PCI_DVSEC_ID_CXL_REGLOC_BLOCK1_OFFSET 0xC + +/* BAR Indicator Register (BIR) */ +#define CXL_REGLOC_BIR_MASK GENMASK(2, 0) + +/* Register Block Identifier (RBI) */ +#define CXL_REGLOC_RBI_MASK GENMASK(15, 8) +#define CXL_REGLOC_RBI_EMPTY 0 +#define CXL_REGLOC_RBI_COMPONENT 1 +#define CXL_REGLOC_RBI_VIRT 2 +#define CXL_REGLOC_RBI_MEMDEV 3 + +#define CXL_REGLOC_ADDR_MASK GENMASK(31, 16) #endif /* __CXL_PCI_H__ */ From patchwork Wed Feb 17 04:09:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B529EC433DB for ; Wed, 17 Feb 2021 04:11:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 826E564E02 for ; Wed, 17 Feb 2021 04:11:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by 
vger.kernel.org via listexpand id S231236AbhBQELG (ORCPT ); Tue, 16 Feb 2021 23:11:06 -0500 Received: from mga07.intel.com ([134.134.136.100]:21743 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231210AbhBQELC (ORCPT ); Tue, 16 Feb 2021 23:11:02 -0500 IronPort-SDR: jjjadyMIgX5KbXxUxhpQg1smawYxe/GnVgnxWiNU1iIvyuDH0GLiZlfrTjbXHyzrFeaH/uwYit uNOQP5f1DpjQ== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165916" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165916" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:09 -0800 IronPort-SDR: wSqbwkWvxDVRMl49WJivd2eZnucLiI6BpQjEZVZu7ZdmOpQYftOL8T8CC4x/5WMvqhUYcXtc/W xebSO0kqKvDQ== X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948820" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:07 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" , Jonathan Cameron Subject: [PATCH v5 3/9] cxl/mem: Register CXL memX devices Date: Tue, 16 Feb 2021 20:09:52 -0800 Message-Id: <20210217040958.1354670-4-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Dan Williams Create the /sys/bus/cxl hierarchy to enumerate: * Memory Devices (per-endpoint control devices) * Memory Address Space Devices (platform address ranges with interleaving, performance, and persistence attributes) * Memory Regions (active provisioned memory from an address space device that is in use as System RAM or delegated to libnvdimm as Persistent Memory regions). For now, only the per-endpoint control devices are registered on the 'cxl' bus. However, going forward it will provide a mechanism to coordinate cross-device interleave. Signed-off-by: Dan Williams Signed-off-by: Ben Widawsky Reviewed-by: Jonathan Cameron (v2) --- Documentation/ABI/testing/sysfs-bus-cxl | 26 ++ .../driver-api/cxl/memory-devices.rst | 5 + drivers/cxl/Makefile | 3 + drivers/cxl/bus.c | 29 ++ drivers/cxl/cxl.h | 3 + drivers/cxl/mem.c | 285 +++++++++++++++++- 6 files changed, 349 insertions(+), 2 deletions(-) create mode 100644 Documentation/ABI/testing/sysfs-bus-cxl create mode 100644 drivers/cxl/bus.c diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl new file mode 100644 index 000000000000..2fe7490ad6a8 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-bus-cxl @@ -0,0 +1,26 @@ +What: /sys/bus/cxl/devices/memX/firmware_version +Date: December, 2020 +KernelVersion: v5.12 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) "FW Revision" string as reported by the Identify + Memory Device Output Payload in the CXL-2.0 + specification. 
+ +What: /sys/bus/cxl/devices/memX/ram/size +Date: December, 2020 +KernelVersion: v5.12 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) "Volatile Only Capacity" as bytes. Represents the + identically named field in the Identify Memory Device Output + Payload in the CXL-2.0 specification. + +What: /sys/bus/cxl/devices/memX/pmem/size +Date: December, 2020 +KernelVersion: v5.12 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) "Persistent Only Capacity" as bytes. Represents the + identically named field in the Identify Memory Device Output + Payload in the CXL-2.0 specification. diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst index 43177e700d62..1fef2c0a167d 100644 --- a/Documentation/driver-api/cxl/memory-devices.rst +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -27,3 +27,8 @@ CXL Memory Device .. kernel-doc:: drivers/cxl/mem.c :internal: + +CXL Bus +------- +.. kernel-doc:: drivers/cxl/bus.c + :doc: cxl bus diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile index 4a30f7c3fc4a..a314a1891f4d 100644 --- a/drivers/cxl/Makefile +++ b/drivers/cxl/Makefile @@ -1,4 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_CXL_BUS) += cxl_bus.o obj-$(CONFIG_CXL_MEM) += cxl_mem.o +ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL +cxl_bus-y := bus.o cxl_mem-y := mem.o diff --git a/drivers/cxl/bus.c b/drivers/cxl/bus.c new file mode 100644 index 000000000000..58f74796d525 --- /dev/null +++ b/drivers/cxl/bus.c @@ -0,0 +1,29 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */ +#include +#include + +/** + * DOC: cxl bus + * + * The CXL bus provides namespace for control devices and a rendezvous + * point for cross-device interleave coordination. + */ +struct bus_type cxl_bus_type = { + .name = "cxl", +}; +EXPORT_SYMBOL_GPL(cxl_bus_type); + +static __init int cxl_bus_init(void) +{ + return bus_register(&cxl_bus_type); +} + +static void cxl_bus_exit(void) +{ + bus_unregister(&cxl_bus_type); +} + +module_init(cxl_bus_init); +module_exit(cxl_bus_exit); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index baac26d9e63b..8fd4a177fe25 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -57,6 +57,7 @@ (FIELD_GET(CXLMDEV_RESET_NEEDED_MASK, status) != \ CXLMDEV_RESET_NEEDED_NOT) +struct cxl_memdev; /** * struct cxl_mem - A CXL memory device * @pdev: The PCI device associated with this CXL device. @@ -74,6 +75,7 @@ struct cxl_mem { struct pci_dev *pdev; void __iomem *regs; + struct cxl_memdev *cxlmd; void __iomem *status_regs; void __iomem *mbox_regs; @@ -87,4 +89,5 @@ struct cxl_mem { struct range ram_range; }; +extern struct bus_type cxl_bus_type; #endif /* __CXL_H__ */ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 04e15759bc9b..1c0195b07063 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -1,6 +1,9 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2020 Intel Corporation. All rights reserved. */ #include +#include +#include +#include #include #include #include @@ -23,6 +26,12 @@ * - Handle and manage error conditions. 
*/ +/* + * An entire PCI topology full of devices should be enough for any + * config + */ +#define CXL_MEM_MAX_DEVS 65536 + #define cxl_doorbell_busy(cxlm) \ (readl((cxlm)->mbox_regs + CXLDEV_MBOX_CTRL_OFFSET) & \ CXLDEV_MBOX_CTRL_DOORBELL) @@ -65,6 +74,27 @@ struct mbox_cmd { #define CXL_MBOX_SUCCESS 0 }; +/** + * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device + * @dev: driver core device object + * @cdev: char dev core object for ioctl operations + * @cxlm: pointer to the parent device driver data + * @ops_active: active user of @cxlm in ops handlers + * @ops_dead: completion when all @cxlm ops users have exited + * @id: id number of this memdev instance. + */ +struct cxl_memdev { + struct device dev; + struct cdev cdev; + struct cxl_mem *cxlm; + struct percpu_ref ops_active; + struct completion ops_dead; + int id; +}; + +static int cxl_mem_major; +static DEFINE_IDA(cxl_memdev_ida); + static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm) { const unsigned long start = jiffies; @@ -294,6 +324,33 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm) mutex_unlock(&cxlm->mbox_mutex); } +static long cxl_memdev_ioctl(struct file *file, unsigned int cmd, + unsigned long arg) +{ + struct cxl_memdev *cxlmd; + struct inode *inode; + int rc = -ENOTTY; + + inode = file_inode(file); + cxlmd = container_of(inode->i_cdev, typeof(*cxlmd), cdev); + + if (!percpu_ref_tryget_live(&cxlmd->ops_active)) + return -ENXIO; + + /* TODO: ioctl body */ + + percpu_ref_put(&cxlmd->ops_active); + + return rc; +} + +static const struct file_operations cxl_memdev_fops = { + .owner = THIS_MODULE, + .unlocked_ioctl = cxl_memdev_ioctl, + .compat_ioctl = compat_ptr_ioctl, + .llseek = noop_llseek, +}; + /** * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device. * @cxlm: The CXL memory device to communicate with. 
@@ -512,6 +569,197 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) return 0; } +static struct cxl_memdev *to_cxl_memdev(struct device *dev) +{ + return container_of(dev, struct cxl_memdev, dev); +} + +static void cxl_memdev_release(struct device *dev) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + + percpu_ref_exit(&cxlmd->ops_active); + ida_free(&cxl_memdev_ida, cxlmd->id); + kfree(cxlmd); +} + +static char *cxl_memdev_devnode(struct device *dev, umode_t *mode, kuid_t *uid, + kgid_t *gid) +{ + return kasprintf(GFP_KERNEL, "cxl/%s", dev_name(dev)); +} + +static ssize_t firmware_version_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + + return sprintf(buf, "%.16s\n", cxlm->firmware_version); +} +static DEVICE_ATTR_RO(firmware_version); + +static ssize_t payload_max_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + + return sprintf(buf, "%zu\n", cxlm->payload_size); +} +static DEVICE_ATTR_RO(payload_max); + +static ssize_t ram_size_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + unsigned long long len = range_len(&cxlm->ram_range); + + return sprintf(buf, "%#llx\n", len); +} + +static struct device_attribute dev_attr_ram_size = + __ATTR(size, 0444, ram_size_show, NULL); + +static ssize_t pmem_size_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + unsigned long long len = range_len(&cxlm->pmem_range); + + return sprintf(buf, "%#llx\n", len); +} + +static struct device_attribute dev_attr_pmem_size = + __ATTR(size, 0444, pmem_size_show, NULL); + +static struct attribute *cxl_memdev_attributes[] = { + &dev_attr_firmware_version.attr, + &dev_attr_payload_max.attr, + NULL, +}; + +static struct attribute *cxl_memdev_pmem_attributes[] = { + &dev_attr_pmem_size.attr, + NULL, +}; + +static struct attribute *cxl_memdev_ram_attributes[] = { + &dev_attr_ram_size.attr, + NULL, +}; + +static struct attribute_group cxl_memdev_attribute_group = { + .attrs = cxl_memdev_attributes, +}; + +static struct attribute_group cxl_memdev_ram_attribute_group = { + .name = "ram", + .attrs = cxl_memdev_ram_attributes, +}; + +static struct attribute_group cxl_memdev_pmem_attribute_group = { + .name = "pmem", + .attrs = cxl_memdev_pmem_attributes, +}; + +static const struct attribute_group *cxl_memdev_attribute_groups[] = { + &cxl_memdev_attribute_group, + &cxl_memdev_ram_attribute_group, + &cxl_memdev_pmem_attribute_group, + NULL, +}; + +static const struct device_type cxl_memdev_type = { + .name = "cxl_memdev", + .release = cxl_memdev_release, + .devnode = cxl_memdev_devnode, + .groups = cxl_memdev_attribute_groups, +}; + +static void cxlmdev_unregister(void *_cxlmd) +{ + struct cxl_memdev *cxlmd = _cxlmd; + struct device *dev = &cxlmd->dev; + + percpu_ref_kill(&cxlmd->ops_active); + cdev_device_del(&cxlmd->cdev, dev); + wait_for_completion(&cxlmd->ops_dead); + cxlmd->cxlm = NULL; + put_device(dev); +} + +static void cxlmdev_ops_active_release(struct percpu_ref *ref) +{ + struct cxl_memdev *cxlmd = + container_of(ref, typeof(*cxlmd), ops_active); + + complete(&cxlmd->ops_dead); +} + +static int cxl_mem_add_memdev(struct cxl_mem *cxlm) +{ + struct pci_dev *pdev = 
cxlm->pdev; + struct cxl_memdev *cxlmd; + struct device *dev; + struct cdev *cdev; + int rc; + + cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL); + if (!cxlmd) + return -ENOMEM; + init_completion(&cxlmd->ops_dead); + + /* + * @cxlm is deallocated when the driver unbinds so operations + * that are using it need to hold a live reference. + */ + cxlmd->cxlm = cxlm; + rc = percpu_ref_init(&cxlmd->ops_active, cxlmdev_ops_active_release, 0, + GFP_KERNEL); + if (rc) + goto err_ref; + + rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL); + if (rc < 0) + goto err_id; + cxlmd->id = rc; + + dev = &cxlmd->dev; + device_initialize(dev); + dev->parent = &pdev->dev; + dev->bus = &cxl_bus_type; + dev->devt = MKDEV(cxl_mem_major, cxlmd->id); + dev->type = &cxl_memdev_type; + dev_set_name(dev, "mem%d", cxlmd->id); + + cdev = &cxlmd->cdev; + cdev_init(cdev, &cxl_memdev_fops); + + rc = cdev_device_add(cdev, dev); + if (rc) + goto err_add; + + return devm_add_action_or_reset(dev->parent, cxlmdev_unregister, cxlmd); + +err_add: + ida_free(&cxl_memdev_ida, cxlmd->id); +err_id: + /* + * Theoretically userspace could have already entered the fops, + * so flush ops_active. + */ + percpu_ref_kill(&cxlmd->ops_active); + wait_for_completion(&cxlmd->ops_dead); + percpu_ref_exit(&cxlmd->ops_active); +err_ref: + kfree(cxlmd); + + return rc; +} + /** * cxl_mem_identify() - Send the IDENTIFY command to the device. * @cxlm: The device to identify. @@ -612,7 +860,11 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; - return cxl_mem_identify(cxlm); + rc = cxl_mem_identify(cxlm); + if (rc) + return rc; + + return cxl_mem_add_memdev(cxlm); } static const struct pci_device_id cxl_mem_pci_tbl[] = { @@ -631,5 +883,34 @@ static struct pci_driver cxl_mem_driver = { }, }; +static __init int cxl_mem_init(void) +{ + dev_t devt; + int rc; + + rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl"); + if (rc) + return rc; + + cxl_mem_major = MAJOR(devt); + + rc = pci_register_driver(&cxl_mem_driver); + if (rc) { + unregister_chrdev_region(MKDEV(cxl_mem_major, 0), + CXL_MEM_MAX_DEVS); + return rc; + } + + return 0; +} + +static __exit void cxl_mem_exit(void) +{ + pci_unregister_driver(&cxl_mem_driver); + unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS); +} + MODULE_LICENSE("GPL v2"); -module_pci_driver(cxl_mem_driver); +module_init(cxl_mem_init); +module_exit(cxl_mem_exit); +MODULE_IMPORT_NS(CXL); From patchwork Wed Feb 17 04:09:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188245 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 36682C432C3 for ; Wed, 17 Feb 2021 04:11:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0ED1964DDA for ; Wed, 17 Feb 2021 04:11:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231247AbhBQELG (ORCPT ); Tue, 16 Feb 2021 23:11:06 -0500 Received: from mga07.intel.com 
([134.134.136.100]:21744 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231216AbhBQELE (ORCPT ); Tue, 16 Feb 2021 23:11:04 -0500 IronPort-SDR: 8HHwFKctbP6zrL/dwDdxn5uL2qfneB9TbAnZ5kuQWOKPxTd9uIVYalE2NP4zEKJlVmwy35oStP 0LH3kE2iASBg== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165922" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165922" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:10 -0800 IronPort-SDR: Tu0ZIPsTWkBKxshEcsXQl23tXWAaBg3cyAorG0THP669xVH8akM9vRLrWShhl4fOn5OJhA4c0v j60v2FX+kVPw== X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948834" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:09 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" , kernel test robot , Stephen Rothwell , Al Viro Subject: [PATCH v5 4/9] cxl/mem: Add basic IOCTL interface Date: Tue, 16 Feb 2021 20:09:53 -0800 Message-Id: <20210217040958.1354670-5-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Add a straightforward IOCTL that provides a mechanism for userspace to query the supported memory device commands. CXL commands as they appear to userspace are described as part of the UAPI kerneldoc. The command list returned via this IOCTL will contain the full set of commands that the driver supports; however, some of those commands may not be available for use by userspace. Memory device commands first appear in the CXL 2.0 specification. They are submitted through a mailbox mechanism specified in the CXL 2.0 specification. The send command allows userspace to issue mailbox commands directly to the hardware. The list of available commands to send is the output of the query command. The driver verifies basic properties of the command and possibly inspects the input (or output) payload to determine whether or not the command is allowed (or might taint the kernel).
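
As a usage illustration only (not part of this patch), a userspace consumer of the query interface might look roughly like the sketch below. It assumes the CXL_MEM_QUERY_COMMANDS ioctl, struct cxl_mem_query_commands, and struct cxl_command_info added to include/uapi/linux/cxl_mem.h by this patch, that setting n_commands to 0 asks the driver to report how many commands it supports, and that the character device shows up as /dev/cxl/mem0 per the devnode callback from the earlier memdev patch:

/* Hypothetical example program; names follow the UAPI assumed above. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/cxl_mem.h>

int main(void)
{
	struct cxl_mem_query_commands *query;
	unsigned int i, n;
	int fd;

	fd = open("/dev/cxl/mem0", O_RDWR);
	if (fd < 0)
		return 1;

	/* First pass: n_commands == 0 only asks how many commands exist */
	query = calloc(1, sizeof(*query));
	if (!query || ioctl(fd, CXL_MEM_QUERY_COMMANDS, query) < 0)
		goto out;
	n = query->n_commands;
	free(query);

	/* Second pass: fetch the command info array itself */
	query = calloc(1, sizeof(*query) + n * sizeof(query->commands[0]));
	if (!query)
		goto out;
	query->n_commands = n;
	if (ioctl(fd, CXL_MEM_QUERY_COMMANDS, query) == 0)
		for (i = 0; i < query->n_commands; i++)
			printf("command %u: size_in %d size_out %d\n",
			       query->commands[i].id,
			       query->commands[i].size_in,
			       query->commands[i].size_out);
out:
	free(query);
	close(fd);
	return 0;
}

A subsequent CXL_MEM_SEND_COMMAND call would then reference one of the returned command ids along with its input and output payload pointers.
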
Reported-by: kernel test robot # bug in earlier revision Reported-by: Stephen Rothwell Cc: Al Viro Signed-off-by: Ben Widawsky Reviewed-by: Dan Williams (v2) Reviewed-by: Jonathan Cameron --- .clang-format | 1 + .../driver-api/cxl/memory-devices.rst | 12 + .../userspace-api/ioctl/ioctl-number.rst | 1 + drivers/cxl/mem.c | 283 +++++++++++++++++- include/uapi/linux/cxl_mem.h | 156 ++++++++++ 5 files changed, 452 insertions(+), 1 deletion(-) create mode 100644 include/uapi/linux/cxl_mem.h diff --git a/.clang-format b/.clang-format index 10dc5a9a61b3..3f11c8901b43 100644 --- a/.clang-format +++ b/.clang-format @@ -109,6 +109,7 @@ ForEachMacros: - 'css_for_each_child' - 'css_for_each_descendant_post' - 'css_for_each_descendant_pre' + - 'cxl_for_each_cmd' - 'device_for_each_child_node' - 'dma_fence_chain_for_each' - 'do_for_each_ftrace_op' diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst index 1fef2c0a167d..1bad466f9167 100644 --- a/Documentation/driver-api/cxl/memory-devices.rst +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -32,3 +32,15 @@ CXL Bus ------- .. kernel-doc:: drivers/cxl/bus.c :doc: cxl bus + +External Interfaces +=================== + +CXL IOCTL Interface +------------------- + +.. kernel-doc:: include/uapi/linux/cxl_mem.h + :doc: UAPI + +.. kernel-doc:: include/uapi/linux/cxl_mem.h + :internal: diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst index a4c75a28c839..6eb8e634664d 100644 --- a/Documentation/userspace-api/ioctl/ioctl-number.rst +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst @@ -352,6 +352,7 @@ Code Seq# Include File Comments 0xCC 00-0F drivers/misc/ibmvmc.h pseries VMC driver 0xCD 01 linux/reiserfs_fs.h +0xCE 01-02 uapi/linux/cxl_mem.h Compute Express Link Memory Devices 0xCF 02 fs/cifs/ioctl.c 0xDB 00-0F drivers/char/mwave/mwavepub.h 0xDD 00-3F ZFCP device driver see drivers/s390/scsi/ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 1c0195b07063..aa8f843fcca1 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2020 Intel Corporation. All rights reserved. */ +#include #include #include #include @@ -40,6 +41,7 @@ #define CXL_MAILBOX_TIMEOUT_MS (2 * HZ) enum opcode { + CXL_MBOX_OP_INVALID = 0x0000, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_MAX = 0x10000 }; @@ -95,6 +97,49 @@ struct cxl_memdev { static int cxl_mem_major; static DEFINE_IDA(cxl_memdev_ida); +/** + * struct cxl_mem_command - Driver representation of a memory device command + * @info: Command information as it exists for the UAPI + * @opcode: The actual bits used for the mailbox protocol + * + * The cxl_mem_command is the driver's internal representation of commands that + * are supported by the driver. Some of these commands may not be supported by + * the hardware. The driver will use @info to validate the fields passed in by + * the user then submit the @opcode to the hardware. + * + * See struct cxl_command_info. + */ +struct cxl_mem_command { + struct cxl_command_info info; + enum opcode opcode; +}; + +#define CXL_CMD(_id, sin, sout) \ + [CXL_MEM_COMMAND_ID_##_id] = { \ + .info = { \ + .id = CXL_MEM_COMMAND_ID_##_id, \ + .size_in = sin, \ + .size_out = sout, \ + }, \ + .opcode = CXL_MBOX_OP_##_id, \ + } + +/* + * This table defines the supported mailbox commands for the driver. This table + * is made up of a UAPI structure. 
Non-negative values as parameters in the + * table will be validated against the user's input. For example, if size_in is + * 0, and the user passed in 1, it is an error. + */ +static struct cxl_mem_command mem_commands[] = { + CXL_CMD(IDENTIFY, 0, 0x43), +}; + +#define cxl_for_each_cmd(cmd) \ + for ((cmd) = &mem_commands[0]; \ + ((cmd) - mem_commands) < ARRAY_SIZE(mem_commands); (cmd)++) + +#define cxl_cmd_count ARRAY_SIZE(mem_commands) + static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm) { const unsigned long start = jiffies; @@ -324,6 +369,242 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm) mutex_unlock(&cxlm->mbox_mutex); } +/** + * handle_mailbox_cmd_from_user() - Dispatch a mailbox command for userspace. + * @cxlm: The CXL memory device to communicate with. + * @cmd: The validated command. + * @in_payload: Pointer to userspace's input payload. + * @out_payload: Pointer to userspace's output payload. + * @size_out: (Input) Max payload size to copy out. + * (Output) Payload size hardware generated. + * @retval: Hardware generated return code from the operation. + * + * Return: + * * %0 - Mailbox transaction succeeded. This implies the mailbox + * protocol completed successfully not that the operation itself + * was successful. + * * %-ENOMEM - Couldn't allocate a bounce buffer. + * * %-EFAULT - Something happened with copy_to/from_user. + * * %-EINTR - Mailbox acquisition interrupted. + * * %-EXXX - Transaction level failures. + * + * Creates the appropriate mailbox command and dispatches it on behalf of a + * userspace request. The input and output payloads are copied between + * userspace. + * + * See cxl_send_cmd(). + */ +static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm, + const struct cxl_mem_command *cmd, + u64 in_payload, u64 out_payload, + s32 *size_out, u32 *retval) +{ + struct device *dev = &cxlm->pdev->dev; + struct mbox_cmd mbox_cmd = { + .opcode = cmd->opcode, + .size_in = cmd->info.size_in, + .size_out = cmd->info.size_out, + }; + int rc; + + if (cmd->info.size_out) { + mbox_cmd.payload_out = kvzalloc(cmd->info.size_out, GFP_KERNEL); + if (!mbox_cmd.payload_out) + return -ENOMEM; + } + + if (cmd->info.size_in) { + mbox_cmd.payload_in = vmemdup_user(u64_to_user_ptr(in_payload), + cmd->info.size_in); + if (IS_ERR(mbox_cmd.payload_in)) + return PTR_ERR(mbox_cmd.payload_in); + } + + rc = cxl_mem_mbox_get(cxlm); + if (rc) + goto out; + + dev_dbg(dev, + "Submitting %s command for user\n" + "\topcode: %x\n" + "\tsize: %ub\n", + cxl_command_names[cmd->info.id].name, mbox_cmd.opcode, + cmd->info.size_in); + + rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd); + cxl_mem_mbox_put(cxlm); + if (rc) + goto out; + + /* + * @size_out contains the max size that's allowed to be written back out + * to userspace. While the payload may have written more output than + * this it will have to be ignored. + */ + if (mbox_cmd.size_out) { + dev_WARN_ONCE(dev, mbox_cmd.size_out > *size_out, + "Invalid return size\n"); + if (copy_to_user(u64_to_user_ptr(out_payload), + mbox_cmd.payload_out, mbox_cmd.size_out)) { + rc = -EFAULT; + goto out; + } + } + + *size_out = mbox_cmd.size_out; + *retval = mbox_cmd.return_code; + +out: + kvfree(mbox_cmd.payload_in); + kvfree(mbox_cmd.payload_out); + return rc; +} + +/** + * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND. + * @cxlm: &struct cxl_mem device whose mailbox will be used. + * @send_cmd: &struct cxl_send_command copied in from userspace. + * @out_cmd: Sanitized and populated &struct cxl_mem_command. 
+ * + * Return: + * * %0 - @out_cmd is ready to send. + * * %-ENOTTY - Invalid command specified. + * * %-EINVAL - Reserved fields or invalid values were used. + * * %-ENOMEM - Input or output buffer wasn't sized properly. + * + * The result of this command is a fully validated command in @out_cmd that is + * safe to send to the hardware. + * + * See handle_mailbox_cmd_from_user() + */ +static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm, + const struct cxl_send_command *send_cmd, + struct cxl_mem_command *out_cmd) +{ + const struct cxl_command_info *info; + struct cxl_mem_command *c; + + if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX) + return -ENOTTY; + + /* + * The user can never specify an input payload larger than what hardware + * supports, but output can be arbitrarily large (simply write out as + * much data as the hardware provides). + */ + if (send_cmd->in.size > cxlm->payload_size) + return -EINVAL; + + if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK) + return -EINVAL; + + if (send_cmd->rsvd) + return -EINVAL; + + if (send_cmd->in.rsvd || send_cmd->out.rsvd) + return -EINVAL; + + /* Convert user's command into the internal representation */ + c = &mem_commands[send_cmd->id]; + info = &c->info; + + /* Check the input buffer is the expected size */ + if (info->size_in >= 0 && info->size_in != send_cmd->in.size) + return -ENOMEM; + + /* Check the output buffer is at least large enough */ + if (info->size_out >= 0 && send_cmd->out.size < info->size_out) + return -ENOMEM; + + memcpy(out_cmd, c, sizeof(*c)); + out_cmd->info.size_in = send_cmd->in.size; + /* + * XXX: out_cmd->info.size_out will be controlled by the driver, and the + * specified number of bytes @send_cmd->out.size will be copied back out + * to userspace. + */ + + return 0; +} + +static int cxl_query_cmd(struct cxl_memdev *cxlmd, + struct cxl_mem_query_commands __user *q) +{ + struct device *dev = &cxlmd->dev; + struct cxl_mem_command *cmd; + u32 n_commands; + int j = 0; + + dev_dbg(dev, "Query IOCTL\n"); + + if (get_user(n_commands, &q->n_commands)) + return -EFAULT; + + /* returns the total number if 0 elements are requested. */ + if (n_commands == 0) + return put_user(cxl_cmd_count, &q->n_commands); + + /* + * otherwise, return max(n_commands, total commands) cxl_command_info + * structures. 
+ */ + cxl_for_each_cmd(cmd) { + const struct cxl_command_info *info = &cmd->info; + + if (copy_to_user(&q->commands[j++], info, sizeof(*info))) + return -EFAULT; + + if (j == n_commands) + break; + } + + return 0; +} + +static int cxl_send_cmd(struct cxl_memdev *cxlmd, + struct cxl_send_command __user *s) +{ + struct cxl_mem *cxlm = cxlmd->cxlm; + struct device *dev = &cxlmd->dev; + struct cxl_send_command send; + struct cxl_mem_command c; + int rc; + + dev_dbg(dev, "Send IOCTL\n"); + + if (copy_from_user(&send, s, sizeof(send))) + return -EFAULT; + + rc = cxl_validate_cmd_from_user(cxlmd->cxlm, &send, &c); + if (rc) + return rc; + + /* Prepare to handle a full payload for variable sized output */ + if (c.info.size_out < 0) + c.info.size_out = cxlm->payload_size; + + rc = handle_mailbox_cmd_from_user(cxlm, &c, send.in.payload, + send.out.payload, &send.out.size, + &send.retval); + if (rc) + return rc; + + return copy_to_user(s, &send, sizeof(send)); +} + +static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd, + unsigned long arg) +{ + switch (cmd) { + case CXL_MEM_QUERY_COMMANDS: + return cxl_query_cmd(cxlmd, (void __user *)arg); + case CXL_MEM_SEND_COMMAND: + return cxl_send_cmd(cxlmd, (void __user *)arg); + default: + return -ENOTTY; + } +} + static long cxl_memdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -337,7 +618,7 @@ static long cxl_memdev_ioctl(struct file *file, unsigned int cmd, if (!percpu_ref_tryget_live(&cxlmd->ops_active)) return -ENXIO; - /* TODO: ioctl body */ + rc = __cxl_memdev_ioctl(cxlmd, cmd, arg); percpu_ref_put(&cxlmd->ops_active); diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h new file mode 100644 index 000000000000..887781dc3b6c --- /dev/null +++ b/include/uapi/linux/cxl_mem.h @@ -0,0 +1,156 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* + * CXL IOCTLs for Memory Devices + */ + +#ifndef _UAPI_CXL_MEM_H_ +#define _UAPI_CXL_MEM_H_ + +#include + +/** + * DOC: UAPI + * + * Not all of all commands that the driver supports are always available for use + * by userspace. Userspace must check the results from the QUERY command in + * order to determine the live set of commands. + */ + +#define CXL_MEM_QUERY_COMMANDS _IOR(0xCE, 1, struct cxl_mem_query_commands) +#define CXL_MEM_SEND_COMMAND _IOWR(0xCE, 2, struct cxl_send_command) + +#define CXL_CMDS \ + ___C(INVALID, "Invalid Command"), \ + ___C(IDENTIFY, "Identify Command"), \ + ___C(MAX, "invalid / last command") + +#define ___C(a, b) CXL_MEM_COMMAND_ID_##a +enum { CXL_CMDS }; + +#undef ___C +#define ___C(a, b) { b } +static const struct { + const char *name; +} cxl_command_names[] = { CXL_CMDS }; + +/* + * Here's how this actually breaks out: + * cxl_command_names[] = { + * [CXL_MEM_COMMAND_ID_INVALID] = { "Invalid Command" }, + * [CXL_MEM_COMMAND_ID_IDENTIFY] = { "Identify Command" }, + * ... + * [CXL_MEM_COMMAND_ID_MAX] = { "invalid / last command" }, + * }; + */ + +#undef ___C + +/** + * struct cxl_command_info - Command information returned from a query. + * @id: ID number for the command. + * @flags: Flags that specify command behavior. + * @size_in: Expected input size, or -1 if variable length. + * @size_out: Expected output size, or -1 if variable length. + * + * Represents a single command that is supported by both the driver and the + * hardware. This is returned as part of an array from the query ioctl. The + * following would be a command that takes a variable length input and returns 0 + * bytes of output. 
+ * + * - @id = 10 + * - @flags = 0 + * - @size_in = -1 + * - @size_out = 0 + * + * See struct cxl_mem_query_commands. + */ +struct cxl_command_info { + __u32 id; + + __u32 flags; +#define CXL_MEM_COMMAND_FLAG_MASK GENMASK(0, 0) + + __s32 size_in; + __s32 size_out; +}; + +/** + * struct cxl_mem_query_commands - Query supported commands. + * @n_commands: In/out parameter. When @n_commands is > 0, the driver will + * return min(num_support_commands, n_commands). When @n_commands + * is 0, driver will return the number of total supported commands. + * @rsvd: Reserved for future use. + * @commands: Output array of supported commands. This array must be allocated + * by userspace to be at least min(num_support_commands, @n_commands) + * + * Allow userspace to query the available commands supported by both the driver, + * and the hardware. Commands that aren't supported by either the driver, or the + * hardware are not returned in the query. + * + * Examples: + * + * - { .n_commands = 0 } // Get number of supported commands + * - { .n_commands = 15, .commands = buf } // Return first 15 (or less) + * supported commands + * + * See struct cxl_command_info. + */ +struct cxl_mem_query_commands { + /* + * Input: Number of commands to return (space allocated by user) + * Output: Number of commands supported by the driver/hardware + * + * If n_commands is 0, kernel will only return number of commands and + * not try to populate commands[], thus allowing userspace to know how + * much space to allocate + */ + __u32 n_commands; + __u32 rsvd; + + struct cxl_command_info __user commands[]; /* out: supported commands */ +}; + +/** + * struct cxl_send_command - Send a command to a memory device. + * @id: The command to send to the memory device. This must be one of the + * commands returned by the query command. + * @flags: Flags for the command (input). + * @rsvd: Must be zero. + * @retval: Return value from the memory device (output). + * @in: Parameters associated with input payload. + * @in.size: Size of the payload to provide to the device (input). + * @in.rsvd: Must be zero. + * @in.payload: Pointer to memory for payload input, payload is little endian. + * @out: Parameters associated with output payload. + * @out.size: Size of the payload received from the device (input/output). This + * field is filled in by userspace to let the driver know how much + * space was allocated for output. It is populated by the driver to + * let userspace know how large the output payload actually was. + * @out.rsvd: Must be zero. + * @out.payload: Pointer to memory for payload output, payload is little endian. + * + * Mechanism for userspace to send a command to the hardware for processing. The + * driver will do basic validation on the command sizes. In some cases even the + * payload may be introspected. Userspace is required to allocate large enough + * buffers for size_out which can be variable length in certain situations. 
+ */ +struct cxl_send_command { + __u32 id; + __u32 flags; + __u32 rsvd; + __u32 retval; + + struct { + __s32 size; + __u32 rsvd; + __u64 payload; + } in; + + struct { + __s32 size; + __u32 rsvd; + __u64 payload; + } out; +}; + +#endif From patchwork Wed Feb 17 04:09:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188247 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 329E9C433E0 for ; Wed, 17 Feb 2021 04:11:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DE19761490 for ; Wed, 17 Feb 2021 04:11:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231278AbhBQELa (ORCPT ); Tue, 16 Feb 2021 23:11:30 -0500 Received: from mga07.intel.com ([134.134.136.100]:21746 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231256AbhBQELL (ORCPT ); Tue, 16 Feb 2021 23:11:11 -0500 IronPort-SDR: uh9HOx1AaHjIrlNXUoP88qP1L4nVE038hhtixAEO8Hjh1nuLJ6sJprXxcLY/tBJal1bXEqV5PE B2kFOGXj9jnw== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165928" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165928" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:12 -0800 IronPort-SDR: 1k30KkxJ8ahUZ/OSrDjaO6VZuPmTdAgE4YP1JrB4/rl28CcK/dPf5aRXNGFh7+/+DLtHdTuSWS 8G3pChbe5Srw== X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948853" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:10 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" , Ariel Sibley , Jonathan Cameron Subject: [PATCH v5 5/9] cxl/mem: Add a "RAW" send command Date: Tue, 16 Feb 2021 20:09:54 -0800 Message-Id: <20210217040958.1354670-6-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org The CXL memory device send interface will have a number of supported commands. The raw command is not such a command. Raw commands allow userspace to send a specified opcode to the underlying hardware and bypass all driver checks on the command. The primary use for this command is to [begrudgingly] allow undocumented vendor specific hardware commands. 
While not the main motivation, it also allows prototyping new hardware commands without a driver patch and rebuild. While this all sounds very powerful it comes with a couple of caveats: 1. Bug reports using raw commands will not get the same level of attention as bug reports using supported commands (via taint). 2. Supported commands will be rejected by the RAW command. With this comes new debugfs knob to allow full access to your toes with your weapon of choice. Cc: Ariel Sibley Signed-off-by: Ben Widawsky Reviewed-by: Dan Williams (v2) Reviewed-by: Jonathan Cameron Reviewed-by: Konrad Rzeszutek Wilk --- drivers/cxl/Kconfig | 18 +++++ drivers/cxl/mem.c | 132 +++++++++++++++++++++++++++++++++++ include/uapi/linux/cxl_mem.h | 12 +++- 3 files changed, 161 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 9e80b311e928..97dc4d751651 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -32,4 +32,22 @@ config CXL_MEM Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification. If unsure say 'm'. + +config CXL_MEM_RAW_COMMANDS + bool "RAW Command Interface for Memory Devices" + depends on CXL_MEM + help + Enable CXL RAW command interface. + + The CXL driver ioctl interface may assign a kernel ioctl command + number for each specification defined opcode. At any given point in + time the number of opcodes that the specification defines and a device + may implement may exceed the kernel's set of associated ioctl function + numbers. The mismatch is either by omission, specification is too new, + or by design. When prototyping new hardware, or developing / debugging + the driver it is useful to be able to submit any possible command to + the hardware, even commands that may crash the kernel due to their + potential impact to memory currently in use by the kernel. + + If developing CXL hardware or the driver say Y, otherwise say N. endif diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index aa8f843fcca1..5319412e245c 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -1,6 +1,8 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2020 Intel Corporation. All rights reserved. */ #include +#include +#include #include #include #include @@ -42,7 +44,14 @@ enum opcode { CXL_MBOX_OP_INVALID = 0x0000, + CXL_MBOX_OP_RAW = CXL_MBOX_OP_INVALID, + CXL_MBOX_OP_ACTIVATE_FW = 0x0202, CXL_MBOX_OP_IDENTIFY = 0x4000, + CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, + CXL_MBOX_OP_SET_LSA = 0x4103, + CXL_MBOX_OP_SET_SHUTDOWN_STATE = 0x4204, + CXL_MBOX_OP_SCAN_MEDIA = 0x4304, + CXL_MBOX_OP_GET_SCAN_MEDIA = 0x4305, CXL_MBOX_OP_MAX = 0x10000 }; @@ -96,6 +105,8 @@ struct cxl_memdev { static int cxl_mem_major; static DEFINE_IDA(cxl_memdev_ida); +static struct dentry *cxl_debugfs; +static bool cxl_raw_allow_all; /** * struct cxl_mem_command - Driver representation of a memory device command @@ -132,6 +143,49 @@ struct cxl_mem_command { */ static struct cxl_mem_command mem_commands[] = { CXL_CMD(IDENTIFY, 0, 0x43), +#ifdef CONFIG_CXL_MEM_RAW_COMMANDS + CXL_CMD(RAW, ~0, ~0), +#endif +}; + +/* + * Commands that RAW doesn't permit. The rationale for each: + * + * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment / + * coordination of transaction timeout values at the root bridge level. + * + * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live + * and needs to be coordinated with HDM updates. + * + * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the + * driver and any writes from userspace invalidates those contents. 
+ * + * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes + * to the device after it is marked clean, userspace can not make that + * assertion. + * + * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that + * is kept up to date with patrol notifications and error management. + */ +static u16 cxl_disabled_raw_commands[] = { + CXL_MBOX_OP_ACTIVATE_FW, + CXL_MBOX_OP_SET_PARTITION_INFO, + CXL_MBOX_OP_SET_LSA, + CXL_MBOX_OP_SET_SHUTDOWN_STATE, + CXL_MBOX_OP_SCAN_MEDIA, + CXL_MBOX_OP_GET_SCAN_MEDIA, +}; + +/* + * Command sets that RAW doesn't permit. All opcodes in this set are + * disabled because they pass plain text security payloads over the + * user/kernel boundary. This functionality is intended to be wrapped + * behind the keys ABI which allows for encrypted payloads in the UAPI + */ +static u8 security_command_sets[] = { + 0x44, /* Sanitize */ + 0x45, /* Persistent Memory Data-at-rest Security */ + 0x46, /* Security Passthrough */ }; #define cxl_for_each_cmd(cmd) \ @@ -162,6 +216,16 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm) return 0; } +static bool cxl_is_security_command(u16 opcode) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(security_command_sets); i++) + if (security_command_sets[i] == (opcode >> 8)) + return true; + return false; +} + static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm, struct mbox_cmd *mbox_cmd) { @@ -431,6 +495,9 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm, cxl_command_names[cmd->info.id].name, mbox_cmd.opcode, cmd->info.size_in); + dev_WARN_ONCE(dev, cmd->info.id == CXL_MEM_COMMAND_ID_RAW, + "raw command path used\n"); + rc = __cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd); cxl_mem_mbox_put(cxlm); if (rc) @@ -460,6 +527,29 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm, return rc; } +static bool cxl_mem_raw_command_allowed(u16 opcode) +{ + int i; + + if (!IS_ENABLED(CONFIG_CXL_MEM_RAW_COMMANDS)) + return false; + + if (security_locked_down(LOCKDOWN_NONE)) + return false; + + if (cxl_raw_allow_all) + return true; + + if (cxl_is_security_command(opcode)) + return false; + + for (i = 0; i < ARRAY_SIZE(cxl_disabled_raw_commands); i++) + if (cxl_disabled_raw_commands[i] == opcode) + return false; + + return true; +} + /** * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND. * @cxlm: &struct cxl_mem device whose mailbox will be used. @@ -471,6 +561,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_mem *cxlm, * * %-ENOTTY - Invalid command specified. * * %-EINVAL - Reserved fields or invalid values were used. * * %-ENOMEM - Input or output buffer wasn't sized properly. + * * %-EPERM - Attempted to use a protected command. * * The result of this command is a fully validated command in @out_cmd that is * safe to send to the hardware. @@ -495,6 +586,40 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm, if (send_cmd->in.size > cxlm->payload_size) return -EINVAL; + /* + * Checks are bypassed for raw commands but a WARN/taint will occur + * later in the callchain + */ + if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) { + const struct cxl_mem_command temp = { + .info = { + .id = CXL_MEM_COMMAND_ID_RAW, + .flags = 0, + .size_in = send_cmd->in.size, + .size_out = send_cmd->out.size, + }, + .opcode = send_cmd->raw.opcode + }; + + if (send_cmd->raw.rsvd) + return -EINVAL; + + /* + * Unlike supported commands, the output size of RAW commands + * gets passed along without further checking, so it must be + * validated here. 
+ */ + if (send_cmd->out.size > cxlm->payload_size) + return -EINVAL; + + if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode)) + return -EPERM; + + memcpy(out_cmd, &temp, sizeof(temp)); + + return 0; + } + if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK) return -EINVAL; @@ -1166,6 +1291,7 @@ static struct pci_driver cxl_mem_driver = { static __init int cxl_mem_init(void) { + struct dentry *mbox_debugfs; dev_t devt; int rc; @@ -1182,11 +1308,17 @@ static __init int cxl_mem_init(void) return rc; } + cxl_debugfs = debugfs_create_dir("cxl", NULL); + mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs); + debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs, + &cxl_raw_allow_all); + return 0; } static __exit void cxl_mem_exit(void) { + debugfs_remove_recursive(cxl_debugfs); pci_unregister_driver(&cxl_mem_driver); unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS); } diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 887781dc3b6c..c316028730e7 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -22,6 +22,7 @@ #define CXL_CMDS \ ___C(INVALID, "Invalid Command"), \ ___C(IDENTIFY, "Identify Command"), \ + ___C(RAW, "Raw device command"), \ ___C(MAX, "invalid / last command") #define ___C(a, b) CXL_MEM_COMMAND_ID_##a @@ -115,6 +116,9 @@ struct cxl_mem_query_commands { * @id: The command to send to the memory device. This must be one of the * commands returned by the query command. * @flags: Flags for the command (input). + * @raw: Special fields for raw commands + * @raw.opcode: Opcode passed to hardware when using the RAW command. + * @raw.rsvd: Must be zero. * @rsvd: Must be zero. * @retval: Return value from the memory device (output). * @in: Parameters associated with input payload. 
@@ -137,7 +141,13 @@ struct cxl_mem_query_commands { struct cxl_send_command { __u32 id; __u32 flags; - __u32 rsvd; + union { + struct { + __u16 opcode; + __u16 rsvd; + } raw; + __u32 rsvd; + }; __u32 retval; struct { From patchwork Wed Feb 17 04:09:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188249 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0AEFEC433E9 for ; Wed, 17 Feb 2021 04:11:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C06A764DF3 for ; Wed, 17 Feb 2021 04:11:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231287AbhBQELc (ORCPT ); Tue, 16 Feb 2021 23:11:32 -0500 Received: from mga07.intel.com ([134.134.136.100]:21743 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230520AbhBQELV (ORCPT ); Tue, 16 Feb 2021 23:11:21 -0500 IronPort-SDR: 1fjPmPpee8BPj3vsLioBf5YVceKiaYQ/U3KaLRFmWSiq0wqI4yhW7SpcWRvyfOH3hKVUiRU6Sp Vet/HHLsERKA== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165930" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165930" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:13 -0800 IronPort-SDR: cYLQnU3Z7GfhZ82ekd0x+BR5w2IdzP1XAxSUHrRobj7ETuaHyxi0t03VWiwjdzbce1TMcsXZe/ OvjaGKs8QZBA== X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948861" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:12 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" , Jonathan Cameron Subject: [PATCH v5 6/9] cxl/mem: Enable commands via CEL Date: Tue, 16 Feb 2021 20:09:55 -0800 Message-Id: <20210217040958.1354670-7-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org CXL devices identified by the memory-device class code must implement the Device Command Interface (described in 8.2.9 of the CXL 2.0 spec). While the driver already maintains a list of commands it supports, there is still a need to be able to distinguish between commands that the driver knows about from commands that are optionally supported by the hardware. The Command Effects Log (CEL) is specified in the CXL 2.0 specification. 
The CEL is one of two types of logs, the other being vendor specific. They are distinguished in hardware/spec via UUID. The CEL is useful for 2 things: 1. Determine which optional commands are supported by the CXL device. 2. Enumerate any vendor specific commands The CEL is used by the driver to determine which commands are available in the hardware and therefore which commands userspace is allowed to execute. The set of enabled commands might be a subset of commands which are advertised in UAPI via CXL_MEM_SEND_COMMAND IOCTL. With the CEL enabling comes a internal flag to indicate a base set of commands that are enabled regardless of CEL. Such commands are required for basic interaction with the hardware and thus can be useful in debug cases, for example if the CEL is corrupted. The implementation leaves the statically defined table of commands and supplements it with a bitmap to determine commands that are enabled. This organization was chosen for the following reasons: - Smaller memory footprint. Doesn't need a table per device. - Reduce memory allocation complexity. - Fixed command IDs to opcode mapping for all devices makes development and debugging easier. - Certain helpers are easily achievable, like cxl_for_each_cmd(). Signed-off-by: Ben Widawsky Reviewed-by: Dan Williams (v2) Reviewed-by: Jonathan Cameron (v3) Reviewed-by: Konrad Rzeszutek Wilk --- drivers/cxl/cxl.h | 2 + drivers/cxl/mem.c | 223 +++++++++++++++++++++++++++++++++-- include/uapi/linux/cxl_mem.h | 1 + 3 files changed, 219 insertions(+), 7 deletions(-) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 8fd4a177fe25..6f14838c2d25 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -69,6 +69,7 @@ struct cxl_memdev; * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) * @mbox_mutex: Mutex to synchronize mailbox access. * @firmware_version: Firmware version for the memory device. + * @enabled_commands: Hardware commands found enabled in CEL. * @pmem_range: Persistent memory capacity information. * @ram_range: Volatile memory capacity information. */ @@ -84,6 +85,7 @@ struct cxl_mem { size_t payload_size; struct mutex mbox_mutex; /* Protects device mailbox and firmware */ char firmware_version[0x10]; + unsigned long *enabled_cmds; struct range pmem_range; struct range ram_range; diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 5319412e245c..e31b3045e231 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -46,6 +46,8 @@ enum opcode { CXL_MBOX_OP_INVALID = 0x0000, CXL_MBOX_OP_RAW = CXL_MBOX_OP_INVALID, CXL_MBOX_OP_ACTIVATE_FW = 0x0202, + CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, + CXL_MBOX_OP_GET_LOG = 0x0401, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, CXL_MBOX_OP_SET_LSA = 0x4103, @@ -108,10 +110,28 @@ static DEFINE_IDA(cxl_memdev_ida); static struct dentry *cxl_debugfs; static bool cxl_raw_allow_all; +enum { + CEL_UUID, + VENDOR_DEBUG_UUID, +}; + +/* See CXL 2.0 Table 170. Get Log Input Payload */ +static const uuid_t log_uuid[] = { + [CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96, + 0xb1, 0x62, 0x3b, 0x3f, 0x17), + [VENDOR_DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f, + 0xd6, 0x07, 0x19, 0x40, 0x3d, 0x86), +}; + /** * struct cxl_mem_command - Driver representation of a memory device command * @info: Command information as it exists for the UAPI * @opcode: The actual bits used for the mailbox protocol + * @flags: Set of flags effecting driver behavior. 
+ * + * * %CXL_CMD_FLAG_FORCE_ENABLE: In cases of error, commands with this flag + * will be enabled by the driver regardless of what hardware may have + * advertised. * * The cxl_mem_command is the driver's internal representation of commands that * are supported by the driver. Some of these commands may not be supported by @@ -123,9 +143,12 @@ static bool cxl_raw_allow_all; struct cxl_mem_command { struct cxl_command_info info; enum opcode opcode; + u32 flags; +#define CXL_CMD_FLAG_NONE 0 +#define CXL_CMD_FLAG_FORCE_ENABLE BIT(0) }; -#define CXL_CMD(_id, sin, sout) \ +#define CXL_CMD(_id, sin, sout, _flags) \ [CXL_MEM_COMMAND_ID_##_id] = { \ .info = { \ .id = CXL_MEM_COMMAND_ID_##_id, \ @@ -133,6 +156,7 @@ struct cxl_mem_command { .size_out = sout, \ }, \ .opcode = CXL_MBOX_OP_##_id, \ + .flags = _flags, \ } /* @@ -142,10 +166,11 @@ struct cxl_mem_command { * 0, and the user passed in 1, it is an error. */ static struct cxl_mem_command mem_commands[] = { - CXL_CMD(IDENTIFY, 0, 0x43), + CXL_CMD(IDENTIFY, 0, 0x43, CXL_CMD_FLAG_FORCE_ENABLE), #ifdef CONFIG_CXL_MEM_RAW_COMMANDS - CXL_CMD(RAW, ~0, ~0), + CXL_CMD(RAW, ~0, ~0, 0), #endif + CXL_CMD(GET_SUPPORTED_LOGS, 0, ~0, CXL_CMD_FLAG_FORCE_ENABLE), }; /* @@ -633,6 +658,10 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm, c = &mem_commands[send_cmd->id]; info = &c->info; + /* Check that the command is enabled for hardware */ + if (!test_bit(info->id, cxlm->enabled_cmds)) + return -ENOTTY; + /* Check the input buffer is the expected size */ if (info->size_in >= 0 && info->size_in != send_cmd->in.size) return -ENOMEM; @@ -757,6 +786,17 @@ static const struct file_operations cxl_memdev_fops = { .llseek = noop_llseek, }; +static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode) +{ + struct cxl_mem_command *c; + + cxl_for_each_cmd(c) + if (c->opcode == opcode) + return c; + + return NULL; +} + /** * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device. * @cxlm: The CXL memory device to communicate with. @@ -777,9 +817,7 @@ static const struct file_operations cxl_memdev_fops = { * * Mailbox commands may execute successfully yet the device itself reported an * error. While this distinction can be useful for commands from userspace, the - * kernel will only be able to use results when both are successful. It's - * expected that all callers of this function know exactly the size of the data - * they will consume from the hardware. + * kernel will only be able to use results when both are successful. * * See __cxl_mem_mbox_send_cmd() */ @@ -787,6 +825,7 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, void *in, size_t in_size, void *out, size_t out_size) { + const struct cxl_mem_command *cmd = cxl_mem_find_command(opcode); struct mbox_cmd mbox_cmd = { .opcode = opcode, .payload_in = in, @@ -812,7 +851,11 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, u16 opcode, if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) return -ENXIO; - if (mbox_cmd.size_out != out_size) + /* + * Variable sized commands can't be validated and so it's up to the + * caller to do that if they wish. 
+ */ + if (cmd->info.size_out >= 0 && mbox_cmd.size_out != out_size) return -EIO; return 0; @@ -947,6 +990,14 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo, mutex_init(&cxlm->mbox_mutex); cxlm->pdev = pdev; cxlm->regs = regs + offset; + cxlm->enabled_cmds = + devm_kmalloc_array(dev, BITS_TO_LONGS(cxl_cmd_count), + sizeof(unsigned long), + GFP_KERNEL | __GFP_ZERO); + if (!cxlm->enabled_cmds) { + dev_err(dev, "No memory available for bitmap\n"); + return NULL; + } dev_dbg(dev, "Mapped CXL Memory Device resource\n"); return cxlm; @@ -1166,6 +1217,160 @@ static int cxl_mem_add_memdev(struct cxl_mem *cxlm) return rc; } +static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out) +{ + u32 remaining = size; + u32 offset = 0; + + while (remaining) { + u32 xfer_size = min_t(u32, remaining, cxlm->payload_size); + struct cxl_mbox_get_log { + uuid_t uuid; + __le32 offset; + __le32 length; + } __packed log = { + .uuid = *uuid, + .offset = cpu_to_le32(offset), + .length = cpu_to_le32(xfer_size) + }; + int rc; + + rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_LOG, &log, + sizeof(log), out, xfer_size); + if (rc < 0) + return rc; + + out += xfer_size; + remaining -= xfer_size; + offset += xfer_size; + } + + return 0; +} + +/** + * cxl_walk_cel() - Walk through the Command Effects Log. + * @cxlm: Device. + * @size: Length of the Command Effects Log. + * @cel: CEL + * + * Iterate over each entry in the CEL and determine if the driver supports the + * command. If so, the command is enabled for the device and can be used later. + */ +static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel) +{ + struct cel_entry { + __le16 opcode; + __le16 effect; + } __packed * cel_entry; + const int cel_entries = size / sizeof(*cel_entry); + int i; + + cel_entry = (struct cel_entry *)cel; + + for (i = 0; i < cel_entries; i++) { + u16 opcode = le16_to_cpu(cel_entry[i].opcode); + struct cxl_mem_command *cmd = cxl_mem_find_command(opcode); + + if (!cmd) { + dev_dbg(&cxlm->pdev->dev, + "Opcode 0x%04x unsupported by driver", opcode); + continue; + } + + set_bit(cmd->info.id, cxlm->enabled_cmds); + } +} + +struct cxl_mbox_get_supported_logs { + __le16 entries; + u8 rsvd[6]; + struct gsl_entry { + uuid_t uuid; + __le32 size; + } __packed entry[]; +} __packed; + +static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_mem *cxlm) +{ + struct cxl_mbox_get_supported_logs *ret; + int rc; + + ret = kvmalloc(cxlm->payload_size, GFP_KERNEL); + if (!ret) + return ERR_PTR(-ENOMEM); + + rc = cxl_mem_mbox_send_cmd(cxlm, CXL_MBOX_OP_GET_SUPPORTED_LOGS, NULL, + 0, ret, cxlm->payload_size); + if (rc < 0) { + kvfree(ret); + return ERR_PTR(rc); + } + + return ret; +} + +/** + * cxl_mem_enumerate_cmds() - Enumerate commands for a device. + * @cxlm: The device. + * + * Returns 0 if enumerate completed successfully. + * + * CXL devices have optional support for certain commands. This function will + * determine the set of supported commands for the hardware and update the + * enabled_cmds bitmap in the @cxlm. 
+ */ +static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm) +{ + struct cxl_mbox_get_supported_logs *gsl; + struct device *dev = &cxlm->pdev->dev; + struct cxl_mem_command *cmd; + int i, rc; + + gsl = cxl_get_gsl(cxlm); + if (IS_ERR(gsl)) + return PTR_ERR(gsl); + + rc = -ENOENT; + for (i = 0; i < le16_to_cpu(gsl->entries); i++) { + u32 size = le32_to_cpu(gsl->entry[i].size); + uuid_t uuid = gsl->entry[i].uuid; + u8 *log; + + dev_dbg(dev, "Found LOG type %pU of size %d", &uuid, size); + + if (!uuid_equal(&uuid, &log_uuid[CEL_UUID])) + continue; + + log = kvmalloc(size, GFP_KERNEL); + if (!log) { + rc = -ENOMEM; + goto out; + } + + rc = cxl_xfer_log(cxlm, &uuid, size, log); + if (rc) { + kvfree(log); + goto out; + } + + cxl_walk_cel(cxlm, size, log); + kvfree(log); + + /* In case CEL was bogus, enable some default commands. */ + cxl_for_each_cmd(cmd) + if (cmd->flags & CXL_CMD_FLAG_FORCE_ENABLE) + set_bit(cmd->info.id, cxlm->enabled_cmds); + + /* Found the required CEL */ + rc = 0; + } + +out: + kvfree(gsl); + return rc; +} + /** * cxl_mem_identify() - Send the IDENTIFY command to the device. * @cxlm: The device to identify. @@ -1266,6 +1471,10 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + rc = cxl_mem_enumerate_cmds(cxlm); + if (rc) + return rc; + rc = cxl_mem_identify(cxlm); if (rc) return rc; diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index c316028730e7..59227f82a4c1 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -23,6 +23,7 @@ ___C(INVALID, "Invalid Command"), \ ___C(IDENTIFY, "Identify Command"), \ ___C(RAW, "Raw device command"), \ + ___C(GET_SUPPORTED_LOGS, "Get Supported Logs"), \ ___C(MAX, "invalid / last command") #define ___C(a, b) CXL_MEM_COMMAND_ID_##a From patchwork Wed Feb 17 04:09:56 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188253 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 47824C433E0 for ; Wed, 17 Feb 2021 04:11:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2486261490 for ; Wed, 17 Feb 2021 04:11:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231334AbhBQELh (ORCPT ); Tue, 16 Feb 2021 23:11:37 -0500 Received: from mga07.intel.com ([134.134.136.100]:21744 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231273AbhBQELV (ORCPT ); Tue, 16 Feb 2021 23:11:21 -0500 IronPort-SDR: 72a0bemGjxUamc1Clhul64lt0osKYFBixpjSyzIaiXYx6cTMRKH+5+WrMGlp7k0Qw2d1oaeZUm uNLA+CjxYq3g== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165937" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165937" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:14 -0800 IronPort-SDR: nK2XZijp2iIhyQNYe85cCLUE9QbW6iMg4l7TMMDJAPd8dQNtFEm5amnnW+OI11gYjAWO8dbpfB bOldk/zgbxVQ== 
X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948871" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:13 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" , Jonathan Cameron Subject: [PATCH v5 7/9] cxl/mem: Add set of informational commands Date: Tue, 16 Feb 2021 20:09:56 -0800 Message-Id: <20210217040958.1354670-8-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Add initial set of formal commands beyond basic identify and command enumeration. Signed-off-by: Ben Widawsky Reviewed-by: Dan Williams Reviewed-by: Jonathan Cameron (v2) Reviewed-by: Konrad Rzeszutek Wilk --- drivers/cxl/mem.c | 9 +++++++++ include/uapi/linux/cxl_mem.h | 5 +++++ 2 files changed, 14 insertions(+) diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index e31b3045e231..6d7d3870b5da 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -45,12 +45,16 @@ enum opcode { CXL_MBOX_OP_INVALID = 0x0000, CXL_MBOX_OP_RAW = CXL_MBOX_OP_INVALID, + CXL_MBOX_OP_GET_FW_INFO = 0x0200, CXL_MBOX_OP_ACTIVATE_FW = 0x0202, CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, CXL_MBOX_OP_GET_LOG = 0x0401, CXL_MBOX_OP_IDENTIFY = 0x4000, + CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, + CXL_MBOX_OP_GET_LSA = 0x4102, CXL_MBOX_OP_SET_LSA = 0x4103, + CXL_MBOX_OP_GET_HEALTH_INFO = 0x4200, CXL_MBOX_OP_SET_SHUTDOWN_STATE = 0x4204, CXL_MBOX_OP_SCAN_MEDIA = 0x4304, CXL_MBOX_OP_GET_SCAN_MEDIA = 0x4305, @@ -171,6 +175,11 @@ static struct cxl_mem_command mem_commands[] = { CXL_CMD(RAW, ~0, ~0, 0), #endif CXL_CMD(GET_SUPPORTED_LOGS, 0, ~0, CXL_CMD_FLAG_FORCE_ENABLE), + CXL_CMD(GET_FW_INFO, 0, 0x50, 0), + CXL_CMD(GET_PARTITION_INFO, 0, 0x20, 0), + CXL_CMD(GET_LSA, 0x8, ~0, 0), + CXL_CMD(GET_HEALTH_INFO, 0, 0x12, 0), + CXL_CMD(GET_LOG, 0x18, ~0, CXL_CMD_FLAG_FORCE_ENABLE), }; /* diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 59227f82a4c1..3155382dfc9b 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -24,6 +24,11 @@ ___C(IDENTIFY, "Identify Command"), \ ___C(RAW, "Raw device command"), \ ___C(GET_SUPPORTED_LOGS, "Get Supported Logs"), \ + ___C(GET_FW_INFO, "Get FW Info"), \ + ___C(GET_PARTITION_INFO, "Get Partition Information"), \ + ___C(GET_LSA, "Get Label Storage Area"), \ + ___C(GET_HEALTH_INFO, "Get Health Info"), \ + ___C(GET_LOG, "Get Log"), \ ___C(MAX, "invalid / last command") #define ___C(a, b) CXL_MEM_COMMAND_ID_##a From patchwork Wed Feb 17 04:09:57 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188251 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, 
HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D409C43331 for ; Wed, 17 Feb 2021 04:11:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0A60664E28 for ; Wed, 17 Feb 2021 04:11:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231300AbhBQELe (ORCPT ); Tue, 16 Feb 2021 23:11:34 -0500 Received: from mga07.intel.com ([134.134.136.100]:21746 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231157AbhBQEL3 (ORCPT ); Tue, 16 Feb 2021 23:11:29 -0500 IronPort-SDR: 8vD2W3v8KuiiFcngb5Ych6s06MntTV8mtLNmF/Jg+Japw5nPK0M29SIabKA3li63AqgIDvoxRc nT9lDCFXddhA== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165940" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165940" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:16 -0800 IronPort-SDR: 6JsFT80PbfG6FjHuutc5dMIKiq8RsDcU6DLB+WbIwQP2J80jgDZQMRoreqAJyhF6OqpGrRmyAd BxlleyC2sUwg== X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948876" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:14 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" , Alison Schofield Subject: [PATCH v5 8/9] MAINTAINERS: Add maintainers of the CXL driver Date: Tue, 16 Feb 2021 20:09:57 -0800 Message-Id: <20210217040958.1354670-9-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Cc: Dan Williams Cc: Vishal Verma Cc: Ira Weiny Cc: Alison Schofield Signed-off-by: Ben Widawsky --- MAINTAINERS | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 6eff4f720c72..93c8694a8f04 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4444,6 +4444,17 @@ M: Miguel Ojeda S: Maintained F: include/linux/compiler_attributes.h +COMPUTE EXPRESS LINK (CXL) +M: Alison Schofield +M: Vishal Verma +M: Ira Weiny +M: Ben Widawsky +M: Dan Williams +L: linux-cxl@vger.kernel.org +S: Maintained +F: drivers/cxl/ +F: include/uapi/linux/cxl_mem.h + CONEXANT ACCESSRUNNER USB DRIVER L: accessrunner-general@lists.sourceforge.net S: Orphan From patchwork Wed Feb 17 04:09:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188255 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, 
HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id BED28C433E6 for ; Wed, 17 Feb 2021 04:12:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7EABF64DF3 for ; Wed, 17 Feb 2021 04:12:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231338AbhBQEMN (ORCPT ); Tue, 16 Feb 2021 23:12:13 -0500 Received: from mga07.intel.com ([134.134.136.100]:21743 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231346AbhBQELr (ORCPT ); Tue, 16 Feb 2021 23:11:47 -0500 IronPort-SDR: GspS+H6jCKCprRbqT4PcDuYa9z98YqP6+cBY6mEKF7Pmw2Yq1/p7wmnnzzWWasIvoeXtBe/7AT a0B8B+Lsli0w== X-IronPort-AV: E=McAfee;i="6000,8403,9897"; a="247165948" X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="247165948" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:18 -0800 IronPort-SDR: Mug5PkrodM8Rl1xWiYxq98sDIf0X3SSeMjlZa9yQ7TA5gumGj0U/nmFtBL6W2PpAlt1kXP8boS FlOqJ5QddABA== X-IronPort-AV: E=Sophos;i="5.81,184,1610438400"; d="scan'208";a="384948885" Received: from yxie-mobl.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.134.141]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 16 Feb 2021 20:10:16 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , David Hildenbrand , David Rientjes , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , "John Groves (jgroves)" , "Kelley, Sean V" Subject: [RFC PATCH v5 9/9] cxl/mem: Add payload dumping for debug Date: Tue, 16 Feb 2021 20:09:58 -0800 Message-Id: <20210217040958.1354670-10-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210217040958.1354670-1-ben.widawsky@intel.com> References: <20210217040958.1354670-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org It's often useful in debug scenarios to see what the hardware has dumped out. As it stands today, any device error will result in the payload not being copied out, so there is no way to triage commands which weren't expected to fail (and sometimes the payload may have that information). The functionality is protected by normal kernel security mechanisms as well as a CONFIG option in the CXL driver. This was extracted from the original version of the CXL enabling patch series. Signed-off-by: Ben Widawsky --- drivers/cxl/Kconfig | 13 +++++++++++++ drivers/cxl/mem.c | 8 ++++++++ 2 files changed, 21 insertions(+) diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 97dc4d751651..3eec9276e586 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -50,4 +50,17 @@ config CXL_MEM_RAW_COMMANDS potential impact to memory currently in use by the kernel. If developing CXL hardware or the driver say Y, otherwise say N. + +config CXL_MEM_INSECURE_DEBUG + bool "CXL.mem debugging" + depends on CXL_MEM + help + Enable debug of all CXL command payloads. 
+ + Some CXL devices and controllers support encryption and other + security features. The payloads for the commands that enable + those features may contain sensitive clear-text security + material. Disable debug of those command payloads by default. + If you are a kernel developer actively working on CXL + security enabling say Y, otherwise say N. endif diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 6d7d3870b5da..d63c8eaf23c7 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -346,6 +346,14 @@ static int __cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, /* #5 */ rc = cxl_mem_wait_for_doorbell(cxlm); + + if (!cxl_is_security_command(mbox_cmd->opcode) || + IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) { + print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1, + mbox_cmd->payload_in, mbox_cmd->size_in, + true); + } + if (rc == -ETIMEDOUT) { cxl_mem_mbox_timeout(cxlm, mbox_cmd); return rc;