From patchwork Sat Jan 30 00:24:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188077 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 078E3C433E0 for ; Sat, 30 Jan 2021 00:26:07 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A7F9B64E02 for ; Sat, 30 Jan 2021 00:26:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231316AbhA3A0C (ORCPT ); Fri, 29 Jan 2021 19:26:02 -0500 Received: from mga01.intel.com ([192.55.52.88]:38334 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231138AbhA3AZa (ORCPT ); Fri, 29 Jan 2021 19:25:30 -0500 IronPort-SDR: fTWEneBAuMBp2DwOiY5I6Y6OO8BD5g6ZPHumQdQGdaea7DOp/MZWAzCebvyWqNAErLCV1h1py6 pam1MuykNTxw== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350681" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350681" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:45 -0800 IronPort-SDR: CqRdBbsp55b6+sSNXhG3wGGUB2/578PfuTqvbSayjqJApMWWI+5ryRenXObkNHyZXkWhJhHCX2 PyYqjWy56nvg== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591645" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:45 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Dan Williams , Jonathan Corbet , Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 01/14] cxl/mem: Introduce a driver for CXL-2.0-Type-3 endpoints Date: Fri, 29 Jan 2021 16:24:25 -0800 Message-Id: <20210130002438.1872527-2-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Dan Williams The CXL.mem protocol allows a device to act as a provider of "System RAM" and/or "Persistent Memory" that is fully coherent as if the memory was attached to the typical CPU memory controller. With the CXL-2.0 specification a PCI endpoint can implement a "Type-3" device interface and give the operating system control over "Host Managed Device Memory". See section 2.3 Type 3 CXL Device. The memory range exported by the device may optionally be described by the platform firmware memory map, or by infrastructure like LIBNVDIMM to provision persistent memory capacity from one, or more, CXL.mem devices. 
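For orientation, such a Type-3 endpoint is identified purely by its PCI class code (050210h, "CXL memory device") rather than by vendor/device ID. The sketch below is an illustrative, equivalent way to express the class-code match table this patch adds; the CXL_MEM_EXAMPLE_CLASS name is hypothetical and simply mirrors the PCI_CLASS_MEMORY_CXL value introduced below in drivers/cxl/pci.h, with PCI_DEVICE_CLASS() standing in for the positional PCI_ANY_ID initializer used in the patch:

    #include <linux/module.h>
    #include <linux/pci.h>

    /* Example-only name; same value as PCI_CLASS_MEMORY_CXL added by this patch */
    #define CXL_MEM_EXAMPLE_CLASS  0x050210

    /* Match any vendor/device advertising the CXL memory-device class (all 24 class bits) */
    static const struct pci_device_id cxl_mem_example_ids[] = {
            { PCI_DEVICE_CLASS(CXL_MEM_EXAMPLE_CLASS, 0xffffff) },
            { /* terminate list */ },
    };
    MODULE_DEVICE_TABLE(pci, cxl_mem_example_ids);

Binding by class code rather than device ID is what lets a single driver attach to any spec-compliant Type-3 device, which is the design the rest of this series builds on.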
A pre-requisite for Linux-managed memory-capacity provisioning is this cxl_mem driver that can speak the mailbox protocol defined in section 8.2.8.4 Mailbox Registers. For now just land the initial driver boiler-plate and Documentation/ infrastructure. Link: https://www.computeexpresslink.org/download-the-specification Cc: Jonathan Corbet Signed-off-by: Dan Williams Signed-off-by: Ben Widawsky Acked-by: David Rientjes --- Documentation/driver-api/cxl/index.rst | 12 ++++ .../driver-api/cxl/memory-devices.rst | 29 +++++++++ Documentation/driver-api/index.rst | 1 + drivers/Kconfig | 1 + drivers/Makefile | 1 + drivers/cxl/Kconfig | 35 +++++++++++ drivers/cxl/Makefile | 4 ++ drivers/cxl/mem.c | 61 +++++++++++++++++++ drivers/cxl/pci.h | 20 ++++++ 9 files changed, 164 insertions(+) create mode 100644 Documentation/driver-api/cxl/index.rst create mode 100644 Documentation/driver-api/cxl/memory-devices.rst create mode 100644 drivers/cxl/Kconfig create mode 100644 drivers/cxl/Makefile create mode 100644 drivers/cxl/mem.c create mode 100644 drivers/cxl/pci.h diff --git a/Documentation/driver-api/cxl/index.rst b/Documentation/driver-api/cxl/index.rst new file mode 100644 index 000000000000..036e49553542 --- /dev/null +++ b/Documentation/driver-api/cxl/index.rst @@ -0,0 +1,12 @@ +.. SPDX-License-Identifier: GPL-2.0 + +==================== +Compute Express Link +==================== + +.. toctree:: + :maxdepth: 1 + + memory-devices + +.. only:: subproject and html diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst new file mode 100644 index 000000000000..43177e700d62 --- /dev/null +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -0,0 +1,29 @@ +.. SPDX-License-Identifier: GPL-2.0 +.. include:: + +=================================== +Compute Express Link Memory Devices +=================================== + +A Compute Express Link Memory Device is a CXL component that implements the +CXL.mem protocol. It contains some amount of volatile memory, persistent memory, +or both. It is enumerated as a PCI device for configuration and passing +messages over an MMIO mailbox. Its contribution to the System Physical +Address space is handled via HDM (Host Managed Device Memory) decoders +that optionally define a device's contribution to an interleaved address +range across multiple devices underneath a host-bridge or interleaved +across host-bridges. + +Driver Infrastructure +===================== + +This section covers the driver infrastructure for a CXL memory device. + +CXL Memory Device +----------------- + +.. kernel-doc:: drivers/cxl/mem.c + :doc: cxl mem + +.. kernel-doc:: drivers/cxl/mem.c + :internal: diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst index 2456d0a97ed8..d246a18fd78f 100644 --- a/Documentation/driver-api/index.rst +++ b/Documentation/driver-api/index.rst @@ -35,6 +35,7 @@ available subsections can be seen below. 
usb/index firewire pci/index + cxl/index spi i2c ipmb diff --git a/drivers/Kconfig b/drivers/Kconfig index dcecc9f6e33f..62c753a73651 100644 --- a/drivers/Kconfig +++ b/drivers/Kconfig @@ -6,6 +6,7 @@ menu "Device Drivers" source "drivers/amba/Kconfig" source "drivers/eisa/Kconfig" source "drivers/pci/Kconfig" +source "drivers/cxl/Kconfig" source "drivers/pcmcia/Kconfig" source "drivers/rapidio/Kconfig" diff --git a/drivers/Makefile b/drivers/Makefile index fd11b9ac4cc3..678ea810410f 100644 --- a/drivers/Makefile +++ b/drivers/Makefile @@ -73,6 +73,7 @@ obj-$(CONFIG_NVM) += lightnvm/ obj-y += base/ block/ misc/ mfd/ nfc/ obj-$(CONFIG_LIBNVDIMM) += nvdimm/ obj-$(CONFIG_DAX) += dax/ +obj-$(CONFIG_CXL_BUS) += cxl/ obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/ obj-$(CONFIG_NUBUS) += nubus/ obj-y += macintosh/ diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig new file mode 100644 index 000000000000..3b66b46af8a0 --- /dev/null +++ b/drivers/cxl/Kconfig @@ -0,0 +1,35 @@ +# SPDX-License-Identifier: GPL-2.0-only +menuconfig CXL_BUS + tristate "CXL (Compute Express Link) Devices Support" + depends on PCI + help + CXL is a bus that is electrically compatible with PCI Express, but + layers three protocols on that signalling (CXL.io, CXL.cache, and + CXL.mem). The CXL.cache protocol allows devices to hold cachelines + locally, the CXL.mem protocol allows devices to be fully coherent + memory targets, the CXL.io protocol is equivalent to PCI Express. + Say 'y' to enable support for the configuration and management of + devices supporting these protocols. + +if CXL_BUS + +config CXL_MEM + tristate "CXL.mem: Endpoint Support" + help + The CXL.mem protocol allows a device to act as a provider of + "System RAM" and/or "Persistent Memory" that is fully coherent + as if the memory was attached to the typical CPU memory + controller. + + Say 'y/m' to enable a driver (named "cxl_mem.ko" when built as + a module) that will attach to CXL.mem devices for + configuration, provisioning, and health monitoring. This + driver is required for dynamic provisioning of CXL.mem + attached memory which is a prerequisite for persistent memory + support. Typically volatile memory is mapped by platform + firmware and included in the platform memory map, but in some + cases the OS is responsible for mapping that memory. See + Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification. + + If unsure say 'm'. +endif diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile new file mode 100644 index 000000000000..4a30f7c3fc4a --- /dev/null +++ b/drivers/cxl/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_CXL_MEM) += cxl_mem.o + +cxl_mem-y := mem.o diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c new file mode 100644 index 000000000000..f4ee9a507ac9 --- /dev/null +++ b/drivers/cxl/mem.c @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2020 Intel Corporation. All rights reserved. 
*/ +#include +#include +#include +#include "pci.h" + +static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) +{ + int pos; + + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DVSEC); + if (!pos) + return 0; + + while (pos) { + u16 vendor, id; + + pci_read_config_word(pdev, pos + PCI_DVSEC_VENDOR_ID_OFFSET, + &vendor); + pci_read_config_word(pdev, pos + PCI_DVSEC_ID_OFFSET, &id); + if (vendor == PCI_DVSEC_VENDOR_ID_CXL && dvsec == id) + return pos; + + pos = pci_find_next_ext_capability(pdev, pos, + PCI_EXT_CAP_ID_DVSEC); + } + + return 0; +} + +static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) +{ + struct device *dev = &pdev->dev; + int regloc; + + regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC); + if (!regloc) { + dev_err(dev, "register location dvsec not found\n"); + return -ENXIO; + } + + return 0; +} + +static const struct pci_device_id cxl_mem_pci_tbl[] = { + /* PCI class code for CXL.mem Type-3 Devices */ + { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, + PCI_CLASS_MEMORY_CXL, 0xffffff, 0 }, + { /* terminate list */ }, +}; +MODULE_DEVICE_TABLE(pci, cxl_mem_pci_tbl); + +static struct pci_driver cxl_mem_driver = { + .name = KBUILD_MODNAME, + .id_table = cxl_mem_pci_tbl, + .probe = cxl_mem_probe, +}; + +MODULE_LICENSE("GPL v2"); +module_pci_driver(cxl_mem_driver); diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h new file mode 100644 index 000000000000..a8a9935fa90b --- /dev/null +++ b/drivers/cxl/pci.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */ +#ifndef __CXL_PCI_H__ +#define __CXL_PCI_H__ + +#define PCI_CLASS_MEMORY_CXL 0x050210 + +/* + * See section 8.1 Configuration Space Registers in the CXL 2.0 + * Specification + */ +#define PCI_EXT_CAP_ID_DVSEC 0x23 +#define PCI_DVSEC_VENDOR_ID_CXL 0x1E98 +#define PCI_DVSEC_VENDOR_ID_OFFSET 0x4 +#define PCI_DVSEC_ID_CXL 0x0 +#define PCI_DVSEC_ID_OFFSET 0x8 + +#define PCI_DVSEC_ID_CXL_REGLOC 0x8 + +#endif /* __CXL_PCI_H__ */ From patchwork Sat Jan 30 00:24:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188075 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B7463C433DB for ; Sat, 30 Jan 2021 00:25:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4BC1064E08 for ; Sat, 30 Jan 2021 00:25:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231184AbhA3AZn (ORCPT ); Fri, 29 Jan 2021 19:25:43 -0500 Received: from mga01.intel.com ([192.55.52.88]:38338 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231183AbhA3AZ3 (ORCPT ); Fri, 29 Jan 2021 19:25:29 -0500 IronPort-SDR: afE9TQsmGbBMNKCN/9AVNSqZsJgadBKkLWyHvuXwkI3y6DriyPvYzs6Z1WE23oFhKjaf8ex2M4 IvzCMCuSjV3g== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350682" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350682" Received: from orsmga002.jf.intel.com ([10.7.209.21]) 
by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:46 -0800 IronPort-SDR: T5bT+znXC5VaRxWXLhS5ug7h/Z36z95lcPBf2nsIxYawb5Kws660bUtct6fJkWMxcSVmOPZB8D h/Td/Sb/azwg== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591648" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:45 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 02/14] cxl/mem: Map memory device registers Date: Fri, 29 Jan 2021 16:24:26 -0800 Message-Id: <20210130002438.1872527-3-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org All the necessary bits are initialized in order to find and map the register space for CXL Memory Devices. This is accomplished by using the Register Locator DVSEC (CXL 2.0 - 8.1.9.1) to determine which PCI BAR to use, and how much of an offset from that BAR should be added. If the memory device registers are found and mapped a new internal data structure tracking device state is allocated. Signed-off-by: Ben Widawsky Acked-by: David Rientjes --- drivers/cxl/cxl.h | 17 ++++++++++ drivers/cxl/mem.c | 83 +++++++++++++++++++++++++++++++++++++++++++++-- drivers/cxl/pci.h | 14 ++++++++ 3 files changed, 112 insertions(+), 2 deletions(-) create mode 100644 drivers/cxl/cxl.h diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h new file mode 100644 index 000000000000..d81d0ba4617c --- /dev/null +++ b/drivers/cxl/cxl.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2020 Intel Corporation. */ + +#ifndef __CXL_H__ +#define __CXL_H__ + +/** + * struct cxl_mem - A CXL memory device + * @pdev: The PCI device associated with this CXL device. + * @regs: IO mappings to the device's MMIO + */ +struct cxl_mem { + struct pci_dev *pdev; + void __iomem *regs; +}; + +#endif diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index f4ee9a507ac9..a869c8dc24cc 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -4,6 +4,58 @@ #include #include #include "pci.h" +#include "cxl.h" + +/** + * cxl_mem_create() - Create a new &struct cxl_mem. + * @pdev: The pci device associated with the new &struct cxl_mem. + * @reg_lo: Lower 32b of the register locator + * @reg_hi: Upper 32b of the register locator. + * + * Return: The new &struct cxl_mem on success, NULL on failure. + * + * Map the BAR for a CXL memory device. This BAR has the memory device's + * registers for the device as specified in CXL specification. 
+ */ +static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo, + u32 reg_hi) +{ + struct device *dev = &pdev->dev; + struct cxl_mem *cxlm; + void __iomem *regs; + u64 offset; + u8 bar; + int rc; + + offset = ((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK); + bar = (reg_lo >> CXL_REGLOC_BIR_SHIFT) & CXL_REGLOC_BIR_MASK; + + /* Basic sanity check that BAR is big enough */ + if (pci_resource_len(pdev, bar) < offset) { + dev_err(dev, "BAR%d: %pr: too small (offset: %#llx)\n", bar, + &pdev->resource[bar], (unsigned long long)offset); + return NULL; + } + + rc = pcim_iomap_regions(pdev, BIT(bar), pci_name(pdev)); + if (rc != 0) { + dev_err(dev, "failed to map registers\n"); + return NULL; + } + + cxlm = devm_kzalloc(&pdev->dev, sizeof(*cxlm), GFP_KERNEL); + if (!cxlm) { + dev_err(dev, "No memory available\n"); + return NULL; + } + + regs = pcim_iomap_table(pdev)[bar]; + cxlm->pdev = pdev; + cxlm->regs = regs + offset; + + dev_dbg(dev, "Mapped CXL Memory Device resource\n"); + return cxlm; +} static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) { @@ -32,15 +84,42 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct device *dev = &pdev->dev; - int regloc; + struct cxl_mem *cxlm; + int rc, regloc, i; + + rc = pcim_enable_device(pdev); + if (rc) + return rc; regloc = cxl_mem_dvsec(pdev, PCI_DVSEC_ID_CXL_REGLOC); if (!regloc) { dev_err(dev, "register location dvsec not found\n"); return -ENXIO; } + regloc += 0xc; /* Skip DVSEC + reserved fields */ - return 0; + rc = -ENXIO; + for (i = regloc; i < regloc + 0x24; i += 8) { + u32 reg_lo, reg_hi; + u8 reg_type; + + /* "register low and high" contain other bits */ + pci_read_config_dword(pdev, i, ®_lo); + pci_read_config_dword(pdev, i + 4, ®_hi); + + reg_type = + (reg_lo >> CXL_REGLOC_RBI_SHIFT) & CXL_REGLOC_RBI_MASK; + + if (reg_type == CXL_REGLOC_RBI_MEMDEV) { + rc = 0; + cxlm = cxl_mem_create(pdev, reg_lo, reg_hi); + if (!cxlm) + rc = -ENODEV; + break; + } + } + + return rc; } static const struct pci_device_id cxl_mem_pci_tbl[] = { diff --git a/drivers/cxl/pci.h b/drivers/cxl/pci.h index a8a9935fa90b..df222edb6ac3 100644 --- a/drivers/cxl/pci.h +++ b/drivers/cxl/pci.h @@ -17,4 +17,18 @@ #define PCI_DVSEC_ID_CXL_REGLOC 0x8 +/* BAR Indicator Register (BIR) */ +#define CXL_REGLOC_BIR_SHIFT 0 +#define CXL_REGLOC_BIR_MASK 0x7 + +/* Register Block Identifier (RBI) */ +#define CXL_REGLOC_RBI_SHIFT 8 +#define CXL_REGLOC_RBI_MASK 0xff +#define CXL_REGLOC_RBI_EMPTY 0 +#define CXL_REGLOC_RBI_COMPONENT 1 +#define CXL_REGLOC_RBI_VIRT 2 +#define CXL_REGLOC_RBI_MEMDEV 3 + +#define CXL_REGLOC_ADDR_MASK 0xffff0000 + #endif /* __CXL_PCI_H__ */ From patchwork Sat Jan 30 00:24:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188103 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B34E3C433DB for ; Sat, 30 Jan 2021 10:24:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
mail.kernel.org (Postfix) with ESMTP id 5510F64E08 for ; Sat, 30 Jan 2021 10:24:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232023AbhA3KXr (ORCPT ); Sat, 30 Jan 2021 05:23:47 -0500 Received: from mga01.intel.com ([192.55.52.88]:38338 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231863AbhA3AZz (ORCPT ); Fri, 29 Jan 2021 19:25:55 -0500 IronPort-SDR: x215d5KLD0t1r1pgXyxJndlETrsaT+I4ut5dVCQw1FhzN0EUnPHyFSPQDzQv3EX9xhIKs8sMwX 1W/4iz+Ey+QQ== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350683" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350683" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:46 -0800 IronPort-SDR: rwtIFrLbgdRfbKZdTrY4Ko2/Lhi+tupqhi7a6rJ5rf6+mSF3KNTStj/fB1X+1QPKASZzNMzYSw mlcWpz0sXtMQ== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591655" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:46 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 03/14] cxl/mem: Find device capabilities Date: Fri, 29 Jan 2021 16:24:27 -0800 Message-Id: <20210130002438.1872527-4-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org CXL devices contain an array of capabilities that describe the interactions software can have with the device or firmware running on the device. A CXL compliant device must implement the device status and the mailbox capability. A CXL compliant memory device must implement the memory device capability. Each of the capabilities can [will] provide an offset within the MMIO region for interacting with the CXL device. For more details see 8.2.8 of the CXL 2.0 specification (see Link). 
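For illustration only, the shape of that capability enumeration is roughly the sketch below. It is a simplified version of the cxl_mem_setup_regs() added by this patch; the CXLDEV_* constants mirror the defines added to drivers/cxl/cxl.h, and real code also records the mapped register block for each capability and fails if a mandatory one is absent:

    #include <linux/bitfield.h>
    #include <linux/device.h>
    #include <linux/io.h>

    /* Mirrors the defines this patch adds to drivers/cxl/cxl.h */
    #define CXLDEV_CAP_ARRAY_OFFSET         0x0
    #define CXLDEV_CAP_ARRAY_CAP_ID         0
    #define CXLDEV_CAP_ARRAY_ID_MASK        GENMASK(15, 0)
    #define CXLDEV_CAP_ARRAY_COUNT_MASK     GENMASK(47, 32)

    static int example_walk_device_caps(struct device *dev, void __iomem *regs)
    {
            u64 hdr = readq(regs + CXLDEV_CAP_ARRAY_OFFSET);
            int cap, count;

            /* The capabilities array header identifies itself with cap ID 0 */
            if (FIELD_GET(CXLDEV_CAP_ARRAY_ID_MASK, hdr) != CXLDEV_CAP_ARRAY_CAP_ID)
                    return -ENODEV;

            count = FIELD_GET(CXLDEV_CAP_ARRAY_COUNT_MASK, hdr);
            for (cap = 1; cap <= count; cap++) {
                    /* Each 16-byte record: a 16-bit cap ID, then a 32-bit register offset */
                    u16 cap_id = readl(regs + cap * 0x10) & 0xffff;
                    u32 offset = readl(regs + cap * 0x10 + 0x4);

                    dev_dbg(dev, "capability %#x at register offset %#x\n",
                            cap_id, offset);
            }

            return 0;
    }

The loop only discovers where each capability's registers live; the patch then stashes those per-capability offsets so later code (status, mailbox, memory device) can use simple register accessors.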
Link: https://www.computeexpresslink.org/download-the-specification Signed-off-by: Ben Widawsky --- drivers/cxl/cxl.h | 78 ++++++++++++++++++++++++++++++++++- drivers/cxl/mem.c | 102 +++++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 178 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index d81d0ba4617c..a3da7f8050c4 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -4,6 +4,37 @@ #ifndef __CXL_H__ #define __CXL_H__ +#include +#include +#include + +#define CXL_SET_FIELD(value, field) \ + ({ \ + WARN_ON(!FIELD_FIT(field##_MASK, value)); \ + FIELD_PREP(field##_MASK, value); \ + }) + +#define CXL_GET_FIELD(word, field) FIELD_GET(field##_MASK, word) + +/* Device Capabilities (CXL 2.0 - 8.2.8.1) */ +#define CXLDEV_CAP_ARRAY_OFFSET 0x0 +#define CXLDEV_CAP_ARRAY_CAP_ID 0 +#define CXLDEV_CAP_ARRAY_ID_MASK GENMASK(15, 0) +#define CXLDEV_CAP_ARRAY_COUNT_MASK GENMASK(47, 32) +/* (CXL 2.0 - 8.2.8.2.1) */ +#define CXLDEV_CAP_CAP_ID_DEVICE_STATUS 0x1 +#define CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX 0x2 +#define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3 +#define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000 + +/* CXL Device Mailbox (CXL 2.0 - 8.2.8.4) */ +#define CXLDEV_MB_CAPS_OFFSET 0x00 +#define CXLDEV_MB_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0) +#define CXLDEV_MB_CTRL_OFFSET 0x04 +#define CXLDEV_MB_CMD_OFFSET 0x08 +#define CXLDEV_MB_STATUS_OFFSET 0x10 +#define CXLDEV_MB_BG_CMD_STATUS_OFFSET 0x18 + /** * struct cxl_mem - A CXL memory device * @pdev: The PCI device associated with this CXL device. @@ -12,6 +43,51 @@ struct cxl_mem { struct pci_dev *pdev; void __iomem *regs; + + /* Cap 0001h - CXL_CAP_CAP_ID_DEVICE_STATUS */ + struct { + void __iomem *regs; + } status; + + /* Cap 0002h - CXL_CAP_CAP_ID_PRIMARY_MAILBOX */ + struct { + void __iomem *regs; + size_t payload_size; + } mbox; + + /* Cap 4000h - CXL_CAP_CAP_ID_MEMDEV */ + struct { + void __iomem *regs; + } mem; }; -#endif +#define cxl_reg(type) \ + static inline void cxl_write_##type##_reg32(struct cxl_mem *cxlm, \ + u32 reg, u32 value) \ + { \ + void __iomem *reg_addr = cxlm->type.regs; \ + writel(value, reg_addr + reg); \ + } \ + static inline void cxl_write_##type##_reg64(struct cxl_mem *cxlm, \ + u32 reg, u64 value) \ + { \ + void __iomem *reg_addr = cxlm->type.regs; \ + writeq(value, reg_addr + reg); \ + } \ + static inline u32 cxl_read_##type##_reg32(struct cxl_mem *cxlm, \ + u32 reg) \ + { \ + void __iomem *reg_addr = cxlm->type.regs; \ + return readl(reg_addr + reg); \ + } \ + static inline u64 cxl_read_##type##_reg64(struct cxl_mem *cxlm, \ + u32 reg) \ + { \ + void __iomem *reg_addr = cxlm->type.regs; \ + return readq(reg_addr + reg); \ + } + +cxl_reg(status); +cxl_reg(mbox); + +#endif /* __CXL_H__ */ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index a869c8dc24cc..fa14d51243ee 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -6,6 +6,99 @@ #include "pci.h" #include "cxl.h" +/** + * cxl_mem_setup_regs() - Setup necessary MMIO. + * @cxlm: The CXL memory device to communicate with. + * + * Return: 0 if all necessary registers mapped. + * + * A memory device is required by spec to implement a certain set of MMIO + * regions. The purpose of this function is to enumerate and map those + * registers. 
+ * + * XXX: Register accessors need the mappings set up by this function, so + * any reads or writes must be read(b|w|l|q) or write(b|w|l|q) + */ +static int cxl_mem_setup_regs(struct cxl_mem *cxlm) +{ + struct device *dev = &cxlm->pdev->dev; + int cap, cap_count; + u64 cap_array; + + cap_array = readq(cxlm->regs + CXLDEV_CAP_ARRAY_OFFSET); + if (CXL_GET_FIELD(cap_array, CXLDEV_CAP_ARRAY_ID) != CXLDEV_CAP_ARRAY_CAP_ID) + return -ENODEV; + + cap_count = CXL_GET_FIELD(cap_array, CXLDEV_CAP_ARRAY_COUNT); + + for (cap = 1; cap <= cap_count; cap++) { + void __iomem *register_block; + u32 offset; + u16 cap_id; + + cap_id = readl(cxlm->regs + cap * 0x10) & 0xffff; + offset = readl(cxlm->regs + cap * 0x10 + 0x4); + register_block = cxlm->regs + offset; + + switch (cap_id) { + case CXLDEV_CAP_CAP_ID_DEVICE_STATUS: + dev_dbg(dev, "found Status capability (0x%x)\n", + offset); + cxlm->status.regs = register_block; + break; + case CXLDEV_CAP_CAP_ID_PRIMARY_MAILBOX: + dev_dbg(dev, "found Mailbox capability (0x%x)\n", + offset); + cxlm->mbox.regs = register_block; + break; + case CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX: + dev_dbg(dev, + "found Secondary Mailbox capability (0x%x)\n", + offset); + break; + case CXLDEV_CAP_CAP_ID_MEMDEV: + dev_dbg(dev, "found Memory Device capability (0x%x)\n", + offset); + cxlm->mem.regs = register_block; + break; + default: + dev_warn(dev, "Unknown cap ID: %d (0x%x)\n", cap_id, + offset); + break; + } + } + + if (!cxlm->status.regs || !cxlm->mbox.regs || !cxlm->mem.regs) { + dev_err(dev, "registers not found: %s%s%s\n", + !cxlm->status.regs ? "status " : "", + !cxlm->mbox.regs ? "mbox " : "", + !cxlm->mem.regs ? "mem" : ""); + return -ENXIO; + } + + return 0; +} + +static int cxl_mem_setup_mailbox(struct cxl_mem *cxlm) +{ + const int cap = cxl_read_mbox_reg32(cxlm, CXLDEV_MB_CAPS_OFFSET); + + cxlm->mbox.payload_size = + 1 << CXL_GET_FIELD(cap, CXLDEV_MB_CAP_PAYLOAD_SIZE); + + /* 8.2.8.4.3 */ + if (cxlm->mbox.payload_size < 256) { + dev_err(&cxlm->pdev->dev, "Mailbox is too small (%zub)", + cxlm->mbox.payload_size); + return -ENXIO; + } + + dev_dbg(&cxlm->pdev->dev, "Mailbox payload sized %zu", + cxlm->mbox.payload_size); + + return 0; +} + /** * cxl_mem_create() - Create a new &struct cxl_mem. * @pdev: The pci device associated with the new &struct cxl_mem. 
@@ -119,7 +212,14 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) } } - return rc; + if (rc) + return rc; + + rc = cxl_mem_setup_regs(cxlm); + if (rc) + return rc; + + return cxl_mem_setup_mailbox(cxlm); } static const struct pci_device_id cxl_mem_pci_tbl[] = { From patchwork Sat Jan 30 00:24:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188079 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 22430C433DB for ; Sat, 30 Jan 2021 00:28:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 999C864E04 for ; Sat, 30 Jan 2021 00:28:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232502AbhA3A1T (ORCPT ); Fri, 29 Jan 2021 19:27:19 -0500 Received: from mga01.intel.com ([192.55.52.88]:38336 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232173AbhA3A0F (ORCPT ); Fri, 29 Jan 2021 19:26:05 -0500 IronPort-SDR: XX3d8iNm45m5TGBqqwstphyu27C1WkQhVB0Ii0QQcipstvWOhlKFH3zIVz+a6FqXPwOd/sEdB0 u1Nk9oj+/l2w== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350684" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350684" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:47 -0800 IronPort-SDR: tXglMtC/DYykcCzZE83qqEmn+RrW3816Y7ZMC4UnJk8EONRICxy1PicM7p8VKywaIhbrD280Bm 9dK/V0KXE/Dg== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591660" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:46 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 04/14] cxl/mem: Implement polled mode mailbox Date: Fri, 29 Jan 2021 16:24:28 -0800 Message-Id: <20210130002438.1872527-5-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Provide enough functionality to utilize the mailbox of a memory device. The mailbox is used to interact with the firmware running on the memory device. The CXL specification defines separate capabilities for the mailbox and the memory device. The mailbox interface has a doorbell to indicate ready to accept commands and the memory device has a capability register that indicates the mailbox interface is ready. 
The expectation is that the doorbell-ready is always later than the memory-device-indication that the mailbox is ready. Create a function to handle sending a command, optionally with a payload, to the memory device, polling on a result, and then optionally copying out the payload. The algorithm for doing this comes straight out of the CXL 2.0 specification. Primary mailboxes are capable of generating an interrupt when submitting a command in the background. That implementation is saved for a later time. Secondary mailboxes aren't implemented at this time. The flow is proven with one implemented command, "identify". Because the class code has already told the driver this is a memory device and the identify command is mandatory. Signed-off-by: Ben Widawsky --- drivers/cxl/Kconfig | 14 ++ drivers/cxl/cxl.h | 39 +++++ drivers/cxl/mem.c | 342 +++++++++++++++++++++++++++++++++++++++++++- 3 files changed, 394 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/Kconfig b/drivers/cxl/Kconfig index 3b66b46af8a0..fe591f74af96 100644 --- a/drivers/cxl/Kconfig +++ b/drivers/cxl/Kconfig @@ -32,4 +32,18 @@ config CXL_MEM Chapter 2.3 Type 3 CXL Device in the CXL 2.0 specification. If unsure say 'm'. + +config CXL_MEM_INSECURE_DEBUG + bool "CXL.mem debugging" + depends on CXL_MEM + help + Enable debug of all CXL command payloads. + + Some CXL devices and controllers support encryption and other + security features. The payloads for the commands that enable + those features may contain sensitive clear-text security + material. Disable debug of those command payloads by default. + If you are a kernel developer actively working on CXL + security enabling say Y, otherwise say N. + endif diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index a3da7f8050c4..df3d97154b63 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -31,9 +31,36 @@ #define CXLDEV_MB_CAPS_OFFSET 0x00 #define CXLDEV_MB_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0) #define CXLDEV_MB_CTRL_OFFSET 0x04 +#define CXLDEV_MB_CTRL_DOORBELL BIT(0) #define CXLDEV_MB_CMD_OFFSET 0x08 +#define CXLDEV_MB_CMD_COMMAND_OPCODE_MASK GENMASK(15, 0) +#define CXLDEV_MB_CMD_PAYLOAD_LENGTH_MASK GENMASK(36, 16) #define CXLDEV_MB_STATUS_OFFSET 0x10 +#define CXLDEV_MB_STATUS_RET_CODE_MASK GENMASK(47, 32) #define CXLDEV_MB_BG_CMD_STATUS_OFFSET 0x18 +#define CXLDEV_MB_PAYLOAD_OFFSET 0x20 + +/* Memory Device (CXL 2.0 - 8.2.8.5.1.1) */ +#define CXLMDEV_STATUS_OFFSET 0x0 +#define CXLMDEV_DEV_FATAL BIT(0) +#define CXLMDEV_FW_HALT BIT(1) +#define CXLMDEV_STATUS_MEDIA_STATUS_MASK GENMASK(3, 2) +#define CXLMDEV_MS_NOT_READY 0 +#define CXLMDEV_MS_READY 1 +#define CXLMDEV_MS_ERROR 2 +#define CXLMDEV_MS_DISABLED 3 +#define CXLMDEV_READY(status) \ + (CXL_GET_FIELD(status, CXLMDEV_STATUS_MEDIA_STATUS) == CXLMDEV_MS_READY) +#define CXLMDEV_MBOX_IF_READY BIT(4) +#define CXLMDEV_RESET_NEEDED_SHIFT 5 +#define CXLMDEV_RESET_NEEDED_MASK GENMASK(7, 5) +#define CXLMDEV_RESET_NEEDED_NOT 0 +#define CXLMDEV_RESET_NEEDED_COLD 1 +#define CXLMDEV_RESET_NEEDED_WARM 2 +#define CXLMDEV_RESET_NEEDED_HOT 3 +#define CXLMDEV_RESET_NEEDED_CXL 4 +#define CXLMDEV_RESET_NEEDED(status) \ + (CXL_GET_FIELD(status, CXLMDEV_RESET_NEEDED) != CXLMDEV_RESET_NEEDED_NOT) /** * struct cxl_mem - A CXL memory device @@ -44,6 +71,16 @@ struct cxl_mem { struct pci_dev *pdev; void __iomem *regs; + struct { + struct range range; + } pmem; + + struct { + struct range range; + } ram; + + char firmware_version[0x10]; + /* Cap 0001h - CXL_CAP_CAP_ID_DEVICE_STATUS */ struct { void __iomem *regs; @@ -51,6 +88,7 @@ struct cxl_mem { 
/* Cap 0002h - CXL_CAP_CAP_ID_PRIMARY_MAILBOX */ struct { + struct mutex mutex; /* Protects device mailbox and firmware */ void __iomem *regs; size_t payload_size; } mbox; @@ -89,5 +127,6 @@ struct cxl_mem { cxl_reg(status); cxl_reg(mbox); +cxl_reg(mem); #endif /* __CXL_H__ */ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index fa14d51243ee..69ed15bfa5d4 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -6,6 +6,270 @@ #include "pci.h" #include "cxl.h" +#define cxl_doorbell_busy(cxlm) \ + (cxl_read_mbox_reg32(cxlm, CXLDEV_MB_CTRL_OFFSET) & \ + CXLDEV_MB_CTRL_DOORBELL) + +#define CXL_MAILBOX_TIMEOUT_US 2000 + +enum opcode { + CXL_MBOX_OP_IDENTIFY = 0x4000, + CXL_MBOX_OP_MAX = 0x10000 +}; + +/** + * struct mbox_cmd - A command to be submitted to hardware. + * @opcode: (input) The command set and command submitted to hardware. + * @payload_in: (input) Pointer to the input payload. + * @payload_out: (output) Pointer to the output payload. Must be allocated by + * the caller. + * @size_in: (input) Number of bytes to load from @payload. + * @size_out: (output) Number of bytes loaded into @payload. + * @return_code: (output) Error code returned from hardware. + * + * This is the primary mechanism used to send commands to the hardware. + * All the fields except @payload_* correspond exactly to the fields described in + * Command Register section of the CXL 2.0 spec (8.2.8.4.5). @payload_in and + * @payload_out are written to, and read from the Command Payload Registers + * defined in (8.2.8.4.8). + */ +struct mbox_cmd { + u16 opcode; + void *payload_in; + void *payload_out; + size_t size_in; + size_t size_out; + u16 return_code; +#define CXL_MBOX_SUCCESS 0 +}; + +static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm) +{ + const int timeout = msecs_to_jiffies(CXL_MAILBOX_TIMEOUT_US); + const unsigned long start = jiffies; + unsigned long end = start; + + while (cxl_doorbell_busy(cxlm)) { + end = jiffies; + + if (time_after(end, start + timeout)) { + /* Check again in case preempted before timeout test */ + if (!cxl_doorbell_busy(cxlm)) + break; + return -ETIMEDOUT; + } + cpu_relax(); + } + + dev_dbg(&cxlm->pdev->dev, "Doorbell wait took %dms", + jiffies_to_msecs(end) - jiffies_to_msecs(start)); + return 0; +} + +static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm, + struct mbox_cmd *mbox_cmd) +{ + dev_warn(&cxlm->pdev->dev, "Mailbox command timed out\n"); + dev_info(&cxlm->pdev->dev, + "\topcode: 0x%04x\n" + "\tpayload size: %zub\n", + mbox_cmd->opcode, mbox_cmd->size_in); + + if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) { + print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1, + mbox_cmd->payload_in, mbox_cmd->size_in, + true); + } + + /* Here's a good place to figure out if a device reset is needed */ +} + +/** + * cxl_mem_mbox_send_cmd() - Send a mailbox command to a memory device. + * @cxlm: The CXL memory device to communicate with. + * @mbox_cmd: Command to send to the memory device. + * + * Context: Any context. Expects mbox_lock to be held. + * Return: -ETIMEDOUT if timeout occurred waiting for completion. 0 on success. + * Caller should check the return code in @mbox_cmd to make sure it + * succeeded. + * + * This is a generic form of the CXL mailbox send command, thus the only I/O + * operations used are cxl_read_mbox_reg(). Memory devices, and perhaps other + * types of CXL devices may have further information available upon error + * conditions. + * + * The CXL spec allows for up to two mailboxes. 
The intention is for the primary + * mailbox to be OS controlled and the secondary mailbox to be used by system + * firmware. This allows the OS and firmware to communicate with the device and + * not need to coordinate with each other. The driver only uses the primary + * mailbox. + */ +static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, + struct mbox_cmd *mbox_cmd) +{ + void __iomem *payload = cxlm->mbox.regs + CXLDEV_MB_PAYLOAD_OFFSET; + u64 cmd_reg, status_reg; + size_t out_len; + int rc; + + lockdep_assert_held(&cxlm->mbox.mutex); + + /* + * Here are the steps from 8.2.8.4 of the CXL 2.0 spec. + * 1. Caller reads MB Control Register to verify doorbell is clear + * 2. Caller writes Command Register + * 3. Caller writes Command Payload Registers if input payload is non-empty + * 4. Caller writes MB Control Register to set doorbell + * 5. Caller either polls for doorbell to be clear or waits for interrupt if configured + * 6. Caller reads MB Status Register to fetch Return code + * 7. If command successful, Caller reads Command Register to get Payload Length + * 8. If output payload is non-empty, host reads Command Payload Registers + * + * Hardware is free to do whatever it wants before the doorbell is + * rung, and isn't allowed to change anything after it clears the + * doorbell. As such, steps 2 and 3 can happen in any order, and steps + * 6, 7, 8 can also happen in any order (though some orders might not + * make sense). + */ + + /* #1 */ + if (cxl_doorbell_busy(cxlm)) { + dev_err_ratelimited(&cxlm->pdev->dev, + "Mailbox re-busy after acquiring\n"); + return -EBUSY; + } + + cmd_reg = CXL_SET_FIELD(mbox_cmd->opcode, CXLDEV_MB_CMD_COMMAND_OPCODE); + if (mbox_cmd->size_in) { + if (WARN_ON(!mbox_cmd->payload_in)) + return -EINVAL; + + cmd_reg |= CXL_SET_FIELD(mbox_cmd->size_in, + CXLDEV_MB_CMD_PAYLOAD_LENGTH); + memcpy_toio(payload, mbox_cmd->payload_in, mbox_cmd->size_in); + } + + /* #2, #3 */ + cxl_write_mbox_reg64(cxlm, CXLDEV_MB_CMD_OFFSET, cmd_reg); + + /* #4 */ + dev_dbg(&cxlm->pdev->dev, "Sending command\n"); + cxl_write_mbox_reg32(cxlm, CXLDEV_MB_CTRL_OFFSET, + CXLDEV_MB_CTRL_DOORBELL); + + /* #5 */ + rc = cxl_mem_wait_for_doorbell(cxlm); + if (rc == -ETIMEDOUT) { + cxl_mem_mbox_timeout(cxlm, mbox_cmd); + return rc; + } + + /* #6 */ + status_reg = cxl_read_mbox_reg64(cxlm, CXLDEV_MB_STATUS_OFFSET); + mbox_cmd->return_code = + CXL_GET_FIELD(status_reg, CXLDEV_MB_STATUS_RET_CODE); + + if (mbox_cmd->return_code != 0) { + dev_dbg(&cxlm->pdev->dev, "Mailbox operation had an error\n"); + return 0; + } + + /* #7 */ + cmd_reg = cxl_read_mbox_reg64(cxlm, CXLDEV_MB_CMD_OFFSET); + out_len = CXL_GET_FIELD(cmd_reg, CXLDEV_MB_CMD_PAYLOAD_LENGTH); + + /* #8 */ + if (out_len && mbox_cmd->payload_out) + memcpy_fromio(mbox_cmd->payload_out, payload, out_len); + + mbox_cmd->size_out = out_len; + + return 0; +} + +/** + * cxl_mem_mbox_get() - Acquire exclusive access to the mailbox. + * @cxlm: The memory device to gain access to. + * + * Context: Any context. Takes the mbox_lock. + * Return: 0 if exclusive access was acquired. + */ +static int cxl_mem_mbox_get(struct cxl_mem *cxlm) +{ + struct device *dev = &cxlm->pdev->dev; + int rc = -EBUSY; + u64 md_status; + + mutex_lock_io(&cxlm->mbox.mutex); + + /* + * XXX: There is some amount of ambiguity in the 2.0 version of the spec + * around the mailbox interface ready (8.2.8.5.1.1). The purpose of the + * bit is to allow firmware running on the device to notify the driver + * that it's ready to receive commands. 
It is unclear if the bit needs + * to be read for each transaction mailbox, ie. the firmware can switch + * it on and off as needed. Second, there is no defined timeout for + * mailbox ready, like there is for the doorbell interface. + * + * Assumptions: + * 1. The firmware might toggle the Mailbox Interface Ready bit, check + * it for every command. + * + * 2. If the doorbell is clear, the firmware should have first set the + * Mailbox Interface Ready bit. Therefore, waiting for the doorbell + * to be ready is sufficient. + */ + rc = cxl_mem_wait_for_doorbell(cxlm); + if (rc) { + dev_warn(dev, "Mailbox interface not ready\n"); + goto out; + } + + md_status = cxl_read_mem_reg64(cxlm, CXLMDEV_STATUS_OFFSET); + if (!(md_status & CXLMDEV_MBOX_IF_READY && CXLMDEV_READY(md_status))) { + dev_err(dev, + "mbox: reported doorbell ready, but not mbox ready\n"); + goto out; + } + + /* + * Hardware shouldn't allow a ready status but also have failure bits + * set. Spit out an error, this should be a bug report + */ + rc = -EFAULT; + if (md_status & CXLMDEV_DEV_FATAL) { + dev_err(dev, "mbox: reported ready, but fatal\n"); + goto out; + } + if (md_status & CXLMDEV_FW_HALT) { + dev_err(dev, "mbox: reported ready, but halted\n"); + goto out; + } + if (CXLMDEV_RESET_NEEDED(md_status)) { + dev_err(dev, "mbox: reported ready, but reset needed\n"); + goto out; + } + + /* with lock held */ + return 0; + +out: + mutex_unlock(&cxlm->mbox.mutex); + return rc; +} + +/** + * cxl_mem_mbox_put() - Release exclusive access to the mailbox. + * @cxlm: The CXL memory device to communicate with. + * + * Context: Any context. Expects mbox_lock to be held. + */ +static void cxl_mem_mbox_put(struct cxl_mem *cxlm) +{ + mutex_unlock(&cxlm->mbox.mutex); +} + /** * cxl_mem_setup_regs() - Setup necessary MMIO. * @cxlm: The CXL memory device to communicate with. @@ -142,6 +406,8 @@ static struct cxl_mem *cxl_mem_create(struct pci_dev *pdev, u32 reg_lo, return NULL; } + mutex_init(&cxlm->mbox.mutex); + regs = pcim_iomap_table(pdev)[bar]; cxlm->pdev = pdev; cxlm->regs = regs + offset; @@ -174,6 +440,76 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) return 0; } +/** + * cxl_mem_identify() - Send the IDENTIFY command to the device. + * @cxlm: The device to identify. + * + * Return: 0 if identify was executed successfully. + * + * This will dispatch the identify command to the device and on success populate + * structures to be exported to sysfs. + */ +static int cxl_mem_identify(struct cxl_mem *cxlm) +{ + struct cxl_mbox_identify { + char fw_revision[0x10]; + __le64 total_capacity; + __le64 volatile_capacity; + __le64 persistent_capacity; + __le64 partition_align; + __le16 info_event_log_size; + __le16 warning_event_log_size; + __le16 failure_event_log_size; + __le16 fatal_event_log_size; + __le32 lsa_size; + u8 poison_list_max_mer[3]; + __le16 inject_poison_limit; + u8 poison_caps; + u8 qos_telemetry_caps; + } __packed id; + struct mbox_cmd mbox_cmd; + int rc; + + /* Retrieve initial device memory map */ + rc = cxl_mem_mbox_get(cxlm); + if (rc) + return rc; + + mbox_cmd = (struct mbox_cmd){ + .opcode = CXL_MBOX_OP_IDENTIFY, + .payload_out = &id, + .size_in = 0, + }; + rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd); + cxl_mem_mbox_put(cxlm); + if (rc) + return rc; + + /* TODO: Handle retry or reset responses from firmware. 
*/ + if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) { + dev_err(&cxlm->pdev->dev, "Mailbox command failed (%d)\n", + mbox_cmd.return_code); + return -ENXIO; + } + + if (mbox_cmd.size_out != sizeof(id)) + return -ENXIO; + + /* + * TODO: enumerate DPA map, as 'ram' and 'pmem' do not alias. + * For now, only the capacity is exported in sysfs + */ + cxlm->ram.range.start = 0; + cxlm->ram.range.end = le64_to_cpu(id.volatile_capacity) - 1; + + cxlm->pmem.range.start = 0; + cxlm->pmem.range.end = le64_to_cpu(id.persistent_capacity) - 1; + + memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision)); + + return rc; +} + static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct device *dev = &pdev->dev; @@ -219,7 +555,11 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; - return cxl_mem_setup_mailbox(cxlm); + rc = cxl_mem_setup_mailbox(cxlm); + if (rc) + return rc; + + return cxl_mem_identify(cxlm); } static const struct pci_device_id cxl_mem_pci_tbl[] = { From patchwork Sat Jan 30 00:24:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188095 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9A9D9C433DB for ; Sat, 30 Jan 2021 10:22:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3EDE264E08 for ; Sat, 30 Jan 2021 10:22:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231235AbhA3KVp (ORCPT ); Sat, 30 Jan 2021 05:21:45 -0500 Received: from mga01.intel.com ([192.55.52.88]:38334 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232403AbhA3A1G (ORCPT ); Fri, 29 Jan 2021 19:27:06 -0500 IronPort-SDR: F5GinMUw5N6f9I7HlUKVUdMdRh8QAdYjg30If/PvlWQNLKhDKul3Q0NwulWyL4Vg5r+Df1ebEs 1eBWokVuybRw== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350685" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350685" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:48 -0800 IronPort-SDR: CxK31VQz0ekAw2R5Kp4mCrhDRCFovTy0AvoKWq8QqEM37Pl2532USsvwN1jDu3VBw3l5VcPmPn koFL43y1ma7w== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591667" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:47 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Dan Williams , Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 05/14] cxl/mem: Register CXL memX devices Date: Fri, 29 Jan 2021 16:24:29 
-0800 Message-Id: <20210130002438.1872527-6-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Dan Williams Create the /sys/bus/cxl hierarchy to enumerate: * Memory Devices (per-endpoint control devices) * Memory Address Space Devices (platform address ranges with interleaving, performance, and persistence attributes) * Memory Regions (active provisioned memory from an address space device that is in use as System RAM or delegated to libnvdimm as Persistent Memory regions). For now, only the per-endpoint control devices are registered on the 'cxl' bus. However, going forward it will provide a mechanism to coordinate cross-device interleave. Signed-off-by: Dan Williams Signed-off-by: Ben Widawsky --- Documentation/ABI/testing/sysfs-bus-cxl | 26 ++ .../driver-api/cxl/memory-devices.rst | 17 + drivers/base/core.c | 14 + drivers/cxl/Makefile | 3 + drivers/cxl/bus.c | 29 ++ drivers/cxl/cxl.h | 4 + drivers/cxl/mem.c | 308 +++++++++++++++++- include/linux/device.h | 1 + 8 files changed, 400 insertions(+), 2 deletions(-) create mode 100644 Documentation/ABI/testing/sysfs-bus-cxl create mode 100644 drivers/cxl/bus.c diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl new file mode 100644 index 000000000000..fe7b87eba988 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-bus-cxl @@ -0,0 +1,26 @@ +What: /sys/bus/cxl/devices/memX/firmware_version +Date: December, 2020 +KernelVersion: v5.12 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) "FW Revision" string as reported by the Identify + Memory Device Output Payload in the CXL-2.0 + specification. + +What: /sys/bus/cxl/devices/memX/ram/size +Date: December, 2020 +KernelVersion: v5.12 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) "Volatile Only Capacity" as reported by the + Identify Memory Device Output Payload in the CXL-2.0 + specification. + +What: /sys/bus/cxl/devices/memX/pmem/size +Date: December, 2020 +KernelVersion: v5.12 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) "Persistent Only Capacity" as reported by the + Identify Memory Device Output Payload in the CXL-2.0 + specification. diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst index 43177e700d62..1bad466f9167 100644 --- a/Documentation/driver-api/cxl/memory-devices.rst +++ b/Documentation/driver-api/cxl/memory-devices.rst @@ -27,3 +27,20 @@ CXL Memory Device .. kernel-doc:: drivers/cxl/mem.c :internal: + +CXL Bus +------- +.. kernel-doc:: drivers/cxl/bus.c + :doc: cxl bus + +External Interfaces +=================== + +CXL IOCTL Interface +------------------- + +.. kernel-doc:: include/uapi/linux/cxl_mem.h + :doc: UAPI + +.. kernel-doc:: include/uapi/linux/cxl_mem.h + :internal: diff --git a/drivers/base/core.c b/drivers/base/core.c index 25e08e5f40bd..33432a4cbe23 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -3179,6 +3179,20 @@ struct device *get_device(struct device *dev) } EXPORT_SYMBOL_GPL(get_device); +/** + * get_live_device() - increment reference count for device iff !dead + * @dev: device. + * + * Forward the call to get_device() if the device is still alive. If + * this is called with the device_lock() held then the device is + * guaranteed to not die until the device_lock() is dropped. 
+ */ +struct device *get_live_device(struct device *dev) +{ + return dev && !dev->p->dead ? get_device(dev) : NULL; +} +EXPORT_SYMBOL_GPL(get_live_device); + /** * put_device - decrement reference count. * @dev: device in question. diff --git a/drivers/cxl/Makefile b/drivers/cxl/Makefile index 4a30f7c3fc4a..a314a1891f4d 100644 --- a/drivers/cxl/Makefile +++ b/drivers/cxl/Makefile @@ -1,4 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_CXL_BUS) += cxl_bus.o obj-$(CONFIG_CXL_MEM) += cxl_mem.o +ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CXL +cxl_bus-y := bus.o cxl_mem-y := mem.o diff --git a/drivers/cxl/bus.c b/drivers/cxl/bus.c new file mode 100644 index 000000000000..58f74796d525 --- /dev/null +++ b/drivers/cxl/bus.c @@ -0,0 +1,29 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2020 Intel Corporation. All rights reserved. */ +#include +#include + +/** + * DOC: cxl bus + * + * The CXL bus provides namespace for control devices and a rendezvous + * point for cross-device interleave coordination. + */ +struct bus_type cxl_bus_type = { + .name = "cxl", +}; +EXPORT_SYMBOL_GPL(cxl_bus_type); + +static __init int cxl_bus_init(void) +{ + return bus_register(&cxl_bus_type); +} + +static void cxl_bus_exit(void) +{ + bus_unregister(&cxl_bus_type); +} + +module_init(cxl_bus_init); +module_exit(cxl_bus_exit); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index df3d97154b63..b042eee7ee25 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -3,6 +3,7 @@ #ifndef __CXL_H__ #define __CXL_H__ +#include #include #include @@ -62,6 +63,7 @@ #define CXLMDEV_RESET_NEEDED(status) \ (CXL_GET_FIELD(status, CXLMDEV_RESET_NEEDED) != CXLMDEV_RESET_NEEDED_NOT) +struct cxl_memdev; /** * struct cxl_mem - A CXL memory device * @pdev: The PCI device associated with this CXL device. @@ -70,6 +72,7 @@ struct cxl_mem { struct pci_dev *pdev; void __iomem *regs; + struct cxl_memdev *cxlmd; struct { struct range range; @@ -129,4 +132,5 @@ cxl_reg(status); cxl_reg(mbox); cxl_reg(mem); +extern struct bus_type cxl_bus_type; #endif /* __CXL_H__ */ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 69ed15bfa5d4..f1f5c765623f 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -1,11 +1,36 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2020 Intel Corporation. All rights reserved. */ #include +#include +#include +#include #include #include #include "pci.h" #include "cxl.h" +/** + * DOC: cxl mem + * + * This implements a CXL memory device ("type-3") as it is defined by the + * Compute Express Link specification. + * + * The driver has several responsibilities, mainly: + * - Create the memX device and register on the CXL bus. + * - Enumerate device's register interface and map them. + * - Probe the device attributes to establish sysfs interface. + * - Provide an IOCTL interface to userspace to communicate with the device for + * things like firmware update. + * - Support management of interleave sets. + * - Handle and manage error conditions. 
+ */ + +/* + * An entire PCI topology full of devices should be enough for any + * config + */ +#define CXL_MEM_MAX_DEVS 65536 + #define cxl_doorbell_busy(cxlm) \ (cxl_read_mbox_reg32(cxlm, CXLDEV_MB_CTRL_OFFSET) & \ CXLDEV_MB_CTRL_DOORBELL) @@ -43,6 +68,27 @@ struct mbox_cmd { #define CXL_MBOX_SUCCESS 0 }; +/** + * struct cxl_memdev - CXL bus object representing a Type-3 Memory Device + * @dev: driver core device object + * @cdev: char dev core object for ioctl operations + * @cxlm: pointer to the parent device driver data + * @ops_active: active user of @cxlm in ops handlers + * @ops_dead: completion when all @cxlm ops users have exited + * @id: id number of this memdev instance. + */ +struct cxl_memdev { + struct device dev; + struct cdev cdev; + struct cxl_mem *cxlm; + struct percpu_ref ops_active; + struct completion ops_dead; + int id; +}; + +static int cxl_mem_major; +static DEFINE_IDA(cxl_memdev_ida); + static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm) { const int timeout = msecs_to_jiffies(CXL_MAILBOX_TIMEOUT_US); @@ -270,6 +316,40 @@ static void cxl_mem_mbox_put(struct cxl_mem *cxlm) mutex_unlock(&cxlm->mbox.mutex); } +static int cxl_memdev_open(struct inode *inode, struct file *file) +{ + struct cxl_memdev *cxlmd = + container_of(inode->i_cdev, typeof(*cxlmd), cdev); + + file->private_data = cxlmd; + + return 0; +} + +static long cxl_memdev_ioctl(struct file *file, unsigned int cmd, + unsigned long arg) +{ + struct cxl_memdev *cxlmd = file->private_data; + int rc = -ENOTTY; + + if (!percpu_ref_tryget_live(&cxlmd->ops_active)) + return -ENXIO; + + /* TODO: ioctl body */ + + percpu_ref_put(&cxlmd->ops_active); + + return rc; +} + +static const struct file_operations cxl_memdev_fops = { + .owner = THIS_MODULE, + .open = cxl_memdev_open, + .unlocked_ioctl = cxl_memdev_ioctl, + .compat_ioctl = compat_ptr_ioctl, + .llseek = noop_llseek, +}; + /** * cxl_mem_setup_regs() - Setup necessary MMIO. * @cxlm: The CXL memory device to communicate with. 
@@ -440,6 +520,197 @@ static int cxl_mem_dvsec(struct pci_dev *pdev, int dvsec) return 0; } +static struct cxl_memdev *to_cxl_memdev(struct device *dev) +{ + return container_of(dev, struct cxl_memdev, dev); +} + +static void cxl_memdev_release(struct device *dev) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + + percpu_ref_exit(&cxlmd->ops_active); + ida_free(&cxl_memdev_ida, cxlmd->id); + kfree(cxlmd); +} + +static char *cxl_memdev_devnode(struct device *dev, umode_t *mode, kuid_t *uid, + kgid_t *gid) +{ + return kasprintf(GFP_KERNEL, "cxl/%s", dev_name(dev)); +} + +static ssize_t firmware_version_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + + return sprintf(buf, "%.16s\n", cxlm->firmware_version); +} +static DEVICE_ATTR_RO(firmware_version); + +static ssize_t payload_max_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + + return sprintf(buf, "%zu\n", cxlm->mbox.payload_size); +} +static DEVICE_ATTR_RO(payload_max); + +static ssize_t ram_size_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + unsigned long long len = range_len(&cxlm->ram.range); + + return sprintf(buf, "%#llx\n", len); +} + +static struct device_attribute dev_attr_ram_size = + __ATTR(size, 0444, ram_size_show, NULL); + +static ssize_t pmem_size_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct cxl_memdev *cxlmd = to_cxl_memdev(dev); + struct cxl_mem *cxlm = cxlmd->cxlm; + unsigned long long len = range_len(&cxlm->pmem.range); + + return sprintf(buf, "%#llx\n", len); +} + +static struct device_attribute dev_attr_pmem_size = + __ATTR(size, 0444, pmem_size_show, NULL); + +static struct attribute *cxl_memdev_attributes[] = { + &dev_attr_firmware_version.attr, + &dev_attr_payload_max.attr, + NULL, +}; + +static struct attribute *cxl_memdev_pmem_attributes[] = { + &dev_attr_pmem_size.attr, + NULL, +}; + +static struct attribute *cxl_memdev_ram_attributes[] = { + &dev_attr_ram_size.attr, + NULL, +}; + +static struct attribute_group cxl_memdev_attribute_group = { + .attrs = cxl_memdev_attributes, +}; + +static struct attribute_group cxl_memdev_ram_attribute_group = { + .name = "ram", + .attrs = cxl_memdev_ram_attributes, +}; + +static struct attribute_group cxl_memdev_pmem_attribute_group = { + .name = "pmem", + .attrs = cxl_memdev_pmem_attributes, +}; + +static const struct attribute_group *cxl_memdev_attribute_groups[] = { + &cxl_memdev_attribute_group, + &cxl_memdev_ram_attribute_group, + &cxl_memdev_pmem_attribute_group, + NULL, +}; + +static const struct device_type cxl_memdev_type = { + .name = "cxl_memdev", + .release = cxl_memdev_release, + .devnode = cxl_memdev_devnode, + .groups = cxl_memdev_attribute_groups, +}; + +static void cxlmdev_unregister(void *_cxlmd) +{ + struct cxl_memdev *cxlmd = _cxlmd; + struct device *dev = &cxlmd->dev; + + percpu_ref_kill(&cxlmd->ops_active); + cdev_device_del(&cxlmd->cdev, dev); + wait_for_completion(&cxlmd->ops_dead); + cxlmd->cxlm = NULL; + put_device(dev); +} + +static void cxlmdev_ops_active_release(struct percpu_ref *ref) +{ + struct cxl_memdev *cxlmd = + container_of(ref, typeof(*cxlmd), ops_active); + + complete(&cxlmd->ops_dead); +} + +static int cxl_mem_add_memdev(struct cxl_mem *cxlm) +{ + struct pci_dev 
*pdev = cxlm->pdev; + struct cxl_memdev *cxlmd; + struct device *dev; + struct cdev *cdev; + int rc; + + cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL); + if (!cxlmd) + return -ENOMEM; + init_completion(&cxlmd->ops_dead); + + /* + * @cxlm is deallocated when the driver unbinds so operations + * that are using it need to hold a live reference. + */ + cxlmd->cxlm = cxlm; + rc = percpu_ref_init(&cxlmd->ops_active, cxlmdev_ops_active_release, 0, + GFP_KERNEL); + if (rc) + goto err_ref; + + rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL); + if (rc < 0) + goto err_id; + cxlmd->id = rc; + + dev = &cxlmd->dev; + device_initialize(dev); + dev->parent = &pdev->dev; + dev->bus = &cxl_bus_type; + dev->devt = MKDEV(cxl_mem_major, cxlmd->id); + dev->type = &cxl_memdev_type; + dev_set_name(dev, "mem%d", cxlmd->id); + + cdev = &cxlmd->cdev; + cdev_init(cdev, &cxl_memdev_fops); + + rc = cdev_device_add(cdev, dev); + if (rc) + goto err_add; + + return devm_add_action_or_reset(dev->parent, cxlmdev_unregister, cxlmd); + +err_add: + ida_free(&cxl_memdev_ida, cxlmd->id); +err_id: + /* + * Theoretically userspace could have already entered the fops, + * so flush ops_active. + */ + percpu_ref_kill(&cxlmd->ops_active); + wait_for_completion(&cxlmd->ops_dead); + percpu_ref_exit(&cxlmd->ops_active); +err_ref: + kfree(cxlmd); + + return rc; +} + /** * cxl_mem_identify() - Send the IDENTIFY command to the device. * @cxlm: The device to identify. @@ -559,7 +830,11 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; - return cxl_mem_identify(cxlm); + rc = cxl_mem_identify(cxlm); + if (rc) + return rc; + + return cxl_mem_add_memdev(cxlm); } static const struct pci_device_id cxl_mem_pci_tbl[] = { @@ -576,5 +851,34 @@ static struct pci_driver cxl_mem_driver = { .probe = cxl_mem_probe, }; +static __init int cxl_mem_init(void) +{ + int rc; + dev_t devt; + + rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl"); + if (rc) + return rc; + + cxl_mem_major = MAJOR(devt); + + rc = pci_register_driver(&cxl_mem_driver); + if (rc) { + unregister_chrdev_region(MKDEV(cxl_mem_major, 0), + CXL_MEM_MAX_DEVS); + return rc; + } + + return 0; +} + +static __exit void cxl_mem_exit(void) +{ + pci_unregister_driver(&cxl_mem_driver); + unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS); +} + MODULE_LICENSE("GPL v2"); -module_pci_driver(cxl_mem_driver); +module_init(cxl_mem_init); +module_exit(cxl_mem_exit); +MODULE_IMPORT_NS(CXL); diff --git a/include/linux/device.h b/include/linux/device.h index 89bb8b84173e..8659deee8ae6 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -895,6 +895,7 @@ extern int (*platform_notify_remove)(struct device *dev); * */ struct device *get_device(struct device *dev); +struct device *get_live_device(struct device *dev); void put_device(struct device *dev); bool kill_device(struct device *dev); From patchwork Sat Jan 30 00:24:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188099 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org 
[198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5250FC43381 for ; Sat, 30 Jan 2021 10:22:59 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0F41964E0C for ; Sat, 30 Jan 2021 10:22:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231812AbhA3KWk (ORCPT ); Sat, 30 Jan 2021 05:22:40 -0500 Received: from mga01.intel.com ([192.55.52.88]:38338 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232430AbhA3A1G (ORCPT ); Fri, 29 Jan 2021 19:27:06 -0500 IronPort-SDR: WcxZM6YGlZoZKByKq1x8f+Uf1s/1bunuJa2OBi+6FHzWDM0q1Z4Mjyv/bHrrKL6wfzl9oCglc4 NsLflRHEIC8g== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350686" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350686" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:48 -0800 IronPort-SDR: uddJezV4j4+/KYTSlrQGmcVKaRW0nX0miMzPqUrWk+tSlYehaPZAow6Xp4QNZ3em2SNgo1DdTJ tdumwq7kmeWw== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591672" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:48 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , kernel test robot , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 06/14] cxl/mem: Add basic IOCTL interface Date: Fri, 29 Jan 2021 16:24:30 -0800 Message-Id: <20210130002438.1872527-7-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Add a straightforward IOCTL that provides a mechanism for userspace to query the supported memory device commands. CXL commands as they appear to userspace are described as part of the UAPI kerneldoc. The command list returned via this IOCTL will contain the full set of commands that the driver supports, however, some of those commands may not be available for use by userspace. Memory device commands are specified in 8.2.9 of the CXL 2.0 specification. They are submitted through a mailbox mechanism specified in 8.2.8.4. 
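For illustration only (not part of this patch): a userspace consumer is expected to call the query ioctl twice, first with n_commands set to 0 to learn how many entries to allocate, then again with a buffer sized for that many cxl_command_info entries. A minimal sketch, assuming the UAPI header added below is installed as <linux/cxl_mem.h> and that the memdev registered earlier in this series shows up as /dev/cxl/mem0:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/cxl_mem.h>

int main(void)
{
	struct cxl_mem_query_commands *query;
	__u32 n, i;
	int fd;

	fd = open("/dev/cxl/mem0", O_RDWR);
	if (fd < 0)
		return 1;

	/* Pass 1: n_commands == 0 asks only for the supported count. */
	query = calloc(1, sizeof(*query));
	if (!query || ioctl(fd, CXL_MEM_QUERY_COMMANDS, query) < 0)
		return 1;
	n = query->n_commands;
	free(query);

	/* Pass 2: allocate room for n entries and fetch their info. */
	query = calloc(1, sizeof(*query) + n * sizeof(query->commands[0]));
	if (!query)
		return 1;
	query->n_commands = n;
	if (ioctl(fd, CXL_MEM_QUERY_COMMANDS, query) < 0)
		return 1;

	for (i = 0; i < n; i++)
		printf("id %u flags %#x size_in %d size_out %d\n",
		       query->commands[i].id, query->commands[i].flags,
		       query->commands[i].size_in, query->commands[i].size_out);

	free(query);
	close(fd);
	return 0;
}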
Reported-by: kernel test robot # bug in earlier revision Signed-off-by: Ben Widawsky --- .clang-format | 1 + .../userspace-api/ioctl/ioctl-number.rst | 1 + drivers/cxl/mem.c | 152 +++++++++++++++++- include/uapi/linux/cxl_mem.h | 119 ++++++++++++++ 4 files changed, 271 insertions(+), 2 deletions(-) create mode 100644 include/uapi/linux/cxl_mem.h diff --git a/.clang-format b/.clang-format index 10dc5a9a61b3..3f11c8901b43 100644 --- a/.clang-format +++ b/.clang-format @@ -109,6 +109,7 @@ ForEachMacros: - 'css_for_each_child' - 'css_for_each_descendant_post' - 'css_for_each_descendant_pre' + - 'cxl_for_each_cmd' - 'device_for_each_child_node' - 'dma_fence_chain_for_each' - 'do_for_each_ftrace_op' diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst b/Documentation/userspace-api/ioctl/ioctl-number.rst index a4c75a28c839..6eb8e634664d 100644 --- a/Documentation/userspace-api/ioctl/ioctl-number.rst +++ b/Documentation/userspace-api/ioctl/ioctl-number.rst @@ -352,6 +352,7 @@ Code Seq# Include File Comments 0xCC 00-0F drivers/misc/ibmvmc.h pseries VMC driver 0xCD 01 linux/reiserfs_fs.h +0xCE 01-02 uapi/linux/cxl_mem.h Compute Express Link Memory Devices 0xCF 02 fs/cifs/ioctl.c 0xDB 00-0F drivers/char/mwave/mwavepub.h 0xDD 00-3F ZFCP device driver see drivers/s390/scsi/ diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index f1f5c765623f..3c3ff45f01c0 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2020 Intel Corporation. All rights reserved. */ +#include #include #include #include @@ -38,6 +39,7 @@ #define CXL_MAILBOX_TIMEOUT_US 2000 enum opcode { + CXL_MBOX_OP_INVALID = 0x0000, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_MAX = 0x10000 }; @@ -89,6 +91,72 @@ struct cxl_memdev { static int cxl_mem_major; static DEFINE_IDA(cxl_memdev_ida); +/** + * struct cxl_mem_command - Driver representation of a memory device command + * @info: Command information as it exists for the UAPI + * @opcode: The actual bits used for the mailbox protocol + * @flags: Set of flags reflecting the state of the command. + * + * * %CXL_CMD_INTERNAL_FLAG_HIDDEN: Command is hidden from userspace. This + * would typically be used for deprecated commands. + * * %CXL_CMD_FLAG_MANDATORY: Hardware must support this command. This flag is + * only used internally by the driver for sanity checking. + * + * The cxl_mem_command is the driver's internal representation of commands that + * are supported by the driver. Some of these commands may not be supported by + * the hardware. The driver will use @info to validate the fields passed in by + * the user then submit the @opcode to the hardware. + * + * See struct cxl_command_info. + */ +struct cxl_mem_command { + const struct cxl_command_info info; + enum opcode opcode; + u32 flags; +#define CXL_CMD_INTERNAL_FLAG_NONE 0 +#define CXL_CMD_INTERNAL_FLAG_HIDDEN BIT(0) +#define CXL_CMD_INTERNAL_FLAG_MANDATORY BIT(1) +}; + +#define CXL_CMD(_id, _flags, sin, sout, f) \ + [CXL_MEM_COMMAND_ID_##_id] = { \ + .info = { \ + .id = CXL_MEM_COMMAND_ID_##_id, \ + .flags = CXL_MEM_COMMAND_FLAG_##_flags, \ + .size_in = sin, \ + .size_out = sout, \ + }, \ + .flags = CXL_CMD_INTERNAL_FLAG_##f, \ + .opcode = CXL_MBOX_OP_##_id, \ + } + +/* + * This table defines the supported mailbox commands for the driver. This table + * is made up of a UAPI structure. Non-negative values as parameters in the + * table will be validated against the user's input. 
For example, if size_in is + * 0, and the user passed in 1, it is an error. + */ +static struct cxl_mem_command mem_commands[] = { + CXL_CMD(INVALID, KERNEL, 0, 0, HIDDEN), + CXL_CMD(IDENTIFY, NONE, 0, 0x43, MANDATORY), +}; + +#define cxl_for_each_cmd(cmd) \ + for ((cmd) = &mem_commands[0]; \ + ((cmd) - mem_commands) < ARRAY_SIZE(mem_commands); (cmd)++) + +static inline struct cxl_mem_command *cxl_mem_find_command(u16 opcode) +{ + struct cxl_mem_command *c; + + cxl_for_each_cmd(c) { + if (c->opcode == opcode) + return c; + } + + return NULL; +} + static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm) { const int timeout = msecs_to_jiffies(CXL_MAILBOX_TIMEOUT_US); @@ -155,6 +223,7 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, struct mbox_cmd *mbox_cmd) { void __iomem *payload = cxlm->mbox.regs + CXLDEV_MB_PAYLOAD_OFFSET; + const struct cxl_mem_command *cmd; u64 cmd_reg, status_reg; size_t out_len; int rc; @@ -179,6 +248,13 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, * make sense). */ + cmd = cxl_mem_find_command(mbox_cmd->opcode); + if (!cmd) { + dev_info(&cxlm->pdev->dev, + "Unknown opcode 0x%04x being sent to hardware\n", + mbox_cmd->opcode); + } + /* #1 */ if (cxl_doorbell_busy(cxlm)) { dev_err_ratelimited(&cxlm->pdev->dev, @@ -225,6 +301,19 @@ static int cxl_mem_mbox_send_cmd(struct cxl_mem *cxlm, cmd_reg = cxl_read_mbox_reg64(cxlm, CXLDEV_MB_CMD_OFFSET); out_len = CXL_GET_FIELD(cmd_reg, CXLDEV_MB_CMD_PAYLOAD_LENGTH); + /* + * If the command had a fixed size output, but the hardware did + * something unexpected, just print an error and move on. It would be + * worth sending a bug report. + */ + if (cmd && cmd->info.size_out >= 0 && out_len != cmd->info.size_out) { + bool too_big = out_len > cmd->info.size_out; + + dev_err(&cxlm->pdev->dev, + "payload was %s than driver expectations\n", + too_big ? "larger" : "smaller"); + } + /* #8 */ if (out_len && mbox_cmd->payload_out) memcpy_fromio(mbox_cmd->payload_out, payload, out_len); @@ -326,16 +415,75 @@ static int cxl_memdev_open(struct inode *inode, struct file *file) return 0; } +static int cxl_mem_count_commands(void) +{ + struct cxl_mem_command *c; + int n = 0; + + cxl_for_each_cmd(c) { + if (c->flags & CXL_CMD_INTERNAL_FLAG_HIDDEN) + continue; + n++; + } + + return n; +} + +static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd, + unsigned long arg) +{ + struct device *dev = &cxlmd->dev; + + if (cmd == CXL_MEM_QUERY_COMMANDS) { + struct cxl_mem_query_commands __user *q = (void __user *)arg; + struct cxl_mem_command *cmd; + u32 n_commands; + int j = 0; + + dev_dbg(dev, "Query IOCTL\n"); + + if (get_user(n_commands, &q->n_commands)) + return -EFAULT; + + /* returns the total number if 0 elements are requested. */ + if (n_commands == 0) + return put_user(cxl_mem_count_commands(), + &q->n_commands); + + /* + * otherwise, return max(n_commands, total commands) + * cxl_command_info structures. 
+ */ + cxl_for_each_cmd(cmd) { + const struct cxl_command_info *info = &cmd->info; + + if (cmd->flags & CXL_CMD_INTERNAL_FLAG_HIDDEN) + continue; + + if (copy_to_user(&q->commands[j++], info, + sizeof(*info))) + return -EFAULT; + + if (j == n_commands) + break; + } + + return 0; + } + + return -ENOTTY; +} + static long cxl_memdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { struct cxl_memdev *cxlmd = file->private_data; - int rc = -ENOTTY; + int rc; if (!percpu_ref_tryget_live(&cxlmd->ops_active)) return -ENXIO; - /* TODO: ioctl body */ + rc = __cxl_memdev_ioctl(cxlmd, cmd, arg); percpu_ref_put(&cxlmd->ops_active); diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h new file mode 100644 index 000000000000..70e3ba2fa008 --- /dev/null +++ b/include/uapi/linux/cxl_mem.h @@ -0,0 +1,119 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* + * CXL IOCTLs for Memory Devices + */ + +#ifndef _UAPI_CXL_MEM_H_ +#define _UAPI_CXL_MEM_H_ + +#if defined(__cplusplus) +extern "C" { +#endif + +#include + +/** + * DOC: UAPI + * + * CXL memory devices expose UAPI to have a standard user interface. + * Userspace can refer to these structure definitions and UAPI formats + * to communicate to driver. The commands themselves are somewhat obfuscated + * with macro magic. They have the form CXL_MEM_COMMAND_ID_. + * + * For example "CXL_MEM_COMMAND_ID_INVALID" + * + * Not all of all commands that the driver supports are always available for use + * by userspace. Userspace must check the results from the QUERY command in + * order to determine the live set of commands. + */ + +#define CXL_MEM_QUERY_COMMANDS _IOR(0xCE, 1, struct cxl_mem_query_commands) + +#define CXL_CMDS \ + ___C(INVALID, "Invalid Command"), \ + ___C(IDENTIFY, "Identify Command"), \ + ___C(MAX, "Last command") + +#define ___C(a, b) CXL_MEM_COMMAND_ID_##a +enum { CXL_CMDS }; + +#undef ___C + +/** + * struct cxl_command_info - Command information returned from a query. + * @id: ID number for the command. + * @flags: Flags that specify command behavior. + * + * * %CXL_MEM_COMMAND_FLAG_KERNEL: This command is reserved for exclusive + * kernel use. + * * %CXL_MEM_COMMAND_FLAG_MUTEX: This command may require coordination with + * the kernel in order to complete successfully. + * + * @size_in: Expected input size, or -1 if variable length. + * @size_out: Expected output size, or -1 if variable length. + * + * Represents a single command that is supported by both the driver and the + * hardware. This is returned as part of an array from the query ioctl. The + * following would be a command named "foobar" that takes a variable length + * input and returns 0 bytes of output. + * + * - @id = 10 + * - @flags = CXL_MEM_COMMAND_FLAG_MUTEX + * - @size_in = -1 + * - @size_out = 0 + * + * See struct cxl_mem_query_commands. + */ +struct cxl_command_info { + __u32 id; + + __u32 flags; +#define CXL_MEM_COMMAND_FLAG_NONE 0 +#define CXL_MEM_COMMAND_FLAG_KERNEL BIT(0) +#define CXL_MEM_COMMAND_FLAG_MUTEX BIT(1) + + __s32 size_in; + __s32 size_out; +}; + +/** + * struct cxl_mem_query_commands - Query supported commands. + * @n_commands: In/out parameter. When @n_commands is > 0, the driver will + * return min(num_support_commands, n_commands). When @n_commands + * is 0, driver will return the number of total supported commands. + * @rsvd: Reserved for future use. + * @commands: Output array of supported commands. 
This array must be allocated + * by userspace to be at least min(num_support_commands, @n_commands) + * + * Allow userspace to query the available commands supported by both the driver, + * and the hardware. Commands that aren't supported by either the driver, or the + * hardware are not returned in the query. + * + * Examples: + * + * - { .n_commands = 0 } // Get number of supported commands + * - { .n_commands = 15, .commands = buf } // Return first 15 (or less) + * supported commands + * + * See struct cxl_command_info. + */ +struct cxl_mem_query_commands { + /* + * Input: Number of commands to return (space allocated by user) + * Output: Number of commands supported by the driver/hardware + * + * If n_commands is 0, kernel will only return number of commands and + * not try to populate commands[], thus allowing userspace to know how + * much space to allocate + */ + __u32 n_commands; + __u32 rsvd; + + struct cxl_command_info __user commands[]; /* out: supported commands */ +}; + +#if defined(__cplusplus) +} +#endif + +#endif From patchwork Sat Jan 30 00:24:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188097 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D9114C433E0 for ; Sat, 30 Jan 2021 10:22:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 79E4564DE4 for ; Sat, 30 Jan 2021 10:22:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231752AbhA3KWj (ORCPT ); Sat, 30 Jan 2021 05:22:39 -0500 Received: from mga01.intel.com ([192.55.52.88]:38336 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232484AbhA3A1G (ORCPT ); Fri, 29 Jan 2021 19:27:06 -0500 IronPort-SDR: c0PDrxOukNoAm7UBk8nhb3yoYdOnHyz6b2Un0/C4wQze2rB4OYRLY8k1EBk968biA4oO0Gr1Un ayQJC1nSEpsA== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350688" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350688" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:49 -0800 IronPort-SDR: VOBcOt1l54DxNAotpSMWk8CHiB+5zPh+jPg2DmJfKBiOepFTqkMIDXSeROM+ZXZJh+Y2zU5Vun wXHu8ircr5+w== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591678" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:48 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 07/14] cxl/mem: Add send command Date: Fri, 29 Jan 2021 16:24:31 -0800 Message-Id: 
<20210130002438.1872527-8-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org The send command allows userspace to issue mailbox commands directly to the hardware. The driver will verify basic properties of the command and possible inspect the input (or output) payload to determine whether or not the command is allowed (or might taint the kernel). The list of allowed commands and their properties can be determined by using the QUERY IOCTL for CXL memory devices. Signed-off-by: Ben Widawsky --- drivers/cxl/mem.c | 201 ++++++++++++++++++++++++++++++++++- include/uapi/linux/cxl_mem.h | 45 ++++++++ 2 files changed, 244 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 3c3ff45f01c0..c646f0a1cf66 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -126,8 +126,8 @@ struct cxl_mem_command { .size_in = sin, \ .size_out = sout, \ }, \ - .flags = CXL_CMD_INTERNAL_FLAG_##f, \ - .opcode = CXL_MBOX_OP_##_id, \ + .flags = CXL_CMD_INTERNAL_FLAG_##f, \ + .opcode = CXL_MBOX_OP_##_id, \ } /* @@ -427,6 +427,174 @@ static int cxl_mem_count_commands(void) } return n; +}; + +/** + * handle_mailbox_cmd_from_user() - Dispatch a mailbox command. + * @cxlmd: The CXL memory device to communicate with. + * @cmd: The validated command. + * @in_payload: Pointer to userspace's input payload. + * @out_payload: Pointer to userspace's output payload. + * @u: The command submitted by userspace. Has output fields. + * + * Return: + * * %0 - Mailbox transaction succeeded. + * * %-EFAULT - Something happened with copy_to/from_user. + * * %-ENOMEM - Couldn't allocate a bounce buffer. + * * %-EINTR - Mailbox acquisition interrupted. + * * %-E2BIG - Output payload would overrun user's buffer. + * + * Creates the appropriate mailbox command on behalf of a userspace request. + * Return value, size, and output payload are all copied out to @u. The + * parameters for the command must be validated before calling this function. + * + * A 0 return code indicates the command executed successfully, not that it was + * itself successful. IOW, the cmd->retval should always be checked if wanting + * to determine the actual result. 
+ */ +static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd, + const struct cxl_mem_command *cmd, + u64 in_payload, u64 out_payload, + struct cxl_send_command __user *u) +{ + struct cxl_mem *cxlm = cxlmd->cxlm; + struct mbox_cmd mbox_cmd = { + .opcode = cmd->opcode, + .payload_in = NULL, /* Populated with copy_from_user() */ + .payload_out = NULL, /* Read out by copy_to_user() */ + .size_in = cmd->info.size_in, + }; + s32 user_size_out; + int rc; + + if (get_user(user_size_out, &u->size_out)) + return -EFAULT; + + if (cmd->info.size_out > 0) /* fixed size command */ + mbox_cmd.payload_out = kvzalloc(cmd->info.size_out, GFP_KERNEL); + else if (cmd->info.size_out < 0) /* variable */ + mbox_cmd.payload_out = + kvzalloc(cxlm->mbox.payload_size, GFP_KERNEL); + + if (cmd->info.size_in) { + mbox_cmd.payload_in = kvzalloc(cmd->info.size_in, GFP_KERNEL); + if (!mbox_cmd.payload_in) { + rc = -ENOMEM; + goto out; + } + + if (copy_from_user(mbox_cmd.payload_in, + u64_to_user_ptr(in_payload), + cmd->info.size_in)) { + rc = -EFAULT; + goto out; + } + } + + rc = cxl_mem_mbox_get(cxlm); + if (rc) + goto out; + + dev_dbg(&cxlmd->dev, + "Submitting %s command for user\n" + "\topcode: %x\n" + "\tsize: %ub\n", + cxl_command_names[cmd->info.id].name, mbox_cmd.opcode, + cmd->info.size_in); + + rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd); + cxl_mem_mbox_put(cxlm); + if (rc) + goto out; + + rc = put_user(mbox_cmd.return_code, &u->retval); + if (rc) + goto out; + + if (user_size_out < mbox_cmd.size_out) { + rc = -E2BIG; + goto out; + } + + if (mbox_cmd.size_out) { + if (copy_to_user(u64_to_user_ptr(out_payload), + mbox_cmd.payload_out, mbox_cmd.size_out)) { + rc = -EFAULT; + goto out; + } + } + + rc = put_user(mbox_cmd.size_out, &u->size_out); + +out: + kvfree(mbox_cmd.payload_in); + kvfree(mbox_cmd.payload_out); + return rc; +} + +/** + * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND. + * @cxlm: &struct cxl_mem device whose mailbox will be used. + * @send_cmd: &struct cxl_send_command copied in from userspace. + * @out_cmd: Sanitized and populated &struct cxl_mem_command. + * + * Return: + * * %0 - @out_cmd is ready to send. + * * %-ENOTTY - Invalid command specified. + * * %-EINVAL - Reserved fields or invalid values were used. + * * %-EPERM - Attempted to use a protected command. + * * %-ENOMEM - Input or output buffer wasn't sized properly. + * + * The result of this command is a fully validated command in @out_cmd that is + * safe to send to the hardware. + * + * See handle_mailbox_cmd_from_user() + */ +static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm, + const struct cxl_send_command *send_cmd, + struct cxl_mem_command *out_cmd) +{ + const struct cxl_command_info *info; + struct cxl_mem_command *c; + + if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX) + return -ENOTTY; + + /* + * The user can never specify an input payload larger than + * hardware supports, but output can be arbitrarily large, + * simply write out as much data as the hardware provides. 
+ */ + if (send_cmd->size_in > cxlm->mbox.payload_size) + return -EINVAL; + + if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK) + return -EINVAL; + + if (send_cmd->rsvd) + return -EINVAL; + + /* Convert user's command into the internal representation */ + c = &mem_commands[send_cmd->id]; + info = &c->info; + + if (info->flags & CXL_MEM_COMMAND_FLAG_KERNEL) + return -EPERM; + + /* Check the input buffer is the expected size */ + if (info->size_in >= 0 && info->size_in != send_cmd->size_in) + return -ENOMEM; + + /* Check the output buffer is at least large enough */ + if (info->size_out >= 0 && send_cmd->size_out < info->size_out) + return -ENOMEM; + + /* Setting a few const fields here... */ + memcpy(out_cmd, c, sizeof(*c)); + *(s32 *)&out_cmd->info.size_in = send_cmd->size_in; + *(s32 *)&out_cmd->info.size_out = send_cmd->size_out; + + return 0; } static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd, @@ -469,6 +637,35 @@ static long __cxl_memdev_ioctl(struct cxl_memdev *cxlmd, unsigned int cmd, } return 0; + } else if (cmd == CXL_MEM_SEND_COMMAND) { + struct cxl_send_command send, __user *u = (void __user *)arg; + struct cxl_mem_command c; + int rc; + + dev_dbg(dev, "Send IOCTL\n"); + + if (copy_from_user(&send, u, sizeof(send))) + return -EFAULT; + + rc = device_lock_interruptible(dev); + if (rc) + return rc; + + if (!get_live_device(dev)) { + device_unlock(dev); + return -ENXIO; + } + + rc = cxl_validate_cmd_from_user(cxlmd->cxlm, &send, &c); + if (!rc) + rc = handle_mailbox_cmd_from_user(cxlmd, &c, + send.in_payload, + send.out_payload, u); + + put_device(dev); + device_unlock(dev); + + return rc; } return -ENOTTY; diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 70e3ba2fa008..9d865794a420 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -28,6 +28,7 @@ extern "C" { */ #define CXL_MEM_QUERY_COMMANDS _IOR(0xCE, 1, struct cxl_mem_query_commands) +#define CXL_MEM_SEND_COMMAND _IOWR(0xCE, 2, struct cxl_send_command) #define CXL_CMDS \ ___C(INVALID, "Invalid Command"), \ @@ -37,6 +38,11 @@ extern "C" { #define ___C(a, b) CXL_MEM_COMMAND_ID_##a enum { CXL_CMDS }; +#undef ___C +#define ___C(a, b) { b } +static const struct { + const char *name; +} cxl_command_names[] = { CXL_CMDS }; #undef ___C /** @@ -71,6 +77,7 @@ struct cxl_command_info { #define CXL_MEM_COMMAND_FLAG_NONE 0 #define CXL_MEM_COMMAND_FLAG_KERNEL BIT(0) #define CXL_MEM_COMMAND_FLAG_MUTEX BIT(1) +#define CXL_MEM_COMMAND_FLAG_MASK GENMASK(1, 0) __s32 size_in; __s32 size_out; @@ -112,6 +119,44 @@ struct cxl_mem_query_commands { struct cxl_command_info __user commands[]; /* out: supported commands */ }; +/** + * struct cxl_send_command - Send a command to a memory device. + * @id: The command to send to the memory device. This must be one of the + * commands returned by the query command. + * @flags: Flags for the command (input). + * @rsvd: Must be zero. + * @retval: Return value from the memory device (output). + * @size_in: Size of the payload to provide to the device (input). + * @size_out: Size of the payload received from the device (input/output). This + * field is filled in by userspace to let the driver know how much + * space was allocated for output. It is populated by the driver to + * let userspace know how large the output payload actually was. + * @in_payload: Pointer to memory for payload input (little endian order). + * @out_payload: Pointer to memory for payload output (little endian order). 
+ * + * Mechanism for userspace to send a command to the hardware for processing. The + * driver will do basic validation on the command sizes. In some cases even the + * payload may be introspected. Userspace is required to allocate large + * enough buffers for size_out which can be variable length in certain + * situations. + */ +struct cxl_send_command { + __u32 id; + __u32 flags; + __u32 rsvd; + __u32 retval; + + struct { + __s32 size_in; + __u64 in_payload; + }; + + struct { + __s32 size_out; + __u64 out_payload; + }; +}; + #if defined(__cplusplus) } #endif From patchwork Sat Jan 30 00:24:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188093 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F26CC4332E for ; Sat, 30 Jan 2021 10:21:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 315E264E05 for ; Sat, 30 Jan 2021 10:21:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231857AbhA3KU3 (ORCPT ); Sat, 30 Jan 2021 05:20:29 -0500 Received: from mga01.intel.com ([192.55.52.88]:38338 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231297AbhA3A3X (ORCPT ); Fri, 29 Jan 2021 19:29:23 -0500 IronPort-SDR: Uc83Y3ukXLDuJd/HgkPVd4KDUXQqmrquaiGh2G1w2YuT8xwxrt70Tgw9ddlp+Tbw+ICeTnPbPu hJNg1/6NyfsQ== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350690" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350690" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:49 -0800 IronPort-SDR: m7fLWpnzSaGDDMzkIoKQE1gcfDJtpyuPc8V2ww1bEY8aPomuNC709+efCr5+o4CuEcZOz7049/ ppIAmMFJlyyg== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591682" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:49 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 08/14] taint: add taint for direct hardware access Date: Fri, 29 Jan 2021 16:24:32 -0800 Message-Id: <20210130002438.1872527-9-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org For drivers that moderate access to the underlying hardware it is sometimes desirable to allow userspace to bypass restrictions. 
Once userspace has done this, the driver can no longer guarantee the sanctity of either the OS or the hardware. When in this state, it is helpful for kernel developers to be made aware (via this taint flag) of this fact for subsequent bug reports. Example usage: - Hardware xyzzy accepts 2 commands, waldo and fred. - The xyzzy driver provides an interface for using waldo, but not fred. - quux is convinced they really need the fred command. - xyzzy driver allows quux to frob hardware to initiate fred. - kernel gets tainted. - turns out fred command is borked, and scribbles over memory. - developers laugh while closing quux's subsequent bug report. Signed-off-by: Ben Widawsky --- Documentation/admin-guide/sysctl/kernel.rst | 1 + Documentation/admin-guide/tainted-kernels.rst | 6 +++++- include/linux/kernel.h | 3 ++- kernel/panic.c | 1 + 4 files changed, 9 insertions(+), 2 deletions(-) diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst index 1d56a6b73a4e..3e1eada53504 100644 --- a/Documentation/admin-guide/sysctl/kernel.rst +++ b/Documentation/admin-guide/sysctl/kernel.rst @@ -1352,6 +1352,7 @@ ORed together. The letters are seen in "Tainted" line of Oops reports. 32768 `(K)` kernel has been live patched 65536 `(X)` Auxiliary taint, defined and used by for distros 131072 `(T)` The kernel was built with the struct randomization plugin +262144 `(H)` The kernel has allowed vendor shenanigans ====== ===== ============================================================== See :doc:`/admin-guide/tainted-kernels` for more information. diff --git a/Documentation/admin-guide/tainted-kernels.rst b/Documentation/admin-guide/tainted-kernels.rst index ceeed7b0798d..ee2913316344 100644 --- a/Documentation/admin-guide/tainted-kernels.rst +++ b/Documentation/admin-guide/tainted-kernels.rst @@ -74,7 +74,7 @@ a particular type of taint. It's best to leave that to the aforementioned script, but if you need something quick you can use this shell command to check which bits are set:: - $ for i in $(seq 18); do echo $(($i-1)) $(($(cat /proc/sys/kernel/tainted)>>($i-1)&1));done + $ for i in $(seq 19); do echo $(($i-1)) $(($(cat /proc/sys/kernel/tainted)>>($i-1)&1));done Table for decoding tainted state ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -100,6 +100,7 @@ Bit Log Number Reason that got the kernel tainted 15 _/K 32768 kernel has been live patched 16 _/X 65536 auxiliary taint, defined for and used by distros 17 _/T 131072 kernel was built with the struct randomization plugin + 18 _/H 262144 kernel has allowed vendor shenanigans === === ====== ======================================================== Note: The character ``_`` is representing a blank in this table to make reading @@ -175,3 +176,6 @@ More detailed explanation for tainting produce extremely unusual kernel structure layouts (even performance pathological ones), which is important to know when debugging. Set at build time. + + 18) ``H`` Kernel has allowed direct access to hardware and can no longer make + any guarantees about the stability of the device or driver. 
diff --git a/include/linux/kernel.h b/include/linux/kernel.h index f7902d8c1048..bc95486f817e 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -443,7 +443,8 @@ extern enum system_states { #define TAINT_LIVEPATCH 15 #define TAINT_AUX 16 #define TAINT_RANDSTRUCT 17 -#define TAINT_FLAGS_COUNT 18 +#define TAINT_RAW_PASSTHROUGH 18 +#define TAINT_FLAGS_COUNT 19 #define TAINT_FLAGS_MAX ((1UL << TAINT_FLAGS_COUNT) - 1) struct taint_flag { diff --git a/kernel/panic.c b/kernel/panic.c index 332736a72a58..dff22bd80eaf 100644 --- a/kernel/panic.c +++ b/kernel/panic.c @@ -386,6 +386,7 @@ const struct taint_flag taint_flags[TAINT_FLAGS_COUNT] = { [ TAINT_LIVEPATCH ] = { 'K', ' ', true }, [ TAINT_AUX ] = { 'X', ' ', true }, [ TAINT_RANDSTRUCT ] = { 'T', ' ', true }, + [ TAINT_RAW_PASSTHROUGH ] = { 'H', ' ', true }, }; /** From patchwork Sat Jan 30 00:24:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188091 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24BF6C433E0 for ; Sat, 30 Jan 2021 10:21:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E864564E05 for ; Sat, 30 Jan 2021 10:21:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232118AbhA3KUb (ORCPT ); Sat, 30 Jan 2021 05:20:31 -0500 Received: from mga01.intel.com ([192.55.52.88]:38336 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232655AbhA3A3X (ORCPT ); Fri, 29 Jan 2021 19:29:23 -0500 IronPort-SDR: WSghDGdwn6vVGwaIXLu1yyo4GZsDQtViHKVHxQndxZWatnZOppIPKUAi72q9MYs1JRDz1AYOJo D28XkirwxeHg== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350692" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350692" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:50 -0800 IronPort-SDR: A9HGS754vvABaaZIIWeuOLDVOBjoOAw903l6VTB2PDpiL6D47P9QfC9F/4zd528n38qzlaRcGv rDJQm1dqz96w== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591688" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:49 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 09/14] cxl/mem: Add a "RAW" send command Date: Fri, 29 Jan 2021 16:24:33 -0800 Message-Id: <20210130002438.1872527-10-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: 
<20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org The CXL memory device send interface will have a number of supported commands. The raw command is not such a command. Raw commands allow userspace to send a specified opcode to the underlying hardware and bypass all driver checks on the command. This is useful for a couple of usecases, mainly: 1. Undocumented vendor specific hardware commands 2. Prototyping new hardware commands not yet supported by the driver While this all sounds very powerful it comes with a couple of caveats: 1. Bug reports using raw commands will not get the same level of attention as bug reports using supported commands (via taint). 2. Supported commands will be rejected by the RAW command. With this comes new debugfs knob to allow full access to your toes with your weapon of choice. Signed-off-by: Ben Widawsky --- Documentation/ABI/testing/debugfs-cxl | 10 ++ drivers/cxl/mem.c | 130 ++++++++++++++++++++++++-- include/uapi/linux/cxl_mem.h | 12 ++- 3 files changed, 142 insertions(+), 10 deletions(-) create mode 100644 Documentation/ABI/testing/debugfs-cxl diff --git a/Documentation/ABI/testing/debugfs-cxl b/Documentation/ABI/testing/debugfs-cxl new file mode 100644 index 000000000000..37e89aaac296 --- /dev/null +++ b/Documentation/ABI/testing/debugfs-cxl @@ -0,0 +1,10 @@ +What: /sys/kernel/debug/cxl/mbox/raw_allow_all +Date: January 2021 +KernelVersion: 5.12 +Description: + Permits "RAW" mailbox commands to be passed through to hardware + without driver intervention. Many such commands require + coordination and therefore should only be used for debugging or + testing. + + Valid values are boolean. diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index c646f0a1cf66..2942730dc967 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0-only /* Copyright(c) 2020 Intel Corporation. All rights reserved. */ #include +#include #include #include #include @@ -40,7 +41,14 @@ enum opcode { CXL_MBOX_OP_INVALID = 0x0000, +#define CXL_MBOX_OP_RAW CXL_MBOX_OP_INVALID + CXL_MBOX_OP_ACTIVATE_FW = 0x0202, CXL_MBOX_OP_IDENTIFY = 0x4000, + CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, + CXL_MBOX_OP_SET_LSA = 0x4103, + CXL_MBOX_OP_SET_SHUTDOWN_STATE = 0x4204, + CXL_MBOX_OP_SCAN_MEDIA = 0x4304, + CXL_MBOX_OP_GET_SCAN_MEDIA = 0x4305, CXL_MBOX_OP_MAX = 0x10000 }; @@ -90,6 +98,8 @@ struct cxl_memdev { static int cxl_mem_major; static DEFINE_IDA(cxl_memdev_ida); +static struct dentry *cxl_debugfs; +static bool raw_allow_all; /** * struct cxl_mem_command - Driver representation of a memory device command @@ -139,6 +149,47 @@ struct cxl_mem_command { static struct cxl_mem_command mem_commands[] = { CXL_CMD(INVALID, KERNEL, 0, 0, HIDDEN), CXL_CMD(IDENTIFY, NONE, 0, 0x43, MANDATORY), + CXL_CMD(RAW, NONE, ~0, ~0, MANDATORY), +}; + +/* + * Commands that RAW doesn't permit. The rationale for each: + * + * CXL_MBOX_OP_ACTIVATE_FW: Firmware activation requires adjustment / + * coordination of transaction timeout values at the root bridge level. + * + * CXL_MBOX_OP_SET_PARTITION_INFO: The device memory map may change live + * and needs to be coordinated with HDM updates. + * + * CXL_MBOX_OP_SET_LSA: The label storage area may be cached by the + * driver and any writes from userspace invalidates those contents. + * + * CXL_MBOX_OP_SET_SHUTDOWN_STATE: Set shutdown state assumes no writes + * to the device after it is marked clean, userspace can not make that + * assertion. 
+ * + * CXL_MBOX_OP_[GET_]SCAN_MEDIA: The kernel provides a native error list that + * is kept up to date with patrol notifications and error management. + */ +static u16 disabled_raw_commands[] = { + CXL_MBOX_OP_ACTIVATE_FW, + CXL_MBOX_OP_SET_PARTITION_INFO, + CXL_MBOX_OP_SET_LSA, + CXL_MBOX_OP_SET_SHUTDOWN_STATE, + CXL_MBOX_OP_SCAN_MEDIA, + CXL_MBOX_OP_GET_SCAN_MEDIA, +}; + +/* + * Command sets that RAW doesn't permit. All opcodes in this set are + * disabled because they pass plain text security payloads over the + * user/kernel boundary. This functionality is intended to be wrapped + * behind the keys ABI which allows for encrypted payloads in the UAPI + */ +static u8 security_command_sets[] = { + 0x44, /* Sanitize */ + 0x45, /* Persistent Memory Data-at-rest Security */ + 0x46, /* Security Passthrough */ }; #define cxl_for_each_cmd(cmd) \ @@ -180,22 +231,30 @@ static int cxl_mem_wait_for_doorbell(struct cxl_mem *cxlm) return 0; } +static bool is_security_command(u16 opcode) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(security_command_sets); i++) + if (security_command_sets[i] == (opcode >> 8)) + return true; + return false; +} + static void cxl_mem_mbox_timeout(struct cxl_mem *cxlm, struct mbox_cmd *mbox_cmd) { - dev_warn(&cxlm->pdev->dev, "Mailbox command timed out\n"); - dev_info(&cxlm->pdev->dev, - "\topcode: 0x%04x\n" - "\tpayload size: %zub\n", - mbox_cmd->opcode, mbox_cmd->size_in); + struct device *dev = &cxlm->pdev->dev; + + dev_dbg(dev, "Mailbox command (opcode: %#x size: %zub) timed out\n", + mbox_cmd->opcode, mbox_cmd->size_in); - if (IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) { + if (!is_security_command(mbox_cmd->opcode) || + IS_ENABLED(CONFIG_CXL_MEM_INSECURE_DEBUG)) { print_hex_dump_debug("Payload ", DUMP_PREFIX_OFFSET, 16, 1, mbox_cmd->payload_in, mbox_cmd->size_in, true); } - - /* Here's a good place to figure out if a device reset is needed */ } /** @@ -458,6 +517,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd, struct cxl_send_command __user *u) { struct cxl_mem *cxlm = cxlmd->cxlm; + struct device *dev = &cxlmd->dev; struct mbox_cmd mbox_cmd = { .opcode = cmd->opcode, .payload_in = NULL, /* Populated with copy_from_user() */ @@ -495,13 +555,17 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd, if (rc) goto out; - dev_dbg(&cxlmd->dev, + dev_dbg(dev, "Submitting %s command for user\n" "\topcode: %x\n" "\tsize: %ub\n", cxl_command_names[cmd->info.id].name, mbox_cmd.opcode, cmd->info.size_in); + WARN_TAINT_ONCE(cmd->info.id == CXL_MEM_COMMAND_ID_RAW, + TAINT_RAW_PASSTHROUGH, "%s %s: raw command path used\n", + dev_driver_string(dev), dev_name(dev)); + rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd); cxl_mem_mbox_put(cxlm); if (rc) @@ -532,6 +596,23 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd, return rc; } +static bool cxl_mem_raw_command_allowed(u16 opcode) +{ + int i; + + if (raw_allow_all) + return true; + + if (is_security_command(opcode)) + return false; + + for (i = 0; i < ARRAY_SIZE(disabled_raw_commands); i++) + if (disabled_raw_commands[i] == opcode) + return false; + + return true; +} + /** * cxl_validate_cmd_from_user() - Check fields for CXL_MEM_SEND_COMMAND. * @cxlm: &struct cxl_mem device whose mailbox will be used. @@ -568,6 +649,30 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm, if (send_cmd->size_in > cxlm->mbox.payload_size) return -EINVAL; + /* Checks are bypassed for raw commands but along comes the taint! 
*/ + if (send_cmd->id == CXL_MEM_COMMAND_ID_RAW) { + const struct cxl_mem_command temp = { + .info = { + .id = CXL_MEM_COMMAND_ID_RAW, + .flags = CXL_MEM_COMMAND_FLAG_NONE, + .size_in = send_cmd->size_in, + .size_out = send_cmd->size_out, + }, + .flags = 0, + .opcode = send_cmd->raw.opcode + }; + + if (send_cmd->raw.rsvd) + return -EINVAL; + + if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode)) + return -EPERM; + + memcpy(out_cmd, &temp, sizeof(temp)); + + return 0; + } + if (send_cmd->flags & ~CXL_MEM_COMMAND_FLAG_MASK) return -EINVAL; @@ -1200,6 +1305,7 @@ static __init int cxl_mem_init(void) { int rc; dev_t devt; + struct dentry *mbox_debugfs; rc = alloc_chrdev_region(&devt, 0, CXL_MEM_MAX_DEVS, "cxl"); if (rc) @@ -1214,11 +1320,17 @@ static __init int cxl_mem_init(void) return rc; } + cxl_debugfs = debugfs_create_dir("cxl", NULL); + mbox_debugfs = debugfs_create_dir("mbox", cxl_debugfs); + debugfs_create_bool("raw_allow_all", 0600, mbox_debugfs, + &raw_allow_all); + return 0; } static __exit void cxl_mem_exit(void) { + debugfs_remove_recursive(cxl_debugfs); pci_unregister_driver(&cxl_mem_driver); unregister_chrdev_region(MKDEV(cxl_mem_major, 0), CXL_MEM_MAX_DEVS); } diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 9d865794a420..25bfcb071c1f 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -33,6 +33,7 @@ extern "C" { #define CXL_CMDS \ ___C(INVALID, "Invalid Command"), \ ___C(IDENTIFY, "Identify Command"), \ + ___C(RAW, "Raw device command"), \ ___C(MAX, "Last command") #define ___C(a, b) CXL_MEM_COMMAND_ID_##a @@ -124,6 +125,9 @@ struct cxl_mem_query_commands { * @id: The command to send to the memory device. This must be one of the * commands returned by the query command. * @flags: Flags for the command (input). + * @raw: Special fields for raw commands + * @raw.opcode: Opcode passed to hardware when using the RAW command. + * @raw.rsvd: Must be zero. * @rsvd: Must be zero. * @retval: Return value from the memory device (output). * @size_in: Size of the payload to provide to the device (input). 
@@ -143,7 +147,13 @@ struct cxl_mem_query_commands { struct cxl_send_command { __u32 id; __u32 flags; - __u32 rsvd; + union { + struct { + __u16 opcode; + __u16 rsvd; + } raw; + __u32 rsvd; + }; __u32 retval; struct { From patchwork Sat Jan 30 00:24:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188089 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 41FB6C433E9 for ; Sat, 30 Jan 2021 10:21:26 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EAB3464E05 for ; Sat, 30 Jan 2021 10:21:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232190AbhA3KUd (ORCPT ); Sat, 30 Jan 2021 05:20:33 -0500 Received: from mga01.intel.com ([192.55.52.88]:38334 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232706AbhA3A3X (ORCPT ); Fri, 29 Jan 2021 19:29:23 -0500 IronPort-SDR: xWMoN6Al8GMFv0RRXrPfcHT213hCA4kPbertb8kkNAkEqvmScOae/FKeUZpXbVUXDKhtKDKKXx 7on8vleMhobw== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350693" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350693" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:50 -0800 IronPort-SDR: PTVsJjzGJIhPmNWHZWpgsjtpCH+DjMG8Prae3+QWsFBb/v0MOBpY55rwhf13oNO3rkWts8okzc gLbIWiaS3/Qg== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591692" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:50 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 10/14] cxl/mem: Create concept of enabled commands Date: Fri, 29 Jan 2021 16:24:34 -0800 Message-Id: <20210130002438.1872527-11-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org CXL devices must implement the Device Command Interface (described in 8.2.9 of the CXL 2.0 spec). While the driver already maintains a list of commands it supports, there is still a need to be able to distinguish between commands that the driver knows about from commands that may not be supported by the hardware. No such commands currently are defined in the driver. The implementation leaves the statically defined table of commands and supplements it with a bitmap to determine commands that are enabled. 
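Condensed to a sketch (names are illustrative; the real definitions are in the hunks below), the split between the shared command table and the per-device enable state looks like this:

#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/types.h>

#define SKETCH_MAX_COMMANDS 32	/* arbitrary cap, mirrors CXL_MAX_COMMANDS */

/* Immutable, driver-global: fixed command id -> descriptor mapping. */
struct sketch_command {
	u16 opcode;
	u32 flags;
};
static const struct sketch_command sketch_commands[SKETCH_MAX_COMMANDS];

/* Mutable, per-device: which of those ids this device may actually use. */
struct sketch_dev {
	DECLARE_BITMAP(enabled_cmds, SKETCH_MAX_COMMANDS);
};

/* Probe time: record a command the hardware reports as supported. */
static void sketch_enable_cmd(struct sketch_dev *d, unsigned int id)
{
	set_bit(id, d->enabled_cmds);
}

/* Ioctl time: a command the driver knows about may still be disabled. */
static int sketch_check_cmd(struct sketch_dev *d, unsigned int id)
{
	return test_bit(id, d->enabled_cmds) ? 0 : -ENOTTY;
}

The bitmap is the only mutable, per-device piece; the table itself never changes.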
There are multiple approaches that can be taken, but this is nice for a few reasons. Here are some of the other solutions: Create a per instance table with only the supported commands. 1. Having a fixed command id -> command mapping is much easier to manage for development and debugging. 2. Dealing with dynamic memory allocation for the table adds unnecessary complexity. 3. Most tables for device types are likely to be quite similar. 4. Makes it difficult to implement helper macros like cxl_for_each_cmd() If the per instance table did preserve ids, #1 above can be addressed. However, as "enable" is currently the only mutable state for the commands, it would yield a lot of overhead for not much gain. Additionally, the other issues remain. If "enable" remains the only mutable state, I believe this to be the best solution. Once the number of mutable elements in a command grows, it probably makes sense to move to per device instance state with a fixed command ID mapping. Signed-off-by: Ben Widawsky --- drivers/cxl/cxl.h | 4 ++++ drivers/cxl/mem.c | 40 +++++++++++++++++++++++++++++++++++++++- 2 files changed, 43 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index b042eee7ee25..2d2f25065b81 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -17,6 +17,9 @@ #define CXL_GET_FIELD(word, field) FIELD_GET(field##_MASK, word) +/* XXX: Arbitrary max */ +#define CXL_MAX_COMMANDS 32 + /* Device Capabilities (CXL 2.0 - 8.2.8.1) */ #define CXLDEV_CAP_ARRAY_OFFSET 0x0 #define CXLDEV_CAP_ARRAY_CAP_ID 0 @@ -83,6 +86,7 @@ struct cxl_mem { } ram; char firmware_version[0x10]; + DECLARE_BITMAP(enabled_cmds, CXL_MAX_COMMANDS); /* Cap 0001h - CXL_CAP_CAP_ID_DEVICE_STATUS */ struct { diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 2942730dc967..d01c6ee32a6b 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -111,6 +111,8 @@ static bool raw_allow_all; * would typically be used for deprecated commands. * * %CXL_CMD_FLAG_MANDATORY: Hardware must support this command. This flag is * only used internally by the driver for sanity checking. + * * %CXL_CMD_INTERNAL_FLAG_PSEUDO: This is a pseudo command which doesn't have + * a direct mapping to hardware. They are implicitly always enabled. * * The cxl_mem_command is the driver's internal representation of commands that * are supported by the driver. Some of these commands may not be supported by @@ -126,6 +128,7 @@ struct cxl_mem_command { #define CXL_CMD_INTERNAL_FLAG_NONE 0 #define CXL_CMD_INTERNAL_FLAG_HIDDEN BIT(0) #define CXL_CMD_INTERNAL_FLAG_MANDATORY BIT(1) +#define CXL_CMD_INTERNAL_FLAG_PSEUDO BIT(2) }; #define CXL_CMD(_id, _flags, sin, sout, f) \ @@ -149,7 +152,7 @@ struct cxl_mem_command { static struct cxl_mem_command mem_commands[] = { CXL_CMD(INVALID, KERNEL, 0, 0, HIDDEN), CXL_CMD(IDENTIFY, NONE, 0, 0x43, MANDATORY), - CXL_CMD(RAW, NONE, ~0, ~0, MANDATORY), + CXL_CMD(RAW, NONE, ~0, ~0, PSEUDO), }; /* @@ -683,6 +686,10 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm, c = &mem_commands[send_cmd->id]; info = &c->info; + /* Check that the command is enabled for hardware */ + if (!test_bit(info->id, cxlm->enabled_cmds)) + return -ENOTTY; + if (info->flags & CXL_MEM_COMMAND_FLAG_KERNEL) return -EPERM; @@ -1161,6 +1168,33 @@ static int cxl_mem_add_memdev(struct cxl_mem *cxlm) return rc; } +/** + * cxl_mem_enumerate_cmds() - Enumerate commands for a device. + * @cxlm: The device. + * + * Returns 0 if enumerate completed successfully. + * + * CXL devices have optional support for certain commands. 
This function will + * determine the set of supported commands for the hardware and update the + * enabled_cmds bitmap in the @cxlm. + */ +static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm) +{ + struct cxl_mem_command *c; + + BUILD_BUG_ON(ARRAY_SIZE(mem_commands) >= CXL_MAX_COMMANDS); + + /* All commands are considered enabled for now (except INVALID). */ + cxl_for_each_cmd(c) { + if (c->flags & CXL_CMD_INTERNAL_FLAG_HIDDEN) + continue; + + set_bit(c->info.id, cxlm->enabled_cmds); + } + + return 0; +} + /** * cxl_mem_identify() - Send the IDENTIFY command to the device. * @cxlm: The device to identify. @@ -1280,6 +1314,10 @@ static int cxl_mem_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + rc = cxl_mem_enumerate_cmds(cxlm); + if (rc) + return rc; + rc = cxl_mem_identify(cxlm); if (rc) return rc; From patchwork Sat Jan 30 00:24:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188083 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A104EC433DB for ; Sat, 30 Jan 2021 10:19:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5071F64E04 for ; Sat, 30 Jan 2021 10:19:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232499AbhA3Aej (ORCPT ); Fri, 29 Jan 2021 19:34:39 -0500 Received: from mga01.intel.com ([192.55.52.88]:38336 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232628AbhA3AcG (ORCPT ); Fri, 29 Jan 2021 19:32:06 -0500 IronPort-SDR: Ll3UuEyt8zbW6M1SybSlByKww2ZYOEhkqeKZwsatjyzmfw/MSxSLso70JEXx2SCFPi0sFMNWmE tuGDbheF9byQ== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350694" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350694" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:52 -0800 IronPort-SDR: m+agbHkW8fs1dKBpQYdREFMgvu8mFAaICCta1fOva6qKbPPaVFm0kZrmUqu/xkqWVffKml3PeO Q3p52NBxHCig== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591698" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:50 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 11/14] cxl/mem: Use CEL for enabling commands Date: Fri, 29 Jan 2021 16:24:35 -0800 Message-Id: <20210130002438.1872527-12-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: 
<20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org The Command Effects Log (CEL) is specified in the CXL 2.0 specification. The CEL is one of two types of logs, the other being vendor specific. They are distinguished in hardware/spec via UUID. The CEL is immediately useful for 2 things: 1. Determine which optional commands are supported by the CXL device. 2. Enumerate any vendor specific commands The CEL can be used by the driver to determine which commands are available in the hardware (though it isn't, yet). That set of commands might itself be a subset of commands which are available to be used via CXL_MEM_SEND_COMMAND IOCTL. Prior to this, all commands that the driver exposed were explicitly enabled. After this, only those commands that are found in the CEL are enabled. Signed-off-by: Ben Widawsky --- drivers/cxl/mem.c | 186 ++++++++++++++++++++++++++++++++++- include/uapi/linux/cxl_mem.h | 1 + 2 files changed, 182 insertions(+), 5 deletions(-) diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index d01c6ee32a6b..787417c4d5dc 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -43,6 +43,8 @@ enum opcode { CXL_MBOX_OP_INVALID = 0x0000, #define CXL_MBOX_OP_RAW CXL_MBOX_OP_INVALID CXL_MBOX_OP_ACTIVATE_FW = 0x0202, + CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, + CXL_MBOX_OP_GET_LOG = 0x0401, CXL_MBOX_OP_IDENTIFY = 0x4000, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, CXL_MBOX_OP_SET_LSA = 0x4103, @@ -101,6 +103,18 @@ static DEFINE_IDA(cxl_memdev_ida); static struct dentry *cxl_debugfs; static bool raw_allow_all; +enum { + CEL_UUID, + DEBUG_UUID +}; + +static const uuid_t log_uuid[] = { + [CEL_UUID] = UUID_INIT(0xda9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, 0x96, + 0xb1, 0x62, 0x3b, 0x3f, 0x17), + [DEBUG_UUID] = UUID_INIT(0xe1819d9, 0x11a9, 0x400c, 0x81, 0x1f, 0xd6, + 0x07, 0x19, 0x40, 0x3d, 0x86) +}; + /** * struct cxl_mem_command - Driver representation of a memory device command * @info: Command information as it exists for the UAPI @@ -153,6 +167,7 @@ static struct cxl_mem_command mem_commands[] = { CXL_CMD(INVALID, KERNEL, 0, 0, HIDDEN), CXL_CMD(IDENTIFY, NONE, 0, 0x43, MANDATORY), CXL_CMD(RAW, NONE, ~0, ~0, PSEUDO), + CXL_CMD(GET_SUPPORTED_LOGS, NONE, 0, ~0, MANDATORY), }; /* @@ -1168,6 +1183,101 @@ static int cxl_mem_add_memdev(struct cxl_mem *cxlm) return rc; } +struct cxl_mbox_get_supported_logs { + __le16 entries; + u8 rsvd[6]; + struct gsl_entry { + uuid_t uuid; + __le32 size; + } __packed entry[2]; +} __packed; +struct cxl_mbox_get_log { + uuid_t uuid; + __le32 offset; + __le32 length; +} __packed; + +static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out) +{ + u32 remaining = size; + u32 offset = 0; + + while (remaining) { + u32 xfer_size = min_t(u32, remaining, cxlm->mbox.payload_size); + struct mbox_cmd mbox_cmd; + int rc; + struct cxl_mbox_get_log log = { + .uuid = *uuid, + .offset = cpu_to_le32(offset), + .length = cpu_to_le32(xfer_size) + }; + + mbox_cmd = (struct mbox_cmd) { + .opcode = CXL_MBOX_OP_GET_LOG, + .payload_in = &log, + .payload_out = out, + .size_in = sizeof(log), + }; + + rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd); + if (rc) + return rc; + + WARN_ON(mbox_cmd.size_out != xfer_size); + + out += xfer_size; + remaining -= xfer_size; + offset += xfer_size; + } + + return 0; +} + +static void cxl_enable_cmd(struct cxl_mem *cxlm, + const struct cxl_mem_command *cmd) +{ + if (test_and_set_bit(cmd->info.id, cxlm->enabled_cmds)) + dev_warn(&cxlm->pdev->dev, "Command 
enabled twice\n"); + + dev_info(&cxlm->pdev->dev, "%s enabled", + cxl_command_names[cmd->info.id].name); +} + +/** + * cxl_walk_cel() - Walk through the Command Effects Log. + * @cxlm: Device. + * @size: Length of the Command Effects Log. + * @cel: CEL + * + * Iterate over each entry in the CEL and determine if the driver supports the + * command. If so, the command is enabled for the device and can be used later. + */ +static void cxl_walk_cel(struct cxl_mem *cxlm, size_t size, u8 *cel) +{ + struct cel_entry { + __le16 opcode; + __le16 effect; + } *cel_entry; + const int cel_entries = size / sizeof(*cel_entry); + int i; + + cel_entry = (struct cel_entry *)cel; + + for (i = 0; i < cel_entries; i++) { + const struct cel_entry *ce = &cel_entry[i]; + const struct cxl_mem_command *cmd = + cxl_mem_find_command(le16_to_cpu(ce->opcode)); + + if (!cmd) { + dev_dbg(&cxlm->pdev->dev, "Unsupported opcode 0x%04x", + le16_to_cpu(ce->opcode)); + continue; + } + + cxl_enable_cmd(cxlm, cmd); + } +} + /** * cxl_mem_enumerate_cmds() - Enumerate commands for a device. * @cxlm: The device. @@ -1180,19 +1290,85 @@ static int cxl_mem_add_memdev(struct cxl_mem *cxlm) */ static int cxl_mem_enumerate_cmds(struct cxl_mem *cxlm) { - struct cxl_mem_command *c; + struct cxl_mbox_get_supported_logs gsl; + const struct cxl_mem_command *c; + struct mbox_cmd mbox_cmd; + int i, rc; BUILD_BUG_ON(ARRAY_SIZE(mem_commands) >= CXL_MAX_COMMANDS); - /* All commands are considered enabled for now (except INVALID). */ + /* Pseudo commands are always enabled */ cxl_for_each_cmd(c) { - if (c->flags & CXL_CMD_INTERNAL_FLAG_HIDDEN) + if (c->flags & CXL_CMD_INTERNAL_FLAG_PSEUDO) + cxl_enable_cmd(cxlm, c); + } + + mbox_cmd = (struct mbox_cmd){ + .opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS, + .payload_out = &gsl, + .size_in = 0, + }; + + rc = cxl_mem_mbox_get(cxlm); + if (rc) + return rc; + + rc = cxl_mem_mbox_send_cmd(cxlm, &mbox_cmd); + if (rc) + goto out; + + if (mbox_cmd.return_code != CXL_MBOX_SUCCESS) { + rc = -ENXIO; + goto out; + } + + if (mbox_cmd.size_out > sizeof(gsl)) { + dev_warn(&cxlm->pdev->dev, "%zu excess logs\n", + (mbox_cmd.size_out - sizeof(gsl)) / + sizeof(struct gsl_entry)); + } + + for (i = 0; i < le16_to_cpu(gsl.entries); i++) { + u32 size = le32_to_cpu(gsl.entry[i].size); + uuid_t uuid = gsl.entry[i].uuid; + u8 *log; + + dev_dbg(&cxlm->pdev->dev, "Found LOG type %pU of size %d", + &uuid, size); + + if (!uuid_equal(&uuid, &log_uuid[CEL_UUID])) continue; - set_bit(c->info.id, cxlm->enabled_cmds); + /* + * It's a hardware bug if the log size is less than the input + * payload size because there are many mandatory commands. 
+ */ + if (sizeof(struct cxl_mbox_get_log) > size) { + dev_err(&cxlm->pdev->dev, + "CEL log size reported was too small (%d)", + size); + rc = -ENOMEM; + goto out; + } + + log = kvmalloc(size, GFP_KERNEL); + if (!log) { + rc = -ENOMEM; + goto out; + } + + rc = cxl_xfer_log(cxlm, &uuid, size, log); + if (rc) + goto out; + + cxl_walk_cel(cxlm, size, log); + + kvfree(log); } - return 0; +out: + cxl_mem_mbox_put(cxlm); + return rc; } /** diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 25bfcb071c1f..64cb9753a077 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -34,6 +34,7 @@ extern "C" { ___C(INVALID, "Invalid Command"), \ ___C(IDENTIFY, "Identify Command"), \ ___C(RAW, "Raw device command"), \ + ___C(GET_SUPPORTED_LOGS, "Get Supported Logs"), \ ___C(MAX, "Last command") #define ___C(a, b) CXL_MEM_COMMAND_ID_##a From patchwork Sat Jan 30 00:24:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188081 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 754BDC433E0 for ; Sat, 30 Jan 2021 10:19:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 269AC64DE7 for ; Sat, 30 Jan 2021 10:19:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232747AbhA3Aey (ORCPT ); Fri, 29 Jan 2021 19:34:54 -0500 Received: from mga01.intel.com ([192.55.52.88]:38334 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232828AbhA3AcG (ORCPT ); Fri, 29 Jan 2021 19:32:06 -0500 IronPort-SDR: iDJzDplbIxyt5b0f46bhkvLbJCNreQWnBqiG0oDQmERSmdZFuGm2IalQ17HRBLOD+u9ToTHXgk 3inoVWw/5o3Q== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350695" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350695" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:52 -0800 IronPort-SDR: FS8opO0zcYO9Li6hvufbXR/rYCbqfPnhC03Y86RtN6BSDhlOR3JghOAbnPFgu0TRUGplPFSaqM /1xo1PFDYTYA== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591703" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:52 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 12/14] cxl/mem: Add set of informational commands Date: Fri, 29 Jan 2021 16:24:36 -0800 Message-Id: <20210130002438.1872527-13-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> 
References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org In order to solidify support for a reasonable set of commands a set of relatively safe commands are added and thus nullifying the need to use raw operations to access them. Signed-off-by: Ben Widawsky --- drivers/cxl/mem.c | 8 ++++++++ include/uapi/linux/cxl_mem.h | 4 ++++ 2 files changed, 12 insertions(+) diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index 787417c4d5dc..b8ca6dff37b5 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -42,12 +42,16 @@ enum opcode { CXL_MBOX_OP_INVALID = 0x0000, #define CXL_MBOX_OP_RAW CXL_MBOX_OP_INVALID + CXL_MBOX_OP_GET_FW_INFO = 0x0200, CXL_MBOX_OP_ACTIVATE_FW = 0x0202, CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, CXL_MBOX_OP_GET_LOG = 0x0401, CXL_MBOX_OP_IDENTIFY = 0x4000, + CXL_MBOX_OP_GET_PARTITION_INFO = 0x4100, CXL_MBOX_OP_SET_PARTITION_INFO = 0x4101, + CXL_MBOX_OP_GET_LSA = 0x4102, CXL_MBOX_OP_SET_LSA = 0x4103, + CXL_MBOX_OP_GET_HEALTH_INFO = 0x4200, CXL_MBOX_OP_SET_SHUTDOWN_STATE = 0x4204, CXL_MBOX_OP_SCAN_MEDIA = 0x4304, CXL_MBOX_OP_GET_SCAN_MEDIA = 0x4305, @@ -168,6 +172,10 @@ static struct cxl_mem_command mem_commands[] = { CXL_CMD(IDENTIFY, NONE, 0, 0x43, MANDATORY), CXL_CMD(RAW, NONE, ~0, ~0, PSEUDO), CXL_CMD(GET_SUPPORTED_LOGS, NONE, 0, ~0, MANDATORY), + CXL_CMD(GET_FW_INFO, NONE, 0, 0x50, NONE), + CXL_CMD(GET_PARTITION_INFO, NONE, 0, 0x20, NONE), + CXL_CMD(GET_LSA, NONE, 0x8, ~0, MANDATORY), + CXL_CMD(GET_HEALTH_INFO, NONE, 0, 0x12, MANDATORY), }; /* diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 64cb9753a077..766c231d6150 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -35,6 +35,10 @@ extern "C" { ___C(IDENTIFY, "Identify Command"), \ ___C(RAW, "Raw device command"), \ ___C(GET_SUPPORTED_LOGS, "Get Supported Logs"), \ + ___C(GET_FW_INFO, "Get FW Info"), \ + ___C(GET_PARTITION_INFO, "Get Partition Information"), \ + ___C(GET_LSA, "Get Label Storage Area"), \ + ___C(GET_HEALTH_INFO, "Get Health Info"), \ ___C(MAX, "Last command") #define ___C(a, b) CXL_MEM_COMMAND_ID_##a From patchwork Sat Jan 30 00:24:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188087 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 659ADC433DB for ; Sat, 30 Jan 2021 10:19:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2A8D564E04 for ; Sat, 30 Jan 2021 10:19:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231705AbhA3AeR (ORCPT ); Fri, 29 Jan 2021 19:34:17 -0500 Received: from mga01.intel.com ([192.55.52.88]:38338 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232648AbhA3AcG (ORCPT ); Fri, 29 Jan 2021 19:32:06 -0500 IronPort-SDR: SR5doI+Ls5zH4HxZr4iCX1ucFGIVjnS106zN2RDbEOQ5Si48vSxFEpEt57Gl1hFY7jM5ww6BqZ 6DO3yZY+wPPg== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; 
a="199350697" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350697" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:53 -0800 IronPort-SDR: 6xlzRUjn7lUxS+wZo8rZMUlw9vLMCOjoZl0Qgd+c58UVWhMpK7qZKfh4IK1VeJPmgaKS6Ny4DC NbF13wTlEOwg== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591712" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:52 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Dan Williams , Ira Weiny , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , Vishal Verma , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 13/14] cxl/mem: Add limited Get Log command (0401h) Date: Fri, 29 Jan 2021 16:24:37 -0800 Message-Id: <20210130002438.1872527-14-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org The Get Log command returns the actual log entries that are advertised via the Get Supported Logs command (0400h). CXL device logs are selected by UUID which is part of the CXL spec. Because the driver tries to sanitize what is sent to hardware, there becomes a need to restrict the types of logs which can be accessed by userspace. For example, the vendor specific log might only be consumable by proprietary, or offline applications, and therefore a good candidate for userspace. The current driver infrastructure does allow basic validation for all commands, but doesn't inspect any of the payload data. Along with Get Log support comes new infrastructure to add a hook for payload validation. This infrastructure is used to filter out the CEL UUID, which the userspace driver doesn't have business knowing, and taints on invalid UUIDs being sent to hardware. Signed-off-by: Ben Widawsky --- drivers/cxl/mem.c | 42 +++++++++++++++++++++++++++++++++++- include/uapi/linux/cxl_mem.h | 1 + 2 files changed, 42 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c index b8ca6dff37b5..086268f1dd6c 100644 --- a/drivers/cxl/mem.c +++ b/drivers/cxl/mem.c @@ -119,6 +119,8 @@ static const uuid_t log_uuid[] = { 0x07, 0x19, 0x40, 0x3d, 0x86) }; +static int validate_log_uuid(void __user *payload, size_t size); + /** * struct cxl_mem_command - Driver representation of a memory device command * @info: Command information as it exists for the UAPI @@ -132,6 +134,10 @@ static const uuid_t log_uuid[] = { * * %CXL_CMD_INTERNAL_FLAG_PSEUDO: This is a pseudo command which doesn't have * a direct mapping to hardware. They are implicitly always enabled. * + * @validate_payload: A function called after the command is validated but + * before it's sent to the hardware. The primary purpose is to validate, or + * fixup the actual payload. + * * The cxl_mem_command is the driver's internal representation of commands that * are supported by the driver. Some of these commands may not be supported by * the hardware. 
The driver will use @info to validate the fields passed in by @@ -147,9 +153,11 @@ struct cxl_mem_command { #define CXL_CMD_INTERNAL_FLAG_HIDDEN BIT(0) #define CXL_CMD_INTERNAL_FLAG_MANDATORY BIT(1) #define CXL_CMD_INTERNAL_FLAG_PSEUDO BIT(2) + + int (*validate_payload)(void __user *payload, size_t size); }; -#define CXL_CMD(_id, _flags, sin, sout, f) \ +#define CXL_CMD_VALIDATE(_id, _flags, sin, sout, f, v) \ [CXL_MEM_COMMAND_ID_##_id] = { \ .info = { \ .id = CXL_MEM_COMMAND_ID_##_id, \ @@ -159,8 +167,12 @@ struct cxl_mem_command { }, \ .flags = CXL_CMD_INTERNAL_FLAG_##f, \ .opcode = CXL_MBOX_OP_##_id, \ + .validate_payload = v, \ } +#define CXL_CMD(_id, _flags, sin, sout, f) \ + CXL_CMD_VALIDATE(_id, _flags, sin, sout, f, NULL) + /* * This table defines the supported mailbox commands for the driver. This table * is made up of a UAPI structure. Non-negative values as parameters in the @@ -176,6 +188,8 @@ static struct cxl_mem_command mem_commands[] = { CXL_CMD(GET_PARTITION_INFO, NONE, 0, 0x20, NONE), CXL_CMD(GET_LSA, NONE, 0x8, ~0, MANDATORY), CXL_CMD(GET_HEALTH_INFO, NONE, 0, 0x12, MANDATORY), + CXL_CMD_VALIDATE(GET_LOG, MUTEX, 0x18, ~0, MANDATORY, + validate_log_uuid), }; /* @@ -563,6 +577,13 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev *cxlmd, kvzalloc(cxlm->mbox.payload_size, GFP_KERNEL); if (cmd->info.size_in) { + if (cmd->validate_payload) { + rc = cmd->validate_payload(u64_to_user_ptr(in_payload), + cmd->info.size_in); + if (rc) + goto out; + } + mbox_cmd.payload_in = kvzalloc(cmd->info.size_in, GFP_KERNEL); if (!mbox_cmd.payload_in) { rc = -ENOMEM; @@ -1205,6 +1226,25 @@ struct cxl_mbox_get_log { __le32 length; } __packed; +static int validate_log_uuid(void __user *input, size_t size) +{ + struct cxl_mbox_get_log __user *get_log = input; + uuid_t payload_uuid; + + if (copy_from_user(&payload_uuid, &get_log->uuid, sizeof(uuid_t))) + return -EFAULT; + + /* All unspec'd logs shall taint */ + if (uuid_equal(&payload_uuid, &log_uuid[CEL_UUID])) + return 0; + if (uuid_equal(&payload_uuid, &log_uuid[DEBUG_UUID])) + return 0; + + add_taint(TAINT_RAW_PASSTHROUGH, LOCKDEP_STILL_OK); + + return 0; +} + static int cxl_xfer_log(struct cxl_mem *cxlm, uuid_t *uuid, u32 size, u8 *out) { u32 remaining = size; diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 766c231d6150..7cdc7f7ce7ec 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -39,6 +39,7 @@ extern "C" { ___C(GET_PARTITION_INFO, "Get Partition Information"), \ ___C(GET_LSA, "Get Label Storage Area"), \ ___C(GET_HEALTH_INFO, "Get Health Info"), \ + ___C(GET_LOG, "Get Log"), \ ___C(MAX, "Last command") #define ___C(a, b) CXL_MEM_COMMAND_ID_##a From patchwork Sat Jan 30 00:24:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ben Widawsky X-Patchwork-Id: 12188085 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3490FC43381 for ; Sat, 30 Jan 2021 10:19:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) 
with ESMTP id EDE3264E05 for ; Sat, 30 Jan 2021 10:19:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232194AbhA3KTE (ORCPT ); Sat, 30 Jan 2021 05:19:04 -0500 Received: from mga01.intel.com ([192.55.52.88]:38336 "EHLO mga01.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232332AbhA3Aeh (ORCPT ); Fri, 29 Jan 2021 19:34:37 -0500 IronPort-SDR: zKpMLDk3hh7efYcJtfOD1XP3oAnoUs3gcVMRPQ9hNnsDTjxTRoqW7612lQms2pv1ghacvq4WaG sodkUifdRbrQ== X-IronPort-AV: E=McAfee;i="6000,8403,9879"; a="199350700" X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="199350700" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:53 -0800 IronPort-SDR: gr8fqKu2GofH9PjzJ5GvxttPPioPE7BRNJYapZOe+qDsoCgN4hNJYIR4btV+HbRgySgUpt5NSi P3SRLyjr6MRg== X-IronPort-AV: E=Sophos;i="5.79,387,1602572400"; d="scan'208";a="370591717" Received: from jambrizm-mobl1.amr.corp.intel.com (HELO bwidawsk-mobl5.local) ([10.252.133.15]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jan 2021 16:24:53 -0800 From: Ben Widawsky To: linux-cxl@vger.kernel.org Cc: Ben Widawsky , Dan Williams , Vishal Verma , Ira Weiny , Alison Schofield , linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, Bjorn Helgaas , Chris Browy , Christoph Hellwig , Jon Masters , Jonathan Cameron , Rafael Wysocki , Randy Dunlap , daniel.lll@alibaba-inc.com, "John Groves (jgroves)" , "Kelley, Sean V" Subject: [PATCH 14/14] MAINTAINERS: Add maintainers of the CXL driver Date: Fri, 29 Jan 2021 16:24:38 -0800 Message-Id: <20210130002438.1872527-15-ben.widawsky@intel.com> X-Mailer: git-send-email 2.30.0 In-Reply-To: <20210130002438.1872527-1-ben.widawsky@intel.com> References: <20210130002438.1872527-1-ben.widawsky@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Cc: Dan Williams Cc: Vishal Verma Cc: Ira Weiny Cc: Alison Schofield Signed-off-by: Ben Widawsky --- MAINTAINERS | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 6eff4f720c72..93c8694a8f04 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4444,6 +4444,17 @@ M: Miguel Ojeda S: Maintained F: include/linux/compiler_attributes.h +COMPUTE EXPRESS LINK (CXL) +M: Alison Schofield +M: Vishal Verma +M: Ira Weiny +M: Ben Widawsky +M: Dan Williams +L: linux-cxl@vger.kernel.org +S: Maintained +F: drivers/cxl/ +F: include/uapi/linux/cxl_mem.h + CONEXANT ACCESSRUNNER USB DRIVER L: accessrunner-general@lists.sourceforge.net S: Orphan
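Circling back to the Command Effects Log handling added in patch 11/14 above: the walk reduces to iterating 4-byte {opcode, effect} entries and enabling the opcodes the driver recognizes. Below is a minimal userspace sketch of that loop under simplifying assumptions: a small byte-order helper stands in for le16_to_cpu(), the sample buffer is fabricated, and the "known opcode" list is an illustrative stand-in for the driver's command table.

/*
 * Userspace sketch of a CEL walk. Each entry is a little-endian 16-bit
 * opcode followed by a little-endian 16-bit effects field. Sample data
 * and the known-opcode list are made up for illustration.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct cel_entry {
	uint16_t opcode;	/* __le16 on the wire */
	uint16_t effect;	/* __le16 on the wire */
};

static uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)p[0] | ((uint16_t)p[1] << 8);
}

/* Opcodes this sketch "knows about" (a subset of those in the patches). */
static const uint16_t known_opcodes[] = { 0x0400, 0x0401, 0x4000 };

static int opcode_known(uint16_t opcode)
{
	for (size_t i = 0; i < sizeof(known_opcodes) / sizeof(known_opcodes[0]); i++)
		if (known_opcodes[i] == opcode)
			return 1;
	return 0;
}

static void walk_cel(const uint8_t *cel, size_t size)
{
	size_t entries = size / sizeof(struct cel_entry);

	for (size_t i = 0; i < entries; i++) {
		const uint8_t *raw = cel + i * sizeof(struct cel_entry);
		uint16_t opcode = get_le16(raw);
		uint16_t effect = get_le16(raw + 2);

		if (!opcode_known(opcode)) {
			printf("unsupported opcode 0x%04x (skipped)\n", opcode);
			continue;
		}
		printf("enable opcode 0x%04x, effects 0x%04x\n", opcode, effect);
	}
}

int main(void)
{
	/* Fabricated CEL: Get Supported Logs, Get Log, Identify, one unknown. */
	const uint8_t sample[] = {
		0x00, 0x04, 0x00, 0x00,
		0x01, 0x04, 0x00, 0x00,
		0x00, 0x40, 0x00, 0x00,
		0x34, 0x12, 0x00, 0x00,
	};

	walk_cel(sample, sizeof(sample));
	return 0;
}

In the driver itself this walk only runs over the log whose UUID matches the CEL UUID; other advertised log types are skipped.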