From patchwork Thu Nov 10 18:57:48 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13039163
From: ira.weiny@intel.com
To: Dan Williams
Cc: Davidlohr Bueso, Bjorn Helgaas, Jonathan Cameron, Ira Weiny,
    Alison Schofield, Vishal Verma, Ben Widawsky, Steven Rostedt,
    linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org
Subject: [PATCH 01/11] cxl/pci: Add generic MSI-X/MSI irq support
Date: Thu, 10 Nov 2022 10:57:48 -0800
Message-Id: <20221110185758.879472-2-ira.weiny@intel.com>
In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com>
References: <20221110185758.879472-1-ira.weiny@intel.com>

From: Davidlohr Bueso

Currently the only CXL features targeted for irq support require their
message numbers to be within the first 16 entries. The device may,
however, support fewer than 16 entries depending on the support it
provides.

Attempt to allocate these 16 irq vectors. If the device supports fewer,
the PCI infrastructure will allocate that smaller number instead. Store
the number of vectors actually allocated in the device state for later
use by individual functions.

Upon successful allocation, users can plug in their respective ISRs at
any point thereafter, for example when the irq setup is not done in the
PCI driver, as is the case for the CXL-PMU.
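For illustration, here is a minimal sketch (not part of this patch, and assuming the driver's usual pci/interrupt includes) of how a later consumer might hook an ISR once the vectors are allocated. The function names, the handler, and the message number parameter are hypothetical; only cxlds->nr_irq_vecs and the allocation policy come from the patch itself.

	static irqreturn_t cxl_example_isr(int irq, void *data)
	{
		/* acknowledge/handle the device interrupt here */
		return IRQ_HANDLED;
	}

	static int cxl_example_register_irq(struct cxl_dev_state *cxlds, int msgnum)
	{
		struct pci_dev *pdev = to_pci_dev(cxlds->dev);
		int irq;

		/* message number must fall within the vectors actually allocated */
		if (msgnum >= cxlds->nr_irq_vecs)
			return -ENXIO;	/* caller falls back to polling */

		irq = pci_irq_vector(pdev, msgnum);
		if (irq < 0)
			return irq;

		return devm_request_irq(cxlds->dev, irq, cxl_example_isr, 0,
					"cxl-example", cxlds);
	}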
Cc: Bjorn Helgaas Cc: Jonathan Cameron Co-developed-by: Ira Weiny Signed-off-by: Ira Weiny Signed-off-by: Davidlohr Bueso Reviewed-by: Dave Jiang --- Changes from Ira Remove reviews Allocate up to a static 16 vectors. Change cover letter --- drivers/cxl/cxlmem.h | 3 +++ drivers/cxl/cxlpci.h | 6 ++++++ drivers/cxl/pci.c | 32 ++++++++++++++++++++++++++++++++ 3 files changed, 41 insertions(+) diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 88e3a8e54b6a..b7b955ded3ac 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -211,6 +211,7 @@ struct cxl_endpoint_dvsec_info { * @info: Cached DVSEC information about the device. * @serial: PCIe Device Serial Number * @doe_mbs: PCI DOE mailbox array + * @nr_irq_vecs: Number of MSI-X/MSI vectors available * @mbox_send: @dev specific transport for transmitting mailbox commands * * See section 8.2.9.5.2 Capacity Configuration and Label Storage for @@ -247,6 +248,8 @@ struct cxl_dev_state { struct xarray doe_mbs; + int nr_irq_vecs; + int (*mbox_send)(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); }; diff --git a/drivers/cxl/cxlpci.h b/drivers/cxl/cxlpci.h index eec597dbe763..b7f4e2f417d3 100644 --- a/drivers/cxl/cxlpci.h +++ b/drivers/cxl/cxlpci.h @@ -53,6 +53,12 @@ #define CXL_DVSEC_REG_LOCATOR_BLOCK_ID_MASK GENMASK(15, 8) #define CXL_DVSEC_REG_LOCATOR_BLOCK_OFF_LOW_MASK GENMASK(31, 16) +/* + * NOTE: Currently all the functions which are enabled for CXL require their + * vectors to be in the first 16. Use this as the max. + */ +#define CXL_PCI_REQUIRED_VECTORS 16 + /* Register Block Identifier (RBI) */ enum cxl_regloc_type { CXL_REGLOC_RBI_EMPTY = 0, diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index faeb5d9d7a7a..62e560063e50 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -428,6 +428,36 @@ static void devm_cxl_pci_create_doe(struct cxl_dev_state *cxlds) } } +static void cxl_pci_free_irq_vectors(void *data) +{ + pci_free_irq_vectors(data); +} + +static void cxl_pci_alloc_irq_vectors(struct cxl_dev_state *cxlds) +{ + struct device *dev = cxlds->dev; + struct pci_dev *pdev = to_pci_dev(dev); + int nvecs; + int rc; + + nvecs = pci_alloc_irq_vectors(pdev, 1, CXL_PCI_REQUIRED_VECTORS, + PCI_IRQ_MSIX | PCI_IRQ_MSI); + if (nvecs < 0) { + dev_dbg(dev, "Not enough interrupts; use polling instead.\n"); + return; + } + + rc = devm_add_action_or_reset(dev, cxl_pci_free_irq_vectors, pdev); + if (rc) { + dev_dbg(dev, "Device managed call failed; interrupts disabled.\n"); + /* some got allocated, clean them up */ + cxl_pci_free_irq_vectors(pdev); + return; + } + + cxlds->nr_irq_vecs = nvecs; +} + static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct cxl_register_map map; @@ -494,6 +524,8 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (rc) return rc; + cxl_pci_alloc_irq_vectors(cxlds); + cxlmd = devm_cxl_add_memdev(cxlds); if (IS_ERR(cxlmd)) return PTR_ERR(cxlmd); From patchwork Thu Nov 10 18:57:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039164 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E4CDCC43217 for ; Thu, 10 Nov 2022 19:05:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231251AbiKJTFu (ORCPT ); Thu, 10 Nov 2022 
14:05:50 -0500
From: ira.weiny@intel.com
To: Dan Williams
Cc: Ira Weiny, Steven Rostedt, Alison Schofield, Vishal Verma,
    Ben Widawsky, Jonathan Cameron, Davidlohr Bueso,
    linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org
Subject: [PATCH 02/11] cxl/mem: Implement Get Event Records command
Date: Thu, 10 Nov 2022 10:57:49 -0800
Message-Id: <20221110185758.879472-3-ira.weiny@intel.com>
In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com>
References: <20221110185758.879472-1-ira.weiny@intel.com>

From: Ira Weiny

CXL devices have multiple event logs which can be queried for CXL event
records. Devices are required to support the storage of at least one
event record in each event log type.

Devices track event log overflow by incrementing a counter and tracking
the time of the first and last overflow event seen.

Software queries events via the Get Event Record mailbox command; CXL
rev 3.0 section 8.2.9.2.2.

Issue the Get Event Record mailbox command on driver load. Trace each
record found with a generic record trace. Trace any overflow
conditions.

The device can return up to 1MB worth of event records per query. This
presents complications with allocating a huge buffer to potentially
capture all the records. It is not anticipated that these event logs
will be very deep, and reading them does not need to be performant.
Process only 3 records at a time. Three records were chosen because
they fit comfortably on the stack, avoiding dynamic allocation while
still cutting down on extra mailbox messages.

This patch traces a raw event record only and leaves the specific event
record types to subsequent patches. Macros are created for tracing the
common CXL Event header fields.
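For context, rough size arithmetic (derived from the structure layouts this patch adds; a sketch, not spec text) shows why three records stay comfortably on the stack:

	/*
	 * struct cxl_event_record_hdr  = 16 + 1 + 3 + 2 + 2 + 8 + 1 + 15 = 48 bytes
	 * struct cxl_event_record_raw  = 48 header + 0x50 data bytes     = 128 bytes
	 * struct cxl_get_event_payload = 32-byte fixed part + 3 * 128    = 416 bytes
	 *
	 * So the whole Get Event Records output payload used in
	 * cxl_mem_get_records_log() is roughly 416 bytes, small enough for an
	 * on-stack variable while still batching a few records per mailbox
	 * round trip.
	 */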
Cc: Steven Rostedt Signed-off-by: Ira Weiny Reviewed-by: Dave Jiang --- Change from RFC v2: Support reading 3 events at once. Reverse Jonathan's suggestion and check for positive number of records. Because the record count may have been returned as something > 3 based on what the device thinks it can send back even though the core Linux mbox processing truncates the data. Alison and Dave Jiang Change header uuid type to uuid_t for better user space processing Smita Check status reg before reading log. Steven Prefix all trace points with 'cxl_' Use static branch _enabled() calls Jonathan s/CXL_EVENT_TYPE_INFO/0 s/{first,last}/{first,last}_ts Remove Reserved field from header Fix header issue for cxl_event_log_type_str() Change from RFC: Remove redundant error message in get event records loop s/EVENT_RECORD_DATA_LENGTH/CXL_EVENT_RECORD_DATA_LENGTH Use hdr_uuid for the header UUID field Use cxl_event_log_type_str() for the trace events Create macros for the header fields and common entries of each event Add reserved buffer output dump Report error if event query fails Remove unused record_cnt variable Steven - reorder overflow record Remove NOTE about checkpatch Jonathan check for exactly 1 record s/v3.0/rev 3.0 Use 3 byte fields for 24bit fields Add 3.0 Maintenance Operation Class Add Dynamic Capacity log type Fix spelling Dave Jiang/Dan/Alison s/cxl-event/cxl trace/events/cxl-events => trace/events/cxl.h s/cxl_event_overflow/overflow s/cxl_event/generic_event --- MAINTAINERS | 1 + drivers/cxl/core/mbox.c | 70 +++++++++++++++++++ drivers/cxl/cxl.h | 8 +++ drivers/cxl/cxlmem.h | 73 ++++++++++++++++++++ include/trace/events/cxl.h | 127 +++++++++++++++++++++++++++++++++++ include/uapi/linux/cxl_mem.h | 1 + 6 files changed, 280 insertions(+) create mode 100644 include/trace/events/cxl.h diff --git a/MAINTAINERS b/MAINTAINERS index ca063a504026..4b7c6e3055c6 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5223,6 +5223,7 @@ M: Dan Williams L: linux-cxl@vger.kernel.org S: Maintained F: drivers/cxl/ +F: include/trace/events/cxl.h F: include/uapi/linux/cxl_mem.h CONEXANT ACCESSRUNNER USB DRIVER diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 16176b9278b4..a908b95a7de4 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -7,6 +7,9 @@ #include #include +#define CREATE_TRACE_POINTS +#include + #include "core.h" static bool cxl_raw_allow_all; @@ -48,6 +51,7 @@ static struct cxl_mem_command cxl_mem_commands[CXL_MEM_COMMAND_ID_MAX] = { CXL_CMD(RAW, CXL_VARIABLE_PAYLOAD, CXL_VARIABLE_PAYLOAD, 0), #endif CXL_CMD(GET_SUPPORTED_LOGS, 0, CXL_VARIABLE_PAYLOAD, CXL_CMD_FLAG_FORCE_ENABLE), + CXL_CMD(GET_EVENT_RECORD, 1, CXL_VARIABLE_PAYLOAD, 0), CXL_CMD(GET_FW_INFO, 0, 0x50, 0), CXL_CMD(GET_PARTITION_INFO, 0, 0x20, 0), CXL_CMD(GET_LSA, 0x8, CXL_VARIABLE_PAYLOAD, 0), @@ -704,6 +708,72 @@ int cxl_enumerate_cmds(struct cxl_dev_state *cxlds) } EXPORT_SYMBOL_NS_GPL(cxl_enumerate_cmds, CXL); +static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, + enum cxl_event_log_type type) +{ + struct cxl_get_event_payload payload; + u16 pl_nr; + + do { + u8 log_type = type; + int rc; + + rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_EVENT_RECORD, + &log_type, sizeof(log_type), + &payload, sizeof(payload)); + if (rc) { + dev_err(cxlds->dev, "Event log '%s': Failed to query event records : %d", + cxl_event_log_type_str(type), rc); + return; + } + + pl_nr = le16_to_cpu(payload.record_count); + if (trace_cxl_generic_event_enabled()) { + u16 nr_rec = min_t(u16, pl_nr, 
CXL_GET_EVENT_NR_RECORDS); + int i; + + for (i = 0; i < nr_rec; i++) + trace_cxl_generic_event(dev_name(cxlds->dev), + type, + &payload.record[i]); + } + + if (trace_cxl_overflow_enabled() && + (payload.flags & CXL_GET_EVENT_FLAG_OVERFLOW)) + trace_cxl_overflow(dev_name(cxlds->dev), type, &payload); + + } while (pl_nr > CXL_GET_EVENT_NR_RECORDS || + payload.flags & CXL_GET_EVENT_FLAG_MORE_RECORDS); +} + +/** + * cxl_mem_get_event_records - Get Event Records from the device + * @cxlds: The device data for the operation + * + * Retrieve all event records available on the device and report them as trace + * events. + * + * See CXL rev 3.0 @8.2.9.2.2 Get Event Records + */ +void cxl_mem_get_event_records(struct cxl_dev_state *cxlds) +{ + u32 status = readl(cxlds->regs.status + CXLDEV_DEV_EVENT_STATUS_OFFSET); + + dev_dbg(cxlds->dev, "Reading event logs: %x\n", status); + + if (status & CXLDEV_EVENT_STATUS_INFO) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_INFO); + if (status & CXLDEV_EVENT_STATUS_WARN) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_WARN); + if (status & CXLDEV_EVENT_STATUS_FAIL) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FAIL); + if (status & CXLDEV_EVENT_STATUS_FATAL) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FATAL); + if (status & CXLDEV_EVENT_STATUS_DYNAMIC_CAP) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_DYNAMIC_CAP); +} +EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL); + /** * cxl_mem_get_partition_info - Get partition info * @cxlds: The device data for the operation diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index f680450f0b16..492cff1bea6d 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -132,6 +132,14 @@ static inline int ways_to_cxl(unsigned int ways, u8 *iw) #define CXLDEV_CAP_CAP_ID_SECONDARY_MAILBOX 0x3 #define CXLDEV_CAP_CAP_ID_MEMDEV 0x4000 +/* CXL 3.0 8.2.8.3.1 Event Status Register */ +#define CXLDEV_DEV_EVENT_STATUS_OFFSET 0x00 +#define CXLDEV_EVENT_STATUS_INFO BIT(0) +#define CXLDEV_EVENT_STATUS_WARN BIT(1) +#define CXLDEV_EVENT_STATUS_FAIL BIT(2) +#define CXLDEV_EVENT_STATUS_FATAL BIT(3) +#define CXLDEV_EVENT_STATUS_DYNAMIC_CAP BIT(4) + /* CXL 2.0 8.2.8.4 Mailbox Registers */ #define CXLDEV_MBOX_CAPS_OFFSET 0x00 #define CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK GENMASK(4, 0) diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index b7b955ded3ac..da64ba0f156b 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -4,6 +4,7 @@ #define __CXL_MEM_H__ #include #include +#include #include "cxl.h" /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ @@ -256,6 +257,7 @@ struct cxl_dev_state { enum cxl_opcode { CXL_MBOX_OP_INVALID = 0x0000, CXL_MBOX_OP_RAW = CXL_MBOX_OP_INVALID, + CXL_MBOX_OP_GET_EVENT_RECORD = 0x0100, CXL_MBOX_OP_GET_FW_INFO = 0x0200, CXL_MBOX_OP_ACTIVATE_FW = 0x0202, CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, @@ -325,6 +327,76 @@ struct cxl_mbox_identify { u8 qos_telemetry_caps; } __packed; +/* + * Common Event Record Format + * CXL rev 3.0 section 8.2.9.2.1; Table 8-42 + */ +struct cxl_event_record_hdr { + uuid_t id; + u8 length; + u8 flags[3]; + __le16 handle; + __le16 related_handle; + __le64 timestamp; + u8 maint_op_class; + u8 reserved[0xf]; +} __packed; + +#define CXL_EVENT_RECORD_DATA_LENGTH 0x50 +struct cxl_event_record_raw { + struct cxl_event_record_hdr hdr; + u8 data[CXL_EVENT_RECORD_DATA_LENGTH]; +} __packed; + +/* + * Get Event Records output payload + * CXL rev 3.0 section 8.2.9.2.2; Table 8-50 + */ +#define CXL_GET_EVENT_FLAG_OVERFLOW BIT(0) +#define CXL_GET_EVENT_FLAG_MORE_RECORDS 
BIT(1) +#define CXL_GET_EVENT_NR_RECORDS 3 +struct cxl_get_event_payload { + u8 flags; + u8 reserved1; + __le16 overflow_err_count; + __le64 first_overflow_timestamp; + __le64 last_overflow_timestamp; + __le16 record_count; + u8 reserved2[0xa]; + struct cxl_event_record_raw record[CXL_GET_EVENT_NR_RECORDS]; +} __packed; + +/* + * CXL rev 3.0 section 8.2.9.2.2; Table 8-49 + */ +enum cxl_event_log_type { + CXL_EVENT_TYPE_INFO = 0x00, + CXL_EVENT_TYPE_WARN, + CXL_EVENT_TYPE_FAIL, + CXL_EVENT_TYPE_FATAL, + CXL_EVENT_TYPE_DYNAMIC_CAP, + CXL_EVENT_TYPE_MAX +}; + +static inline const char *cxl_event_log_type_str(enum cxl_event_log_type type) +{ + switch (type) { + case CXL_EVENT_TYPE_INFO: + return "Informational"; + case CXL_EVENT_TYPE_WARN: + return "Warning"; + case CXL_EVENT_TYPE_FAIL: + return "Failure"; + case CXL_EVENT_TYPE_FATAL: + return "Fatal"; + case CXL_EVENT_TYPE_DYNAMIC_CAP: + return "Dynamic Capacity"; + default: + break; + } + return ""; +} + struct cxl_mbox_get_partition_info { __le64 active_volatile_cap; __le64 active_persistent_cap; @@ -384,6 +456,7 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds); struct cxl_dev_state *cxl_dev_state_create(struct device *dev); void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); +void cxl_mem_get_event_records(struct cxl_dev_state *cxlds); #ifdef CONFIG_CXL_SUSPEND void cxl_mem_active_inc(void); void cxl_mem_active_dec(void); diff --git a/include/trace/events/cxl.h b/include/trace/events/cxl.h new file mode 100644 index 000000000000..60dec9a84918 --- /dev/null +++ b/include/trace/events/cxl.h @@ -0,0 +1,127 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM cxl + +#if !defined(_CXL_TRACE_EVENTS_H) || defined(TRACE_HEADER_MULTI_READ) +#define _CXL_TRACE_EVENTS_H + +#include +#include +#include + +TRACE_EVENT(cxl_overflow, + + TP_PROTO(const char *dev_name, enum cxl_event_log_type log, + struct cxl_get_event_payload *payload), + + TP_ARGS(dev_name, log, payload), + + TP_STRUCT__entry( + __string(dev_name, dev_name) + __field(int, log) + __field(u64, first_ts) + __field(u64, last_ts) + __field(u16, count) + ), + + TP_fast_assign( + __assign_str(dev_name, dev_name); + __entry->log = log; + __entry->count = le16_to_cpu(payload->overflow_err_count); + __entry->first_ts = le64_to_cpu(payload->first_overflow_timestamp); + __entry->last_ts = le64_to_cpu(payload->last_overflow_timestamp); + ), + + TP_printk("%s: EVENT LOG OVERFLOW log=%s : %u records from %llu to %llu", + __get_str(dev_name), cxl_event_log_type_str(__entry->log), + __entry->count, __entry->first_ts, __entry->last_ts) + +); + +/* + * Common Event Record Format + * CXL 3.0 section 8.2.9.2.1; Table 8-42 + */ +#define CXL_EVENT_RECORD_FLAG_PERMANENT BIT(2) +#define CXL_EVENT_RECORD_FLAG_MAINT_NEEDED BIT(3) +#define CXL_EVENT_RECORD_FLAG_PERF_DEGRADED BIT(4) +#define CXL_EVENT_RECORD_FLAG_HW_REPLACE BIT(5) +#define show_hdr_flags(flags) __print_flags(flags, " | ", \ + { CXL_EVENT_RECORD_FLAG_PERMANENT, "Permanent Condition" }, \ + { CXL_EVENT_RECORD_FLAG_MAINT_NEEDED, "Maintenance Needed" }, \ + { CXL_EVENT_RECORD_FLAG_PERF_DEGRADED, "Performance Degraded" }, \ + { CXL_EVENT_RECORD_FLAG_HW_REPLACE, "Hardware Replacement Needed" } \ +) + +/* + * Define macros for the common header of each CXL event. 
+ * + * Tracepoints using these macros must do 3 things: + * + * 1) Add CXL_EVT_TP_entry to TP_STRUCT__entry + * 2) Use CXL_EVT_TP_fast_assign within TP_fast_assign; + * pass the dev_name, log, and CXL event header + * 3) Use CXL_EVT_TP_printk() instead of TP_printk() + * + * See the generic_event tracepoint as an example. + */ +#define CXL_EVT_TP_entry \ + __string(dev_name, dev_name) \ + __field(int, log) \ + __field_struct(uuid_t, hdr_uuid) \ + __field(u32, hdr_flags) \ + __field(u16, hdr_handle) \ + __field(u16, hdr_related_handle) \ + __field(u64, hdr_timestamp) \ + __field(u8, hdr_length) \ + __field(u8, hdr_maint_op_class) + +#define CXL_EVT_TP_fast_assign(dname, l, hdr) \ + __assign_str(dev_name, (dname)); \ + __entry->log = (l); \ + memcpy(&__entry->hdr_uuid, &(hdr).id, sizeof(uuid_t)); \ + __entry->hdr_length = (hdr).length; \ + __entry->hdr_flags = get_unaligned_le24((hdr).flags); \ + __entry->hdr_handle = le16_to_cpu((hdr).handle); \ + __entry->hdr_related_handle = le16_to_cpu((hdr).related_handle); \ + __entry->hdr_timestamp = le64_to_cpu((hdr).timestamp); \ + __entry->hdr_maint_op_class = (hdr).maint_op_class + + +#define CXL_EVT_TP_printk(fmt, ...) \ + TP_printk("%s log=%s : time=%llu uuid=%pUb len=%d flags='%s' " \ + "handle=%x related_handle=%x maint_op_class=%u" \ + " : " fmt, \ + __get_str(dev_name), cxl_event_log_type_str(__entry->log), \ + __entry->hdr_timestamp, &__entry->hdr_uuid, __entry->hdr_length,\ + show_hdr_flags(__entry->hdr_flags), __entry->hdr_handle, \ + __entry->hdr_related_handle, __entry->hdr_maint_op_class, \ + ##__VA_ARGS__) + +TRACE_EVENT(cxl_generic_event, + + TP_PROTO(const char *dev_name, enum cxl_event_log_type log, + struct cxl_event_record_raw *rec), + + TP_ARGS(dev_name, log, rec), + + TP_STRUCT__entry( + CXL_EVT_TP_entry + __array(u8, data, CXL_EVENT_RECORD_DATA_LENGTH) + ), + + TP_fast_assign( + CXL_EVT_TP_fast_assign(dev_name, log, rec->hdr); + memcpy(__entry->data, &rec->data, CXL_EVENT_RECORD_DATA_LENGTH); + ), + + CXL_EVT_TP_printk("%s", + __print_hex(__entry->data, CXL_EVENT_RECORD_DATA_LENGTH)) +); + +#endif /* _CXL_TRACE_EVENTS_H */ + +/* This part must be outside protection */ +#undef TRACE_INCLUDE_FILE +#define TRACE_INCLUDE_FILE cxl +#include diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index c71021a2a9ed..70459be5bdd4 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -24,6 +24,7 @@ ___C(IDENTIFY, "Identify Command"), \ ___C(RAW, "Raw device command"), \ ___C(GET_SUPPORTED_LOGS, "Get Supported Logs"), \ + ___C(GET_EVENT_RECORD, "Get Event Record"), \ ___C(GET_FW_INFO, "Get FW Info"), \ ___C(GET_PARTITION_INFO, "Get Partition Information"), \ ___C(GET_LSA, "Get Label Storage Area"), \ From patchwork Thu Nov 10 18:57:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039165 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C9812C433FE for ; Thu, 10 Nov 2022 19:05:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231383AbiKJTFv (ORCPT ); Thu, 10 Nov 2022 14:05:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48758 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231382AbiKJTFf (ORCPT ); Thu, 10 Nov 2022 
14:05:35 -0500
From: ira.weiny@intel.com
To: Dan Williams
Cc: Ira Weiny, Jonathan Cameron, Alison Schofield, Vishal Verma,
    Ben Widawsky, Steven Rostedt, Davidlohr Bueso,
    linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org
Subject: [PATCH 03/11] cxl/mem: Implement Clear Event Records command
Date: Thu, 10 Nov 2022 10:57:50 -0800
Message-Id: <20221110185758.879472-4-ira.weiny@intel.com>
In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com>
References: <20221110185758.879472-1-ira.weiny@intel.com>

From: Ira Weiny

CXL rev 3.0 section 8.2.9.2.3 defines the Clear Event Records mailbox
command. After an event record is read, it needs to be cleared from the
event log.

Implement cxl_clear_event_record() and call it for each record
retrieved from the device.

Each record is cleared individually. A "clear all" bit is specified,
but events could arrive between a get and the final clear-all
operation. Therefore each event is cleared specifically.

Reviewed-by: Jonathan Cameron
Signed-off-by: Ira Weiny
Reviewed-by: Dave Jiang

---
Changes from RFC:
	Jonathan
		Clean up init of payload and use return code.
		Also report any error to clear the event.
s/v3.0/rev 3.0 --- drivers/cxl/core/mbox.c | 46 ++++++++++++++++++++++++++++++------ drivers/cxl/cxlmem.h | 15 ++++++++++++ include/uapi/linux/cxl_mem.h | 1 + 3 files changed, 55 insertions(+), 7 deletions(-) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index a908b95a7de4..f46558e09f08 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -52,6 +52,7 @@ static struct cxl_mem_command cxl_mem_commands[CXL_MEM_COMMAND_ID_MAX] = { #endif CXL_CMD(GET_SUPPORTED_LOGS, 0, CXL_VARIABLE_PAYLOAD, CXL_CMD_FLAG_FORCE_ENABLE), CXL_CMD(GET_EVENT_RECORD, 1, CXL_VARIABLE_PAYLOAD, 0), + CXL_CMD(CLEAR_EVENT_RECORD, CXL_VARIABLE_PAYLOAD, 0, 0), CXL_CMD(GET_FW_INFO, 0, 0x50, 0), CXL_CMD(GET_PARTITION_INFO, 0, 0x20, 0), CXL_CMD(GET_LSA, 0x8, CXL_VARIABLE_PAYLOAD, 0), @@ -708,6 +709,27 @@ int cxl_enumerate_cmds(struct cxl_dev_state *cxlds) } EXPORT_SYMBOL_NS_GPL(cxl_enumerate_cmds, CXL); +static int cxl_clear_event_record(struct cxl_dev_state *cxlds, + enum cxl_event_log_type log, + struct cxl_get_event_payload *get_pl, u16 nr) +{ + struct cxl_mbox_clear_event_payload payload = { + .event_log = log, + .nr_recs = nr, + }; + int i; + + for (i = 0; i < nr; i++) { + payload.handle[i] = get_pl->record[i].hdr.handle; + dev_dbg(cxlds->dev, "Event log '%s': Clearning %u\n", + cxl_event_log_type_str(log), + le16_to_cpu(payload.handle[i])); + } + + return cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_CLEAR_EVENT_RECORD, + &payload, sizeof(payload), NULL, 0); +} + static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, enum cxl_event_log_type type) { @@ -728,14 +750,23 @@ static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, } pl_nr = le16_to_cpu(payload.record_count); - if (trace_cxl_generic_event_enabled()) { + if (pl_nr > 0) { u16 nr_rec = min_t(u16, pl_nr, CXL_GET_EVENT_NR_RECORDS); int i; - for (i = 0; i < nr_rec; i++) - trace_cxl_generic_event(dev_name(cxlds->dev), - type, - &payload.record[i]); + if (trace_cxl_generic_event_enabled()) { + for (i = 0; i < nr_rec; i++) + trace_cxl_generic_event(dev_name(cxlds->dev), + type, + &payload.record[i]); + } + + rc = cxl_clear_event_record(cxlds, type, &payload, nr_rec); + if (rc) { + dev_err(cxlds->dev, "Event log '%s': Failed to clear events : %d", + cxl_event_log_type_str(type), rc); + return; + } } if (trace_cxl_overflow_enabled() && @@ -750,10 +781,11 @@ static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, * cxl_mem_get_event_records - Get Event Records from the device * @cxlds: The device data for the operation * - * Retrieve all event records available on the device and report them as trace - * events. + * Retrieve all event records available on the device, report them as trace + * events, and clear them. 
* * See CXL rev 3.0 @8.2.9.2.2 Get Event Records + * See CXL rev 3.0 @8.2.9.2.3 Clear Event Records */ void cxl_mem_get_event_records(struct cxl_dev_state *cxlds) { diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index da64ba0f156b..28a114c7cf69 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -258,6 +258,7 @@ enum cxl_opcode { CXL_MBOX_OP_INVALID = 0x0000, CXL_MBOX_OP_RAW = CXL_MBOX_OP_INVALID, CXL_MBOX_OP_GET_EVENT_RECORD = 0x0100, + CXL_MBOX_OP_CLEAR_EVENT_RECORD = 0x0101, CXL_MBOX_OP_GET_FW_INFO = 0x0200, CXL_MBOX_OP_ACTIVATE_FW = 0x0202, CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, @@ -397,6 +398,20 @@ static inline const char *cxl_event_log_type_str(enum cxl_event_log_type type) return ""; } +/* + * Clear Event Records input payload + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 + * + * Space given for 1 record + */ +struct cxl_mbox_clear_event_payload { + u8 event_log; /* enum cxl_event_log_type */ + u8 clear_flags; + u8 nr_recs; /* 1 for this struct */ + u8 reserved[3]; + __le16 handle[CXL_GET_EVENT_NR_RECORDS]; +}; + struct cxl_mbox_get_partition_info { __le64 active_volatile_cap; __le64 active_persistent_cap; diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 70459be5bdd4..7c1ad8062792 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -25,6 +25,7 @@ ___C(RAW, "Raw device command"), \ ___C(GET_SUPPORTED_LOGS, "Get Supported Logs"), \ ___C(GET_EVENT_RECORD, "Get Event Record"), \ + ___C(CLEAR_EVENT_RECORD, "Clear Event Record"), \ ___C(GET_FW_INFO, "Get FW Info"), \ ___C(GET_PARTITION_INFO, "Get Partition Information"), \ ___C(GET_LSA, "Get Label Storage Area"), \ From patchwork Thu Nov 10 18:57:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039166 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B5C2C433FE for ; Thu, 10 Nov 2022 19:05:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231402AbiKJTFy (ORCPT ); Thu, 10 Nov 2022 14:05:54 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48806 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231405AbiKJTFg (ORCPT ); Thu, 10 Nov 2022 14:05:36 -0500 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5D5B245EF5; Thu, 10 Nov 2022 11:05:16 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668107116; x=1699643116; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=JX6SVwcJaEAfjGLJHiyrrsbi4lffntQ1KMrxCo39Jzw=; b=cbYK3Jag9kjfqOHHscalysx0gmDcxcEL18OZAGrYXc0XpJ5ciCtNVhYz /xzxAFa2tjy9rOMZmi5D3jPAka/OHXkrSgWR3KQyQkgSB5Dh6ZjQ5tM8y 6Lt5EzrhSy6H63qA6M9TfZsLjtJ4cNBEBYTumKj1Kbi3wzM35m+hjHW2S 955bHs2+MPH2kAtIOdY3vximJD+pn/hMGoKhXuRc+EadAT0LZn2al9wq8 55d2Ot98jgWSl2AhpuajuDacCvjZ5SLqN1fexy7+CuQOhddyx+VRJqaNR XBCJje+AMWaQUhVFILgGDpfe3s+TU1vNBGxr3J+i9Gyp6CWkSyrtwbYnS w==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="375662085" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="375662085" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:07 -0800
From: ira.weiny@intel.com
To: Dan Williams
Cc: Ira Weiny, Jonathan Cameron, Alison Schofield, Vishal Verma,
    Ben Widawsky, Steven Rostedt, Davidlohr Bueso,
    linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org
Subject: [PATCH 04/11] cxl/mem: Clear events on driver load
Date: Thu, 10 Nov 2022 10:57:51 -0800
Message-Id: <20221110185758.879472-5-ira.weiny@intel.com>
In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com>
References: <20221110185758.879472-1-ira.weiny@intel.com>

From: Ira Weiny

The information contained in the events prior to the driver loading can
be queried at any time through other mailbox commands. Ensure a clean
slate of events by reading and clearing the events on driver load. The
events are sent to the trace buffer, but it is not anticipated that
anyone will be listening to it at driver load time.

Reviewed-by: Jonathan Cameron
Signed-off-by: Ira Weiny
Reviewed-by: Dave Jiang

---
 drivers/cxl/pci.c            | 2 ++
 tools/testing/cxl/test/mem.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 62e560063e50..e0d511575b45 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -530,6 +530,8 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (IS_ERR(cxlmd))
 		return PTR_ERR(cxlmd);
 
+	cxl_mem_get_event_records(cxlds);
+
 	if (resource_size(&cxlds->pmem_res) && IS_ENABLED(CONFIG_CXL_PMEM))
 		rc = devm_cxl_add_nvdimm(&pdev->dev, cxlmd);
 
diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index aa2df3a15051..e2f5445d24ff 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -285,6 +285,8 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
 	if (IS_ERR(cxlmd))
 		return PTR_ERR(cxlmd);
 
+	cxl_mem_get_event_records(cxlds);
+
 	if (resource_size(&cxlds->pmem_res) && IS_ENABLED(CONFIG_CXL_PMEM))
 		rc = devm_cxl_add_nvdimm(dev, cxlmd);
 

From patchwork Thu Nov 10 18:57:52 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13039167
From: ira.weiny@intel.com
To: Dan Williams
Cc: Ira Weiny, Alison Schofield, Vishal Verma, Ben Widawsky,
    Steven Rostedt, Jonathan Cameron, Davidlohr Bueso,
    linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org
Subject: [PATCH 05/11] cxl/mem: Trace General Media Event Record
Date: Thu, 10 Nov 2022 10:57:52 -0800
Message-Id: <20221110185758.879472-6-ira.weiny@intel.com>
In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com>
References: <20221110185758.879472-1-ira.weiny@intel.com>

From: Ira Weiny

CXL rev 3.0 section 8.2.9.2.1.1 defines the General Media Event Record.

Determine if the event read is a general media record and if so trace
the record as a General Media Event Record.
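One detail worth noting: the General Media Event Record's Physical Address field carries flag bits in its low-order bits, so the tracepoint added below splits it into a DPA and a separate dpa_flags field before printing. A small illustrative helper (the function name is hypothetical; the masks are the ones defined in the trace header further down):

	static inline void example_split_dpa(u64 phys_addr, u64 *dpa, u8 *flags)
	{
		/* bits 5:0 carry the flags (volatile, not repairable, ...) */
		*flags = phys_addr & CXL_DPA_FLAGS_MASK;
		/* the remaining bits are the device physical address */
		*dpa = phys_addr & CXL_DPA_MASK;
	}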
Signed-off-by: Ira Weiny Reviewed-by: Dave Jiang Reviewed-by: Jonathan Cameron --- Changes from RFC v2: Output DPA flags as a single field Ensure names of fields match what TP_print outputs Steven prefix TRACE_EVENT with 'cxl_' Jonathan Remove Reserved field Changes from RFC: Add reserved byte array Use common CXL event header record macros Jonathan Use unaligned_le{24,16} for unaligned fields Don't use the inverse of phy addr mask Dave Jiang s/cxl_gen_media_event/general_media s/cxl_evt_gen_media/cxl_event_gen_media --- drivers/cxl/core/mbox.c | 40 ++++++++++-- drivers/cxl/cxlmem.h | 19 ++++++ include/trace/events/cxl.h | 124 +++++++++++++++++++++++++++++++++++++ 3 files changed, 179 insertions(+), 4 deletions(-) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index f46558e09f08..6d48fdb07700 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -709,6 +709,38 @@ int cxl_enumerate_cmds(struct cxl_dev_state *cxlds) } EXPORT_SYMBOL_NS_GPL(cxl_enumerate_cmds, CXL); +/* + * General Media Event Record + * CXL rev 3.0 Section 8.2.9.2.1.1; Table 8-43 + */ +static const uuid_t gen_media_event_uuid = + UUID_INIT(0xfbcd0a77, 0xc260, 0x417f, + 0x85, 0xa9, 0x08, 0x8b, 0x16, 0x21, 0xeb, 0xa6); + +static bool cxl_event_tracing_enabled(void) +{ + return trace_cxl_generic_event_enabled() || + trace_cxl_general_media_enabled(); +} + +static void cxl_trace_event_record(const char *dev_name, + enum cxl_event_log_type type, + struct cxl_event_record_raw *record) +{ + uuid_t *id = &record->hdr.id; + + if (uuid_equal(id, &gen_media_event_uuid)) { + struct cxl_event_gen_media *rec = + (struct cxl_event_gen_media *)record; + + trace_cxl_general_media(dev_name, type, rec); + return; + } + + /* For unknown record types print just the header */ + trace_cxl_generic_event(dev_name, type, record); +} + static int cxl_clear_event_record(struct cxl_dev_state *cxlds, enum cxl_event_log_type log, struct cxl_get_event_payload *get_pl, u16 nr) @@ -754,11 +786,11 @@ static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, u16 nr_rec = min_t(u16, pl_nr, CXL_GET_EVENT_NR_RECORDS); int i; - if (trace_cxl_generic_event_enabled()) { + if (cxl_event_tracing_enabled()) { for (i = 0; i < nr_rec; i++) - trace_cxl_generic_event(dev_name(cxlds->dev), - type, - &payload.record[i]); + cxl_trace_event_record(dev_name(cxlds->dev), + type, + &payload.record[i]); } rc = cxl_clear_event_record(cxlds, type, &payload, nr_rec); diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 28a114c7cf69..86197f3168c7 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -412,6 +412,25 @@ struct cxl_mbox_clear_event_payload { __le16 handle[CXL_GET_EVENT_NR_RECORDS]; }; +/* + * General Media Event Record + * CXL rev 3.0 Section 8.2.9.2.1.1; Table 8-43 + */ +#define CXL_EVENT_GEN_MED_COMP_ID_SIZE 0x10 +struct cxl_event_gen_media { + struct cxl_event_record_hdr hdr; + __le64 phys_addr; + u8 descriptor; + u8 type; + u8 transaction_type; + u8 validity_flags[2]; + u8 channel; + u8 rank; + u8 device[3]; + u8 component_id[CXL_EVENT_GEN_MED_COMP_ID_SIZE]; + u8 reserved[0x2e]; +} __packed; + struct cxl_mbox_get_partition_info { __le64 active_volatile_cap; __le64 active_persistent_cap; diff --git a/include/trace/events/cxl.h b/include/trace/events/cxl.h index 60dec9a84918..a0c20e110708 100644 --- a/include/trace/events/cxl.h +++ b/include/trace/events/cxl.h @@ -119,6 +119,130 @@ TRACE_EVENT(cxl_generic_event, __print_hex(__entry->data, CXL_EVENT_RECORD_DATA_LENGTH)) ); +/* + * Physical Address field masks + * 
+ * General Media Event Record + * CXL v2.0 Section 8.2.9.1.1.1; Table 154 + * + * DRAM Event Record + * CXL rev 3.0 section 8.2.9.2.1.2; Table 8-44 + */ +#define CXL_DPA_FLAGS_MASK 0x3F +#define CXL_DPA_MASK (~CXL_DPA_FLAGS_MASK) + +#define CXL_DPA_VOLATILE BIT(0) +#define CXL_DPA_NOT_REPAIRABLE BIT(1) +#define show_dpa_flags(flags) __print_flags(flags, "|", \ + { CXL_DPA_VOLATILE, "VOLATILE" }, \ + { CXL_DPA_NOT_REPAIRABLE, "NOT_REPAIRABLE" } \ +) + +/* + * General Media Event Record - GMER + * CXL v2.0 Section 8.2.9.1.1.1; Table 154 + */ +#define CXL_GMER_EVT_DESC_UNCORECTABLE_EVENT BIT(0) +#define CXL_GMER_EVT_DESC_THRESHOLD_EVENT BIT(1) +#define CXL_GMER_EVT_DESC_POISON_LIST_OVERFLOW BIT(2) +#define show_event_desc_flags(flags) __print_flags(flags, "|", \ + { CXL_GMER_EVT_DESC_UNCORECTABLE_EVENT, "Uncorrectable Event" }, \ + { CXL_GMER_EVT_DESC_THRESHOLD_EVENT, "Threshold event" }, \ + { CXL_GMER_EVT_DESC_POISON_LIST_OVERFLOW, "Poison List Overflow" } \ +) + +#define CXL_GMER_MEM_EVT_TYPE_ECC_ERROR 0x00 +#define CXL_GMER_MEM_EVT_TYPE_INV_ADDR 0x01 +#define CXL_GMER_MEM_EVT_TYPE_DATA_PATH_ERROR 0x02 +#define show_mem_event_type(type) __print_symbolic(type, \ + { CXL_GMER_MEM_EVT_TYPE_ECC_ERROR, "ECC Error" }, \ + { CXL_GMER_MEM_EVT_TYPE_INV_ADDR, "Invalid Address" }, \ + { CXL_GMER_MEM_EVT_TYPE_DATA_PATH_ERROR, "Data Path Error" } \ +) + +#define CXL_GMER_TRANS_UNKNOWN 0x00 +#define CXL_GMER_TRANS_HOST_READ 0x01 +#define CXL_GMER_TRANS_HOST_WRITE 0x02 +#define CXL_GMER_TRANS_HOST_SCAN_MEDIA 0x03 +#define CXL_GMER_TRANS_HOST_INJECT_POISON 0x04 +#define CXL_GMER_TRANS_INTERNAL_MEDIA_SCRUB 0x05 +#define CXL_GMER_TRANS_INTERNAL_MEDIA_MANAGEMENT 0x06 +#define show_trans_type(type) __print_symbolic(type, \ + { CXL_GMER_TRANS_UNKNOWN, "Unknown" }, \ + { CXL_GMER_TRANS_HOST_READ, "Host Read" }, \ + { CXL_GMER_TRANS_HOST_WRITE, "Host Write" }, \ + { CXL_GMER_TRANS_HOST_SCAN_MEDIA, "Host Scan Media" }, \ + { CXL_GMER_TRANS_HOST_INJECT_POISON, "Host Inject Poison" }, \ + { CXL_GMER_TRANS_INTERNAL_MEDIA_SCRUB, "Internal Media Scrub" }, \ + { CXL_GMER_TRANS_INTERNAL_MEDIA_MANAGEMENT, "Internal Media Management" } \ +) + +#define CXL_GMER_VALID_CHANNEL BIT(0) +#define CXL_GMER_VALID_RANK BIT(1) +#define CXL_GMER_VALID_DEVICE BIT(2) +#define CXL_GMER_VALID_COMPONENT BIT(3) +#define show_valid_flags(flags) __print_flags(flags, "|", \ + { CXL_GMER_VALID_CHANNEL, "CHANNEL" }, \ + { CXL_GMER_VALID_RANK, "RANK" }, \ + { CXL_GMER_VALID_DEVICE, "DEVICE" }, \ + { CXL_GMER_VALID_COMPONENT, "COMPONENT" } \ +) + +TRACE_EVENT(cxl_general_media, + + TP_PROTO(const char *dev_name, enum cxl_event_log_type log, + struct cxl_event_gen_media *rec), + + TP_ARGS(dev_name, log, rec), + + TP_STRUCT__entry( + CXL_EVT_TP_entry + /* General Media */ + __field(u64, dpa) + __field(u8, descriptor) + __field(u8, type) + __field(u8, transaction_type) + __field(u8, channel) + __field(u32, device) + __array(u8, comp_id, CXL_EVENT_GEN_MED_COMP_ID_SIZE) + __field(u16, validity_flags) + /* Following are out of order to pack trace record */ + __field(u8, rank) + __field(u8, dpa_flags) + ), + + TP_fast_assign( + CXL_EVT_TP_fast_assign(dev_name, log, rec->hdr); + + /* General Media */ + __entry->dpa = le64_to_cpu(rec->phys_addr); + __entry->dpa_flags = __entry->dpa & CXL_DPA_FLAGS_MASK; + /* Mask after flags have been parsed */ + __entry->dpa &= CXL_DPA_MASK; + __entry->descriptor = rec->descriptor; + __entry->type = rec->type; + __entry->transaction_type = rec->transaction_type; + __entry->channel = rec->channel; + __entry->rank = 
rec->rank; + __entry->device = get_unaligned_le24(rec->device); + memcpy(__entry->comp_id, &rec->component_id, + CXL_EVENT_GEN_MED_COMP_ID_SIZE); + __entry->validity_flags = get_unaligned_le16(&rec->validity_flags); + ), + + CXL_EVT_TP_printk("dpa=%llx dpa_flags='%s' " \ + "descriptor='%s' type='%s' transaction_type='%s' channel=%u rank=%u " \ + "device=%x comp_id=%s validity_flags='%s'", + __entry->dpa, show_dpa_flags(__entry->dpa_flags), + show_event_desc_flags(__entry->descriptor), + show_mem_event_type(__entry->type), + show_trans_type(__entry->transaction_type), + __entry->channel, __entry->rank, __entry->device, + __print_hex(__entry->comp_id, CXL_EVENT_GEN_MED_COMP_ID_SIZE), + show_valid_flags(__entry->validity_flags) + ) +); + #endif /* _CXL_TRACE_EVENTS_H */ /* This part must be outside protection */ From patchwork Thu Nov 10 18:57:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039170 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8FC29C43219 for ; Thu, 10 Nov 2022 19:06:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231220AbiKJTGY (ORCPT ); Thu, 10 Nov 2022 14:06:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49136 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231440AbiKJTFn (ORCPT ); Thu, 10 Nov 2022 14:05:43 -0500 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 80A1C51C24; Thu, 10 Nov 2022 11:05:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668107123; x=1699643123; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6NxWKNG5/NMqsFsnFxq+pETL9MCt4l5GYxbCDKLM2UU=; b=DROwNBhz8gJseSb7QlmknK7autWbWUun1U+9ErtqLHd0zvFMtlJ7gIkz RuZCV1PDUoK9In/6jqnIlqK37E9Qr9qXCWeLn/Rr64xDlrSEN3Z5Dkn3J Lc9uDcEJHmi6eI9E2Prb32v3+yPLFxVln82rpHJ5k5ooz0k6LIk2wFSdq xNuS6zeAmvNY8iT2bOMZ4MzVWtbGzDPDTL7rZqn7iQD35Bp2kc9UEJ71n wu66kQhm/JKSZxQwc8vv73BiqCVS+47WrZ9p7JRZ03sp6rs/6nEEAyuAZ uG+bJ6WgWMfKc7QFuJ5ysTy7ezg7OUmi0HfBziKNLG2KR8JixPtLuu8N8 Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="375662103" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="375662103" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:08 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="882473429" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="882473429" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.6.223]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:08 -0800 From: ira.weiny@intel.com To: Dan Williams Cc: Ira Weiny , Jonathan Cameron , Alison Schofield , Vishal Verma , Ben Widawsky , Steven Rostedt , Davidlohr Bueso , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org Subject: [PATCH 06/11] cxl/mem: Trace DRAM Event Record Date: Thu, 10 Nov 2022 10:57:53 -0800 Message-Id: <20221110185758.879472-7-ira.weiny@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com> References: 
<20221110185758.879472-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Ira Weiny CXL rev 3.0 section 8.2.9.2.1.2 defines the DRAM Event Record. Determine if the event read is a DRAM event record and if so trace the record. Reviewed-by: Jonathan Cameron Signed-off-by: Ira Weiny Reviewed-by: Dave Jiang --- Changes from RFC v2: Output DPA flags as a separate field. Ensure field names match TP_print output Steven prefix TRACE_EVENT with 'cxl_' Jonathan Formatting fix Remove reserved field Changes from RFC: Add reserved byte data Use new CXL header macros Jonathan Use get_unaligned_le{24,16}() for unaligned fields Use 'else if' Dave Jiang s/cxl_dram_event/dram s/cxl_evt_dram_rec/cxl_event_dram Adjust for new phys addr mask --- drivers/cxl/core/mbox.c | 16 ++++++- drivers/cxl/cxlmem.h | 23 ++++++++++ include/trace/events/cxl.h | 92 ++++++++++++++++++++++++++++++++++++++ 3 files changed, 130 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 6d48fdb07700..b03d7b856f3d 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -717,10 +717,19 @@ static const uuid_t gen_media_event_uuid = UUID_INIT(0xfbcd0a77, 0xc260, 0x417f, 0x85, 0xa9, 0x08, 0x8b, 0x16, 0x21, 0xeb, 0xa6); +/* + * DRAM Event Record + * CXL rev 3.0 section 8.2.9.2.1.2; Table 8-44 + */ +static const uuid_t dram_event_uuid = + UUID_INIT(0x601dcbb3, 0x9c06, 0x4eab, + 0xb8, 0xaf, 0x4e, 0x9b, 0xfb, 0x5c, 0x96, 0x24); + static bool cxl_event_tracing_enabled(void) { return trace_cxl_generic_event_enabled() || - trace_cxl_general_media_enabled(); + trace_cxl_general_media_enabled() || + trace_cxl_dram_enabled(); } static void cxl_trace_event_record(const char *dev_name, @@ -735,6 +744,11 @@ static void cxl_trace_event_record(const char *dev_name, trace_cxl_general_media(dev_name, type, rec); return; + } else if (uuid_equal(id, &dram_event_uuid)) { + struct cxl_event_dram *rec = (struct cxl_event_dram *)record; + + trace_cxl_dram(dev_name, type, rec); + return; } /* For unknown record types print just the header */ diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 86197f3168c7..87c877f0940d 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -431,6 +431,29 @@ struct cxl_event_gen_media { u8 reserved[0x2e]; } __packed; +/* + * DRAM Event Record - DER + * CXL rev 3.0 section 8.2.9.2.1.2; Table 3-44 + */ +#define CXL_EVENT_DER_CORRECTION_MASK_SIZE 0x20 +struct cxl_event_dram { + struct cxl_event_record_hdr hdr; + __le64 phys_addr; + u8 descriptor; + u8 type; + u8 transaction_type; + u8 validity_flags[2]; + u8 channel; + u8 rank; + u8 nibble_mask[3]; + u8 bank_group; + u8 bank; + u8 row[3]; + u8 column[2]; + u8 correction_mask[CXL_EVENT_DER_CORRECTION_MASK_SIZE]; + u8 reserved[0x17]; +} __packed; + struct cxl_mbox_get_partition_info { __le64 active_volatile_cap; __le64 active_persistent_cap; diff --git a/include/trace/events/cxl.h b/include/trace/events/cxl.h index a0c20e110708..37bbe59905af 100644 --- a/include/trace/events/cxl.h +++ b/include/trace/events/cxl.h @@ -243,6 +243,98 @@ TRACE_EVENT(cxl_general_media, ) ); +/* + * DRAM Event Record - DER + * + * CXL rev 3.0 section 8.2.9.2.1.2; Table 8-44 + */ +/* + * DRAM Event Record defines many fields the same as the General Media Event + * Record. Reuse those definitions as appropriate. 
+ */ +#define CXL_DER_VALID_CHANNEL BIT(0) +#define CXL_DER_VALID_RANK BIT(1) +#define CXL_DER_VALID_NIBBLE BIT(2) +#define CXL_DER_VALID_BANK_GROUP BIT(3) +#define CXL_DER_VALID_BANK BIT(4) +#define CXL_DER_VALID_ROW BIT(5) +#define CXL_DER_VALID_COLUMN BIT(6) +#define CXL_DER_VALID_CORRECTION_MASK BIT(7) +#define show_dram_valid_flags(flags) __print_flags(flags, "|", \ + { CXL_DER_VALID_CHANNEL, "CHANNEL" }, \ + { CXL_DER_VALID_RANK, "RANK" }, \ + { CXL_DER_VALID_NIBBLE, "NIBBLE" }, \ + { CXL_DER_VALID_BANK_GROUP, "BANK GROUP" }, \ + { CXL_DER_VALID_BANK, "BANK" }, \ + { CXL_DER_VALID_ROW, "ROW" }, \ + { CXL_DER_VALID_COLUMN, "COLUMN" }, \ + { CXL_DER_VALID_CORRECTION_MASK, "CORRECTION MASK" } \ +) + +TRACE_EVENT(cxl_dram, + + TP_PROTO(const char *dev_name, enum cxl_event_log_type log, + struct cxl_event_dram *rec), + + TP_ARGS(dev_name, log, rec), + + TP_STRUCT__entry( + CXL_EVT_TP_entry + /* DRAM */ + __field(u64, dpa) + __field(u8, descriptor) + __field(u8, type) + __field(u8, transaction_type) + __field(u8, channel) + __field(u16, validity_flags) + __field(u16, column) /* Out of order to pack trace record */ + __field(u32, nibble_mask) + __field(u32, row) + __array(u8, cor_mask, CXL_EVENT_DER_CORRECTION_MASK_SIZE) + __field(u8, rank) /* Out of order to pack trace record */ + __field(u8, bank_group) /* Out of order to pack trace record */ + __field(u8, bank) /* Out of order to pack trace record */ + __field(u8, dpa_flags) /* Out of order to pack trace record */ + ), + + TP_fast_assign( + CXL_EVT_TP_fast_assign(dev_name, log, rec->hdr); + + /* DRAM */ + __entry->dpa = le64_to_cpu(rec->phys_addr); + __entry->dpa_flags = __entry->dpa & CXL_DPA_FLAGS_MASK; + __entry->dpa &= CXL_DPA_MASK; + __entry->descriptor = rec->descriptor; + __entry->type = rec->type; + __entry->transaction_type = rec->transaction_type; + __entry->validity_flags = get_unaligned_le16(rec->validity_flags); + __entry->channel = rec->channel; + __entry->rank = rec->rank; + __entry->nibble_mask = get_unaligned_le24(rec->nibble_mask); + __entry->bank_group = rec->bank_group; + __entry->bank = rec->bank; + __entry->row = get_unaligned_le24(rec->row); + __entry->column = get_unaligned_le16(rec->column); + memcpy(__entry->cor_mask, &rec->correction_mask, + CXL_EVENT_DER_CORRECTION_MASK_SIZE); + ), + + CXL_EVT_TP_printk("dpa=%llx dpa_flags='%s' descriptor='%s' type='%s' " \ + "transaction_type='%s' channel=%u rank=%u nibble_mask=%x " \ + "bank_group=%u bank=%u row=%u column=%u cor_mask=%s " \ + "validity_flags='%s'", + __entry->dpa, show_dpa_flags(__entry->dpa_flags), + show_event_desc_flags(__entry->descriptor), + show_mem_event_type(__entry->type), + show_trans_type(__entry->transaction_type), + __entry->channel, __entry->rank, __entry->nibble_mask, + __entry->bank_group, __entry->bank, + __entry->row, __entry->column, + __print_hex(__entry->cor_mask, CXL_EVENT_DER_CORRECTION_MASK_SIZE), + show_dram_valid_flags(__entry->validity_flags) + ) +); + #endif /* _CXL_TRACE_EVENTS_H */ /* This part must be outside protection */ From patchwork Thu Nov 10 18:57:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039168 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6AB6CC4332F for ; Thu, 10 Nov 2022 19:06:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) 
by vger.kernel.org via listexpand id S231434AbiKJTGX (ORCPT ); Thu, 10 Nov 2022 14:06:23 -0500
From: ira.weiny@intel.com
To: Dan Williams
Cc: Ira Weiny, Alison Schofield, Vishal Verma, Ben Widawsky,
    Steven Rostedt, Jonathan Cameron, Davidlohr Bueso,
    linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org
Subject: [PATCH 07/11] cxl/mem: Trace Memory Module Event Record
Date: Thu, 10 Nov 2022 10:57:54 -0800
Message-Id: <20221110185758.879472-8-ira.weiny@intel.com>
In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com>
References: <20221110185758.879472-1-ira.weiny@intel.com>

From: Ira Weiny

CXL rev 3.0 section 8.2.9.2.1.3 defines the Memory Module Event Record.

Determine if the event read is a memory module record and, if so, trace
the record.
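The Memory Module record embeds a Device Health Information snapshot whose Additional Status byte packs several sub-fields; the shift/mask helpers added below pull them apart. A small worked example (the raw value 0x14 is made up for illustration):

	/*
	 * add_status = 0x14 (binary 010100)
	 *
	 *   CXL_DHI_AS_LIFE_USED(0x14)       -> 0x0  (bits 1:0)
	 *   CXL_DHI_AS_DEV_TEMP(0x14)        -> 0x1  (bits 3:2)
	 *   CXL_DHI_AS_COR_VOL_ERR_CNT(0x14) -> 0x1  (bit 4)
	 *   CXL_DHI_AS_COR_PER_ERR_CNT(0x14) -> 0x0  (bit 5)
	 *
	 * The one-bit and two-bit status decoders defined below can then
	 * render these values as "Normal"/"Warning"/"Critical" strings in the
	 * trace output.
	 */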
Signed-off-by: Ira Weiny Reviewed-by: Dave Jiang Reviewed-by: Jonathan Cameron Reviewed-by: Steven Rostedt (Google) --- Changes from RFC v2: Ensure field names match TP_print output Steven prefix TRACE_EVENT with 'cxl_' Jonathan Remove reserved field Define a 1bit and 2 bit status decoder Fix paren alignment Changes from RFC: Clean up spec reference Add reserved data Use new CXL header macros Jonathan Use else if Use get_unaligned_le*() for unaligned fields Dave Jiang s/cxl_mem_mod_event/memory_module s/cxl_evt_mem_mod_rec/cxl_event_mem_module --- drivers/cxl/core/mbox.c | 17 ++++- drivers/cxl/cxlmem.h | 26 +++++++ include/trace/events/cxl.h | 144 +++++++++++++++++++++++++++++++++++++ 3 files changed, 186 insertions(+), 1 deletion(-) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index b03d7b856f3d..879b228a98a0 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -725,11 +725,20 @@ static const uuid_t dram_event_uuid = UUID_INIT(0x601dcbb3, 0x9c06, 0x4eab, 0xb8, 0xaf, 0x4e, 0x9b, 0xfb, 0x5c, 0x96, 0x24); +/* + * Memory Module Event Record + * CXL rev 3.0 section 8.2.9.2.1.3; Table 8-45 + */ +static const uuid_t mem_mod_event_uuid = + UUID_INIT(0xfe927475, 0xdd59, 0x4339, + 0xa5, 0x86, 0x79, 0xba, 0xb1, 0x13, 0xb7, 0x74); + static bool cxl_event_tracing_enabled(void) { return trace_cxl_generic_event_enabled() || trace_cxl_general_media_enabled() || - trace_cxl_dram_enabled(); + trace_cxl_dram_enabled() || + trace_cxl_memory_module_enabled(); } static void cxl_trace_event_record(const char *dev_name, @@ -749,6 +758,12 @@ static void cxl_trace_event_record(const char *dev_name, trace_cxl_dram(dev_name, type, rec); return; + } else if (uuid_equal(id, &mem_mod_event_uuid)) { + struct cxl_event_mem_module *rec = + (struct cxl_event_mem_module *)record; + + trace_cxl_memory_module(dev_name, type, rec); + return; } /* For unknown record types print just the header */ diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 87c877f0940d..03da4f8f74d3 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -454,6 +454,32 @@ struct cxl_event_dram { u8 reserved[0x17]; } __packed; +/* + * Get Health Info Record + * CXL rev 3.0 section 8.2.9.8.3.1; Table 8-100 + */ +struct cxl_get_health_info { + u8 health_status; + u8 media_status; + u8 add_status; + u8 life_used; + u8 device_temp[2]; + u8 dirty_shutdown_cnt[4]; + u8 cor_vol_err_cnt[4]; + u8 cor_per_err_cnt[4]; +} __packed; + +/* + * Memory Module Event Record + * CXL rev 3.0 section 8.2.9.2.1.3; Table 8-45 + */ +struct cxl_event_mem_module { + struct cxl_event_record_hdr hdr; + u8 event_type; + struct cxl_get_health_info info; + u8 reserved[0x3d]; +} __packed; + struct cxl_mbox_get_partition_info { __le64 active_volatile_cap; __le64 active_persistent_cap; diff --git a/include/trace/events/cxl.h b/include/trace/events/cxl.h index 37bbe59905af..05437e13a882 100644 --- a/include/trace/events/cxl.h +++ b/include/trace/events/cxl.h @@ -335,6 +335,150 @@ TRACE_EVENT(cxl_dram, ) ); +/* + * Memory Module Event Record - MMER + * + * CXL res 3.0 section 8.2.9.2.1.3; Table 8-45 + */ +#define CXL_MMER_HEALTH_STATUS_CHANGE 0x00 +#define CXL_MMER_MEDIA_STATUS_CHANGE 0x01 +#define CXL_MMER_LIFE_USED_CHANGE 0x02 +#define CXL_MMER_TEMP_CHANGE 0x03 +#define CXL_MMER_DATA_PATH_ERROR 0x04 +#define CXL_MMER_LAS_ERROR 0x05 +#define show_dev_evt_type(type) __print_symbolic(type, \ + { CXL_MMER_HEALTH_STATUS_CHANGE, "Health Status Change" }, \ + { CXL_MMER_MEDIA_STATUS_CHANGE, "Media Status Change" }, \ + { 
CXL_MMER_LIFE_USED_CHANGE, "Life Used Change" }, \ + { CXL_MMER_TEMP_CHANGE, "Temperature Change" }, \ + { CXL_MMER_DATA_PATH_ERROR, "Data Path Error" }, \ + { CXL_MMER_LAS_ERROR, "LSA Error" } \ +) + +/* + * Device Health Information - DHI + * + * CXL res 3.0 section 8.2.9.8.3.1; Table 8-100 + */ +#define CXL_DHI_HS_MAINTENANCE_NEEDED BIT(0) +#define CXL_DHI_HS_PERFORMANCE_DEGRADED BIT(1) +#define CXL_DHI_HS_HW_REPLACEMENT_NEEDED BIT(2) +#define show_health_status_flags(flags) __print_flags(flags, "|", \ + { CXL_DHI_HS_MAINTENANCE_NEEDED, "Maintenance Needed" }, \ + { CXL_DHI_HS_PERFORMANCE_DEGRADED, "Performance Degraded" }, \ + { CXL_DHI_HS_HW_REPLACEMENT_NEEDED, "Replacement Needed" } \ +) + +#define CXL_DHI_MS_NORMAL 0x00 +#define CXL_DHI_MS_NOT_READY 0x01 +#define CXL_DHI_MS_WRITE_PERSISTENCY_LOST 0x02 +#define CXL_DHI_MS_ALL_DATA_LOST 0x03 +#define CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_EVENT_POWER_LOSS 0x04 +#define CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_EVENT_SHUTDOWN 0x05 +#define CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_IMMINENT 0x06 +#define CXL_DHI_MS_WRITE_ALL_DATA_LOSS_EVENT_POWER_LOSS 0x07 +#define CXL_DHI_MS_WRITE_ALL_DATA_LOSS_EVENT_SHUTDOWN 0x08 +#define CXL_DHI_MS_WRITE_ALL_DATA_LOSS_IMMINENT 0x09 +#define show_media_status(ms) __print_symbolic(ms, \ + { CXL_DHI_MS_NORMAL, \ + "Normal" }, \ + { CXL_DHI_MS_NOT_READY, \ + "Not Ready" }, \ + { CXL_DHI_MS_WRITE_PERSISTENCY_LOST, \ + "Write Persistency Lost" }, \ + { CXL_DHI_MS_ALL_DATA_LOST, \ + "All Data Lost" }, \ + { CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_EVENT_POWER_LOSS, \ + "Write Persistency Loss in the Event of Power Loss" }, \ + { CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_EVENT_SHUTDOWN, \ + "Write Persistency Loss in Event of Shutdown" }, \ + { CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_IMMINENT, \ + "Write Persistency Loss Imminent" }, \ + { CXL_DHI_MS_WRITE_ALL_DATA_LOSS_EVENT_POWER_LOSS, \ + "All Data Loss in Event of Power Loss" }, \ + { CXL_DHI_MS_WRITE_ALL_DATA_LOSS_EVENT_SHUTDOWN, \ + "All Data loss in the Event of Shutdown" }, \ + { CXL_DHI_MS_WRITE_ALL_DATA_LOSS_IMMINENT, \ + "All Data Loss Imminent" } \ +) + +#define CXL_DHI_AS_NORMAL 0x0 +#define CXL_DHI_AS_WARNING 0x1 +#define CXL_DHI_AS_CRITICAL 0x2 +#define show_two_bit_status(as) __print_symbolic(as, \ + { CXL_DHI_AS_NORMAL, "Normal" }, \ + { CXL_DHI_AS_WARNING, "Warning" }, \ + { CXL_DHI_AS_CRITICAL, "Critical" } \ +) +#define show_one_bit_status(as) __print_symbolic(as, \ + { CXL_DHI_AS_NORMAL, "Normal" }, \ + { CXL_DHI_AS_WARNING, "Warning" } \ +) + +#define CXL_DHI_AS_LIFE_USED(as) (as & 0x3) +#define CXL_DHI_AS_DEV_TEMP(as) ((as & 0xC) >> 2) +#define CXL_DHI_AS_COR_VOL_ERR_CNT(as) ((as & 0x10) >> 4) +#define CXL_DHI_AS_COR_PER_ERR_CNT(as) ((as & 0x20) >> 5) + +TRACE_EVENT(cxl_memory_module, + + TP_PROTO(const char *dev_name, enum cxl_event_log_type log, + struct cxl_event_mem_module *rec), + + TP_ARGS(dev_name, log, rec), + + TP_STRUCT__entry( + CXL_EVT_TP_entry + + /* Memory Module Event */ + __field(u8, event_type) + + /* Device Health Info */ + __field(u8, health_status) + __field(u8, media_status) + __field(u8, life_used) + __field(u32, dirty_shutdown_cnt) + __field(u32, cor_vol_err_cnt) + __field(u32, cor_per_err_cnt) + __field(s16, device_temp) + __field(u8, add_status) + ), + + TP_fast_assign( + CXL_EVT_TP_fast_assign(dev_name, log, rec->hdr); + + /* Memory Module Event */ + __entry->event_type = rec->event_type; + + /* Device Health Info */ + __entry->health_status = rec->info.health_status; + __entry->media_status = rec->info.media_status; + __entry->life_used = 
rec->info.life_used; + __entry->dirty_shutdown_cnt = get_unaligned_le32(rec->info.dirty_shutdown_cnt); + __entry->cor_vol_err_cnt = get_unaligned_le32(rec->info.cor_vol_err_cnt); + __entry->cor_per_err_cnt = get_unaligned_le32(rec->info.cor_per_err_cnt); + __entry->device_temp = get_unaligned_le16(rec->info.device_temp); + __entry->add_status = rec->info.add_status; + ), + + CXL_EVT_TP_printk("event_type='%s' health_status='%s' media_status='%s' " \ + "as_life_used=%s as_dev_temp=%s as_cor_vol_err_cnt=%s " \ + "as_cor_per_err_cnt=%s life_used=%u device_temp=%d " \ + "dirty_shutdown_cnt=%u cor_vol_err_cnt=%u cor_per_err_cnt=%u", + show_dev_evt_type(__entry->event_type), + show_health_status_flags(__entry->health_status), + show_media_status(__entry->media_status), + show_two_bit_status(CXL_DHI_AS_LIFE_USED(__entry->add_status)), + show_two_bit_status(CXL_DHI_AS_DEV_TEMP(__entry->add_status)), + show_one_bit_status(CXL_DHI_AS_COR_VOL_ERR_CNT(__entry->add_status)), + show_one_bit_status(CXL_DHI_AS_COR_PER_ERR_CNT(__entry->add_status)), + __entry->life_used, __entry->device_temp, + __entry->dirty_shutdown_cnt, __entry->cor_vol_err_cnt, + __entry->cor_per_err_cnt + ) +); + + #endif /* _CXL_TRACE_EVENTS_H */ /* This part must be outside protection */ From patchwork Thu Nov 10 18:57:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039171 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D82C6C4167B for ; Thu, 10 Nov 2022 19:06:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231476AbiKJTG0 (ORCPT ); Thu, 10 Nov 2022 14:06:26 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47544 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231285AbiKJTFn (ORCPT ); Thu, 10 Nov 2022 14:05:43 -0500 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F25EA5800D; Thu, 10 Nov 2022 11:05:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668107126; x=1699643126; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=F77gPSChHeNKN9pAevgHdvZvcF+m0wKhasT7l+mnn6Q=; b=WHXjeoJbTA/GIbeijV7h5ZDgAzowUGv11pUlqvB+ZP0kWz+Dg6mSYsxf aUP6PFBdzygWngyW2Atx4sb1doZ15qT5jrTR/w4RXh3/aw5AMvXCbklHd v5xwRFqlVpCsd/5YERk3gzqsoks8HXkM8k/d1gcl5Nas8TM3UYpTcq3dc ckKIFfYKJni1gQxHmkA7HyuJKiwvmf2QLLv59IgtQ8LgaJyq9fpLk69HW 9Cf1TpJV+ax+e9otSyGG5J6XaWzykqIftBDbRmFtsTOMP6HuJl3HP7knn kYviktl692WrPWmfYuztRr+UunIHfEuu2dbqZ4HHcBzrMnbXM0gtkGXzX g==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="375662123" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="375662123" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:09 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="882473446" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="882473446" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.6.223]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:09 -0800 From: ira.weiny@intel.com To: Dan Williams Cc: Ira Weiny , 
Alison Schofield , Vishal Verma , Ben Widawsky , Steven Rostedt , Jonathan Cameron , Davidlohr Bueso , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org Subject: [PATCH 08/11] cxl/mem: Wire up event interrupts Date: Thu, 10 Nov 2022 10:57:55 -0800 Message-Id: <20221110185758.879472-9-ira.weiny@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com> References: <20221110185758.879472-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Ira Weiny CXL device events are signaled via interrupts. Each event log may have a different interrupt message number. These message numbers are reported in the Get Event Interrupt Policy mailbox command. Add interrupt support for event logs. Interrupts are allocated as shared interrupts. Therefore, all or some event logs can share the same message number. The driver must deal with the possibility that dynamic capacity is not yet supported by a device it sees. Fallback and retry without dynamic capacity if the first attempt fails. Device capacity event logs interrupt as part of the informational event log. Check the event status to see which log has data. Signed-off-by: Ira Weiny --- Changes from RFC v2 Adjust to new irq 16 vector allocation Jonathan Remove CXL_INT_RES Use irq threads to ensure mailbox commands are executed outside irq context Adjust for optional Dynamic Capacity log --- drivers/cxl/core/mbox.c | 53 +++++++++++++- drivers/cxl/cxlmem.h | 31 ++++++++ drivers/cxl/pci.c | 133 +++++++++++++++++++++++++++++++++++ include/uapi/linux/cxl_mem.h | 2 + 4 files changed, 217 insertions(+), 2 deletions(-) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 879b228a98a0..1e6762af2a00 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -53,6 +53,8 @@ static struct cxl_mem_command cxl_mem_commands[CXL_MEM_COMMAND_ID_MAX] = { CXL_CMD(GET_SUPPORTED_LOGS, 0, CXL_VARIABLE_PAYLOAD, CXL_CMD_FLAG_FORCE_ENABLE), CXL_CMD(GET_EVENT_RECORD, 1, CXL_VARIABLE_PAYLOAD, 0), CXL_CMD(CLEAR_EVENT_RECORD, CXL_VARIABLE_PAYLOAD, 0, 0), + CXL_CMD(GET_EVT_INT_POLICY, 0, 0x5, 0), + CXL_CMD(SET_EVT_INT_POLICY, 0x5, 0, 0), CXL_CMD(GET_FW_INFO, 0, 0x50, 0), CXL_CMD(GET_PARTITION_INFO, 0, 0x20, 0), CXL_CMD(GET_LSA, 0x8, CXL_VARIABLE_PAYLOAD, 0), @@ -791,8 +793,8 @@ static int cxl_clear_event_record(struct cxl_dev_state *cxlds, &payload, sizeof(payload), NULL, 0); } -static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, - enum cxl_event_log_type type) +void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, + enum cxl_event_log_type type) { struct cxl_get_event_payload payload; u16 pl_nr; @@ -837,6 +839,7 @@ static void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, } while (pl_nr > CXL_GET_EVENT_NR_RECORDS || payload.flags & CXL_GET_EVENT_FLAG_MORE_RECORDS); } +EXPORT_SYMBOL_NS_GPL(cxl_mem_get_records_log, CXL); /** * cxl_mem_get_event_records - Get Event Records from the device @@ -867,6 +870,52 @@ void cxl_mem_get_event_records(struct cxl_dev_state *cxlds) } EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL); +int cxl_event_config_msgnums(struct cxl_dev_state *cxlds) +{ + struct cxl_event_interrupt_policy *policy = &cxlds->evt_int_policy; + size_t policy_size = sizeof(*policy); + bool retry = true; + int rc; + + policy->info_settings = CXL_INT_MSI_MSIX; + policy->warn_settings = CXL_INT_MSI_MSIX; + policy->failure_settings = CXL_INT_MSI_MSIX; + policy->fatal_settings = CXL_INT_MSI_MSIX; + policy->dyn_cap_settings = 
CXL_INT_MSI_MSIX; + +again: + rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_SET_EVT_INT_POLICY, + policy, policy_size, NULL, 0); + if (rc < 0) { + /* + * If the device does not support dynamic capacity it may fail + * the command due to an invalid payload. Retry without + * dynamic capacity. + */ + if (retry) { + retry = false; + policy->dyn_cap_settings = 0; + policy_size = sizeof(*policy) - sizeof(policy->dyn_cap_settings); + goto again; + } + dev_err(cxlds->dev, "Failed to set event interrupt policy : %d", + rc); + memset(policy, CXL_INT_NONE, sizeof(*policy)); + return rc; + } + + rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_EVT_INT_POLICY, NULL, 0, + policy, policy_size); + if (rc < 0) { + dev_err(cxlds->dev, "Failed to get event interrupt policy : %d", + rc); + return rc; + } + + return 0; +} +EXPORT_SYMBOL_NS_GPL(cxl_event_config_msgnums, CXL); + /** * cxl_mem_get_partition_info - Get partition info * @cxlds: The device data for the operation diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 03da4f8f74d3..4d9c3ea30c24 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -179,6 +179,31 @@ struct cxl_endpoint_dvsec_info { struct range dvsec_range[2]; }; +/** + * Event Interrupt Policy + * + * CXL rev 3.0 section 8.2.9.2.4; Table 8-52 + */ +enum cxl_event_int_mode { + CXL_INT_NONE = 0x00, + CXL_INT_MSI_MSIX = 0x01, + CXL_INT_FW = 0x02 +}; +#define CXL_EVENT_INT_MODE_MASK 0x3 +#define CXL_EVENT_INT_MSGNUM(setting) (((setting) & 0xf0) >> 4) +struct cxl_event_interrupt_policy { + u8 info_settings; + u8 warn_settings; + u8 failure_settings; + u8 fatal_settings; + u8 dyn_cap_settings; +} __packed; + +static inline bool cxl_evt_int_is_msi(u8 setting) +{ + return CXL_INT_MSI_MSIX == (setting & CXL_EVENT_INT_MODE_MASK); +} + /** * struct cxl_dev_state - The driver device state * @@ -246,6 +271,7 @@ struct cxl_dev_state { resource_size_t component_reg_phys; u64 serial; + struct cxl_event_interrupt_policy evt_int_policy; struct xarray doe_mbs; @@ -259,6 +285,8 @@ enum cxl_opcode { CXL_MBOX_OP_RAW = CXL_MBOX_OP_INVALID, CXL_MBOX_OP_GET_EVENT_RECORD = 0x0100, CXL_MBOX_OP_CLEAR_EVENT_RECORD = 0x0101, + CXL_MBOX_OP_GET_EVT_INT_POLICY = 0x0102, + CXL_MBOX_OP_SET_EVT_INT_POLICY = 0x0103, CXL_MBOX_OP_GET_FW_INFO = 0x0200, CXL_MBOX_OP_ACTIVATE_FW = 0x0202, CXL_MBOX_OP_GET_SUPPORTED_LOGS = 0x0400, @@ -539,7 +567,10 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds); struct cxl_dev_state *cxl_dev_state_create(struct device *dev); void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); +void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, + enum cxl_event_log_type type); void cxl_mem_get_event_records(struct cxl_dev_state *cxlds); +int cxl_event_config_msgnums(struct cxl_dev_state *cxlds); #ifdef CONFIG_CXL_SUSPEND void cxl_mem_active_inc(void); void cxl_mem_active_dec(void); diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index e0d511575b45..64b2e2671043 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -458,6 +458,138 @@ static void cxl_pci_alloc_irq_vectors(struct cxl_dev_state *cxlds) cxlds->nr_irq_vecs = nvecs; } +struct cxl_event_irq_id { + struct cxl_dev_state *cxlds; + u32 status; + unsigned int msgnum; +}; + +static irqreturn_t cxl_event_int_thread(int irq, void *id) +{ + struct cxl_event_irq_id *cxlid = id; + struct cxl_dev_state *cxlds = cxlid->cxlds; + + if (cxlid->status & CXLDEV_EVENT_STATUS_INFO) + cxl_mem_get_records_log(cxlds, 
CXL_EVENT_TYPE_INFO); + if (cxlid->status & CXLDEV_EVENT_STATUS_WARN) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_WARN); + if (cxlid->status & CXLDEV_EVENT_STATUS_FAIL) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FAIL); + if (cxlid->status & CXLDEV_EVENT_STATUS_FATAL) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FATAL); + if (cxlid->status & CXLDEV_EVENT_STATUS_DYNAMIC_CAP) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_DYNAMIC_CAP); + + return IRQ_HANDLED; +} + +static irqreturn_t cxl_event_int_handler(int irq, void *id) +{ + struct cxl_event_irq_id *cxlid = id; + struct cxl_dev_state *cxlds = cxlid->cxlds; + u32 status = readl(cxlds->regs.status + CXLDEV_DEV_EVENT_STATUS_OFFSET); + + if (cxlid->status & status) + return IRQ_WAKE_THREAD; + return IRQ_HANDLED; +} + +static void cxl_free_event_irq(void *id) +{ + struct cxl_event_irq_id *cxlid = id; + struct pci_dev *pdev = to_pci_dev(cxlid->cxlds->dev); + + pci_free_irq(pdev, cxlid->msgnum, id); +} + +static u32 log_type_to_status(enum cxl_event_log_type log_type) +{ + switch (log_type) { + case CXL_EVENT_TYPE_INFO: + return CXLDEV_EVENT_STATUS_INFO | CXLDEV_EVENT_STATUS_DYNAMIC_CAP; + case CXL_EVENT_TYPE_WARN: + return CXLDEV_EVENT_STATUS_WARN; + case CXL_EVENT_TYPE_FAIL: + return CXLDEV_EVENT_STATUS_FAIL; + case CXL_EVENT_TYPE_FATAL: + return CXLDEV_EVENT_STATUS_FATAL; + default: + break; + } + return 0; +} + +static int cxl_request_event_irq(struct cxl_dev_state *cxlds, + enum cxl_event_log_type log_type, + u8 setting) +{ + struct device *dev = cxlds->dev; + struct pci_dev *pdev = to_pci_dev(dev); + struct cxl_event_irq_id *id; + unsigned int msgnum = CXL_EVENT_INT_MSGNUM(setting); + int irq; + + /* Disabled irq is not an error */ + if (!cxl_evt_int_is_msi(setting) || msgnum > cxlds->nr_irq_vecs) { + dev_dbg(dev, "Event interrupt not enabled; %s %u %d\n", + cxl_event_log_type_str(CXL_EVENT_TYPE_INFO), + msgnum, cxlds->nr_irq_vecs); + return 0; + } + + id = devm_kzalloc(dev, sizeof(*id), GFP_KERNEL); + if (!id) + return -ENOMEM; + + id->cxlds = cxlds; + id->msgnum = msgnum; + id->status = log_type_to_status(log_type); + + irq = pci_request_irq(pdev, id->msgnum, cxl_event_int_handler, + cxl_event_int_thread, id, + "%s:event-log-%s", dev_name(dev), + cxl_event_log_type_str(log_type)); + if (irq) + return irq; + + devm_add_action_or_reset(dev, cxl_free_event_irq, id); + return 0; +} + +static void cxl_event_irqsetup(struct cxl_dev_state *cxlds) +{ + struct device *dev = cxlds->dev; + u8 setting; + + if (cxl_event_config_msgnums(cxlds)) + return; + + /* + * Dynamic Capacity shares the info message number + * Nothing to be done except check the status bit in the + * irq thread. 
+ */ + setting = cxlds->evt_int_policy.info_settings; + if (cxl_request_event_irq(cxlds, CXL_EVENT_TYPE_INFO, setting)) + dev_err(dev, "Failed to get interrupt for %s event log\n", + cxl_event_log_type_str(CXL_EVENT_TYPE_INFO)); + + setting = cxlds->evt_int_policy.warn_settings; + if (cxl_request_event_irq(cxlds, CXL_EVENT_TYPE_WARN, setting)) + dev_err(dev, "Failed to get interrupt for %s event log\n", + cxl_event_log_type_str(CXL_EVENT_TYPE_WARN)); + + setting = cxlds->evt_int_policy.failure_settings; + if (cxl_request_event_irq(cxlds, CXL_EVENT_TYPE_FAIL, setting)) + dev_err(dev, "Failed to get interrupt for %s event log\n", + cxl_event_log_type_str(CXL_EVENT_TYPE_FAIL)); + + setting = cxlds->evt_int_policy.fatal_settings; + if (cxl_request_event_irq(cxlds, CXL_EVENT_TYPE_FATAL, setting)) + dev_err(dev, "Failed to get interrupt for %s event log\n", + cxl_event_log_type_str(CXL_EVENT_TYPE_FATAL)); +} + static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) { struct cxl_register_map map; @@ -525,6 +657,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) return rc; cxl_pci_alloc_irq_vectors(cxlds); + cxl_event_irqsetup(cxlds); cxlmd = devm_cxl_add_memdev(cxlds); if (IS_ERR(cxlmd)) diff --git a/include/uapi/linux/cxl_mem.h b/include/uapi/linux/cxl_mem.h index 7c1ad8062792..a8204802fcca 100644 --- a/include/uapi/linux/cxl_mem.h +++ b/include/uapi/linux/cxl_mem.h @@ -26,6 +26,8 @@ ___C(GET_SUPPORTED_LOGS, "Get Supported Logs"), \ ___C(GET_EVENT_RECORD, "Get Event Record"), \ ___C(CLEAR_EVENT_RECORD, "Clear Event Record"), \ + ___C(GET_EVT_INT_POLICY, "Get Event Interrupt Policy"), \ + ___C(SET_EVT_INT_POLICY, "Set Event Interrupt Policy"), \ ___C(GET_FW_INFO, "Get FW Info"), \ ___C(GET_PARTITION_INFO, "Get Partition Information"), \ ___C(GET_LSA, "Get Label Storage Area"), \ From patchwork Thu Nov 10 18:57:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039172 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 97BE7C433FE for ; Thu, 10 Nov 2022 19:06:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231675AbiKJTG1 (ORCPT ); Thu, 10 Nov 2022 14:06:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:48172 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231462AbiKJTFo (ORCPT ); Thu, 10 Nov 2022 14:05:44 -0500 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 13D1151C29; Thu, 10 Nov 2022 11:05:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668107128; x=1699643128; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=X0hTlSjzN6nRB0eAV3xdVOXDyE8Lzm/yx7DpFWSNwVY=; b=kdTGeN939Bqu3tl9+52akaWrFF6hIQ2N/2yfkh5lgVmQLQxb31U3lYW+ ewacPWMWAIQdm4k4Q/8IV18xcKO2K4Iroxeol7Jw6R6UzK3kpUplZwuh2 +HRgjn+tsOERSXNcRozG6j4db5boMve1KWGjdPH33UPicnlme6ozo34fs LHSmhBW0Vy1erqTjmguwfXx4C/sqTHmyBfgUeUl54vm6scYUa/jX0yjVW XHAECWr0MJl7vYjn3/PoMh/IrdZ9Z5MXjSVGCxS/KzM2AaUYEA8OMIaJf RAnlhZAp9OnZl8hJkUc3rpgGR3qC+e5Av8FGYmXNTOIBKUELBmj/8fgDn g==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="375662133" 
X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="375662133" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:10 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="882473452" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="882473452" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.6.223]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:10 -0800 From: ira.weiny@intel.com To: Dan Williams Cc: Ira Weiny , Alison Schofield , Vishal Verma , Ben Widawsky , Steven Rostedt , Jonathan Cameron , Davidlohr Bueso , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org Subject: [PATCH 09/11] cxl/test: Add generic mock events Date: Thu, 10 Nov 2022 10:57:56 -0800 Message-Id: <20221110185758.879472-10-ira.weiny@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com> References: <20221110185758.879472-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Ira Weiny Facilitate testing basic Get/Clear Event functionality by creating multiple logs and generic events with made-up UUIDs. The event data is entirely made up, using patterns that should be easy to spot in trace output. A single sysfs entry resets the event data and triggers collecting the events for testing. Test traces are easy to obtain with a small script such as this:

#!/bin/bash -x
devices=`find /sys/devices/platform -name cxl_mem*`

# Turn on tracing
echo "" > /sys/kernel/tracing/trace
echo 1 > /sys/kernel/tracing/events/cxl/enable
echo 1 > /sys/kernel/tracing/tracing_on

# Generate fake interrupt
for device in $devices; do
	echo 1 > $device/event_trigger
done

# Turn off tracing and report events
echo 0 > /sys/kernel/tracing/tracing_on
cat /sys/kernel/tracing/trace

Signed-off-by: Ira Weiny --- Changes from RFC v2: Adjust to simulate the event status register Changes from RFC: Separate out the event code Adjust for struct changes. Clean up devm_cxl_mock_event_logs() Clean up naming and comments Jonathan Remove dynamic allocation of event logs Clean up comment Remove unneeded xarray Ensure event_trigger sysfs is valid prior to the driver going active.
Dan Remove the fill/reset event sysfs as these operations can be done together --- drivers/cxl/core/mbox.c | 31 +++-- drivers/cxl/cxlmem.h | 1 + tools/testing/cxl/test/Kbuild | 2 +- tools/testing/cxl/test/events.c | 222 ++++++++++++++++++++++++++++++++ tools/testing/cxl/test/events.h | 9 ++ tools/testing/cxl/test/mem.c | 35 ++++- 6 files changed, 286 insertions(+), 14 deletions(-) create mode 100644 tools/testing/cxl/test/events.c create mode 100644 tools/testing/cxl/test/events.h diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 1e6762af2a00..2d74c0f2cbf7 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -841,6 +841,24 @@ void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, } EXPORT_SYMBOL_NS_GPL(cxl_mem_get_records_log, CXL); +/* Direct call for mock testing */ +void __cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status) +{ + dev_dbg(cxlds->dev, "Reading event logs: %x\n", status); + + if (status & CXLDEV_EVENT_STATUS_INFO) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_INFO); + if (status & CXLDEV_EVENT_STATUS_WARN) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_WARN); + if (status & CXLDEV_EVENT_STATUS_FAIL) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FAIL); + if (status & CXLDEV_EVENT_STATUS_FATAL) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FATAL); + if (status & CXLDEV_EVENT_STATUS_DYNAMIC_CAP) + cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_DYNAMIC_CAP); +} +EXPORT_SYMBOL_NS_GPL(__cxl_mem_get_event_records, CXL); + /** * cxl_mem_get_event_records - Get Event Records from the device * @cxlds: The device data for the operation @@ -855,18 +873,7 @@ void cxl_mem_get_event_records(struct cxl_dev_state *cxlds) { u32 status = readl(cxlds->regs.status + CXLDEV_DEV_EVENT_STATUS_OFFSET); - dev_dbg(cxlds->dev, "Reading event logs: %x\n", status); - - if (status & CXLDEV_EVENT_STATUS_INFO) - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_INFO); - if (status & CXLDEV_EVENT_STATUS_WARN) - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_WARN); - if (status & CXLDEV_EVENT_STATUS_FAIL) - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FAIL); - if (status & CXLDEV_EVENT_STATUS_FATAL) - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_FATAL); - if (status & CXLDEV_EVENT_STATUS_DYNAMIC_CAP) - cxl_mem_get_records_log(cxlds, CXL_EVENT_TYPE_DYNAMIC_CAP); + __cxl_mem_get_event_records(cxlds, status); } EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL); diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 4d9c3ea30c24..77bcbaa16dd3 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -569,6 +569,7 @@ void set_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds void clear_exclusive_cxl_commands(struct cxl_dev_state *cxlds, unsigned long *cmds); void cxl_mem_get_records_log(struct cxl_dev_state *cxlds, enum cxl_event_log_type type); +void __cxl_mem_get_event_records(struct cxl_dev_state *cxlds, u32 status); void cxl_mem_get_event_records(struct cxl_dev_state *cxlds); int cxl_event_config_msgnums(struct cxl_dev_state *cxlds); #ifdef CONFIG_CXL_SUSPEND diff --git a/tools/testing/cxl/test/Kbuild b/tools/testing/cxl/test/Kbuild index 4e59e2c911f6..64b14b83d8d9 100644 --- a/tools/testing/cxl/test/Kbuild +++ b/tools/testing/cxl/test/Kbuild @@ -7,4 +7,4 @@ obj-m += cxl_mock_mem.o cxl_test-y := cxl.o cxl_mock-y := mock.o -cxl_mock_mem-y := mem.o +cxl_mock_mem-y := mem.o events.o diff --git a/tools/testing/cxl/test/events.c b/tools/testing/cxl/test/events.c new file mode 100644 index 000000000000..a4816f230bb5 --- 
/dev/null +++ b/tools/testing/cxl/test/events.c @@ -0,0 +1,222 @@ +// SPDX-License-Identifier: GPL-2.0-only +// Copyright(c) 2022 Intel Corporation. All rights reserved. + +#include +#include + +#include "events.h" + +#define CXL_TEST_EVENT_CNT_MAX 15 + +struct mock_event_log { + int cur_event; + int nr_events; + struct cxl_event_record_raw *events[CXL_TEST_EVENT_CNT_MAX]; +}; + +struct mock_event_store { + struct cxl_dev_state *cxlds; + struct mock_event_log mock_logs[CXL_EVENT_TYPE_MAX]; + u32 ev_status; +}; + +DEFINE_XARRAY(mock_dev_event_store); + +struct mock_event_log *find_event_log(struct device *dev, int log_type) +{ + struct mock_event_store *mes = xa_load(&mock_dev_event_store, + (unsigned long)dev); + + if (!mes || log_type >= CXL_EVENT_TYPE_MAX) + return NULL; + return &mes->mock_logs[log_type]; +} + +struct cxl_event_record_raw *get_cur_event(struct mock_event_log *log) +{ + return log->events[log->cur_event]; +} + +__le16 get_cur_event_handle(struct mock_event_log *log) +{ + return cpu_to_le16(log->cur_event); +} + +static bool log_empty(struct mock_event_log *log) +{ + return log->cur_event == log->nr_events; +} + +static int log_rec_left(struct mock_event_log *log) +{ + return log->nr_events - log->cur_event; +} + +static void event_store_add_event(struct mock_event_store *mes, + enum cxl_event_log_type log_type, + struct cxl_event_record_raw *event) +{ + struct mock_event_log *log; + + if (WARN_ON(log_type >= CXL_EVENT_TYPE_MAX)) + return; + + log = &mes->mock_logs[log_type]; + if (WARN_ON(log->nr_events >= CXL_TEST_EVENT_CNT_MAX)) + return; + + log->events[log->nr_events] = event; + log->nr_events++; +} + +int mock_get_event(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd) +{ + struct cxl_get_event_payload *pl; + struct mock_event_log *log; + u8 log_type; + + /* Valid request? */ + if (cmd->size_in != sizeof(log_type)) + return -EINVAL; + + log_type = *((u8 *)cmd->payload_in); + if (log_type >= CXL_EVENT_TYPE_MAX) + return -EINVAL; + + log = find_event_log(cxlds->dev, log_type); + if (!log || log_empty(log)) + goto no_data; + + pl = cmd->payload_out; + memset(pl, 0, sizeof(*pl)); + + pl->record_count = cpu_to_le16(1); + + if (log_rec_left(log) > 1) + pl->flags |= CXL_GET_EVENT_FLAG_MORE_RECORDS; + + memcpy(&pl->record[0], get_cur_event(log), sizeof(pl->record[0])); + pl->record[0].hdr.handle = get_cur_event_handle(log); + return 0; + +no_data: + /* Room for header? */ + if (cmd->size_out < (sizeof(*pl) - sizeof(pl->record[0]))) + return -EINVAL; + + memset(cmd->payload_out, 0, cmd->size_out); + return 0; +} +EXPORT_SYMBOL_GPL(mock_get_event); + +/* + * Get and clear event only handle 1 record at a time as this is what is + * currently implemented in the main code. + */ +int mock_clear_event(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd) +{ + struct cxl_mbox_clear_event_payload *pl = cmd->payload_in; + struct mock_event_log *log; + u8 log_type = pl->event_log; + + /* Don't handle more than 1 record at a time */ + if (pl->nr_recs != 1) + return -EINVAL; + + if (log_type >= CXL_EVENT_TYPE_MAX) + return -EINVAL; + + log = find_event_log(cxlds->dev, log_type); + if (!log) + return 0; /* No mock data in this log */ + + /* + * Test code only reported 1 event at a time. So only support 1 event + * being cleared. 
+ */ + if (log->cur_event != le16_to_cpu(pl->handle[0])) { + dev_err(cxlds->dev, "Clearing events out of order\n"); + return -EINVAL; + } + + log->cur_event++; + return 0; +} +EXPORT_SYMBOL_GPL(mock_clear_event); + +void cxl_mock_event_trigger(struct device *dev) +{ + struct mock_event_store *mes = xa_load(&mock_dev_event_store, + (unsigned long)dev); + int i; + + for (i = CXL_EVENT_TYPE_INFO; i < CXL_EVENT_TYPE_MAX; i++) { + struct mock_event_log *log; + + log = find_event_log(dev, i); + if (log) + log->cur_event = 0; + } + + __cxl_mem_get_event_records(mes->cxlds, mes->ev_status); +} +EXPORT_SYMBOL_GPL(cxl_mock_event_trigger); + +struct cxl_event_record_raw maint_needed = { + .hdr = { + .id = UUID_INIT(0xDEADBEEF, 0xCAFE, 0xBABE, + 0xa5, 0x5a, 0xa5, 0x5a, 0xa5, 0xa5, 0x5a, 0xa5), + .length = sizeof(struct cxl_event_record_raw), + .flags[0] = CXL_EVENT_RECORD_FLAG_MAINT_NEEDED, + /* .handle = Set dynamically */ + .related_handle = cpu_to_le16(0xa5b6), + }, + .data = { 0xDE, 0xAD, 0xBE, 0xEF }, +}; + +struct cxl_event_record_raw hardware_replace = { + .hdr = { + .id = UUID_INIT(0xBABECAFE, 0xBEEF, 0xDEAD, + 0xa5, 0x5a, 0xa5, 0x5a, 0xa5, 0xa5, 0x5a, 0xa5), + .length = sizeof(struct cxl_event_record_raw), + .flags[0] = CXL_EVENT_RECORD_FLAG_HW_REPLACE, + /* .handle = Set dynamically */ + .related_handle = cpu_to_le16(0xb6a5), + }, + .data = { 0xDE, 0xAD, 0xBE, 0xEF }, +}; + +u32 cxl_mock_add_event_logs(struct cxl_dev_state *cxlds) +{ + struct device *dev = cxlds->dev; + struct mock_event_store *mes; + + mes = devm_kzalloc(dev, sizeof(*mes), GFP_KERNEL); + if (WARN_ON(!mes)) + return 0; + mes->cxlds = cxlds; + + if (xa_insert(&mock_dev_event_store, (unsigned long)dev, mes, + GFP_KERNEL)) { + dev_err(dev, "Event store not available for %s\n", + dev_name(dev)); + return 0; + } + + event_store_add_event(mes, CXL_EVENT_TYPE_INFO, &maint_needed); + mes->ev_status |= CXLDEV_EVENT_STATUS_INFO; + + event_store_add_event(mes, CXL_EVENT_TYPE_FATAL, &hardware_replace); + mes->ev_status |= CXLDEV_EVENT_STATUS_FATAL; + + return mes->ev_status; +} +EXPORT_SYMBOL_GPL(cxl_mock_add_event_logs); + +void cxl_mock_remove_event_logs(struct device *dev) +{ + struct mock_event_store *mes; + + mes = xa_erase(&mock_dev_event_store, (unsigned long)dev); +} +EXPORT_SYMBOL_GPL(cxl_mock_remove_event_logs); diff --git a/tools/testing/cxl/test/events.h b/tools/testing/cxl/test/events.h new file mode 100644 index 000000000000..5bebc6a0a01b --- /dev/null +++ b/tools/testing/cxl/test/events.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include + +int mock_get_event(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); +int mock_clear_event(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd); +u32 cxl_mock_add_event_logs(struct cxl_dev_state *cxlds); +void cxl_mock_remove_event_logs(struct device *dev); +void cxl_mock_event_trigger(struct device *dev); diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c index e2f5445d24ff..333fa8527a07 100644 --- a/tools/testing/cxl/test/mem.c +++ b/tools/testing/cxl/test/mem.c @@ -8,6 +8,7 @@ #include #include #include +#include "events.h" #define LSA_SIZE SZ_128K #define DEV_SIZE SZ_2G @@ -224,6 +225,12 @@ static int cxl_mock_mbox_send(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd * case CXL_MBOX_OP_GET_PARTITION_INFO: rc = mock_partition_info(cxlds, cmd); break; + case CXL_MBOX_OP_GET_EVENT_RECORD: + rc = mock_get_event(cxlds, cmd); + break; + case CXL_MBOX_OP_CLEAR_EVENT_RECORD: + rc = mock_clear_event(cxlds, cmd); + break; case 
CXL_MBOX_OP_SET_LSA: rc = mock_set_lsa(cxlds, cmd); break; @@ -245,11 +252,27 @@ static void label_area_release(void *lsa) vfree(lsa); } +static ssize_t event_trigger_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + cxl_mock_event_trigger(dev); + return count; +} +static DEVICE_ATTR_WO(event_trigger); + +static struct attribute *cxl_mock_event_attrs[] = { + &dev_attr_event_trigger.attr, + NULL +}; +ATTRIBUTE_GROUPS(cxl_mock_event); + static int cxl_mock_mem_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; struct cxl_memdev *cxlmd; struct cxl_dev_state *cxlds; + u32 ev_status; void *lsa; int rc; @@ -281,11 +304,13 @@ static int cxl_mock_mem_probe(struct platform_device *pdev) if (rc) return rc; + ev_status = cxl_mock_add_event_logs(cxlds); + cxlmd = devm_cxl_add_memdev(cxlds); if (IS_ERR(cxlmd)) return PTR_ERR(cxlmd); - cxl_mem_get_event_records(cxlds); + __cxl_mem_get_event_records(cxlds, ev_status); if (resource_size(&cxlds->pmem_res) && IS_ENABLED(CONFIG_CXL_PMEM)) rc = devm_cxl_add_nvdimm(dev, cxlmd); @@ -293,6 +318,12 @@ static int cxl_mock_mem_probe(struct platform_device *pdev) return 0; } +static int cxl_mock_mem_remove(struct platform_device *pdev) +{ + cxl_mock_remove_event_logs(&pdev->dev); + return 0; +} + static const struct platform_device_id cxl_mock_mem_ids[] = { { .name = "cxl_mem", }, { }, @@ -301,9 +332,11 @@ MODULE_DEVICE_TABLE(platform, cxl_mock_mem_ids); static struct platform_driver cxl_mock_mem_driver = { .probe = cxl_mock_mem_probe, + .remove = cxl_mock_mem_remove, .id_table = cxl_mock_mem_ids, .driver = { .name = KBUILD_MODNAME, + .dev_groups = cxl_mock_event_groups, }, }; From patchwork Thu Nov 10 18:57:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039169 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27A3DC43217 for ; Thu, 10 Nov 2022 19:06:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231665AbiKJTGZ (ORCPT ); Thu, 10 Nov 2022 14:06:25 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47594 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231455AbiKJTFo (ORCPT ); Thu, 10 Nov 2022 14:05:44 -0500 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4696A53EC6; Thu, 10 Nov 2022 11:05:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668107128; x=1699643128; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=rXCfHgqEwojyPNUhIvBOVf4I3jSPcxkWsQv6Y0z9kg4=; b=RBLf0vx+dv5EZmRWyQX5FaJJQWLautpHz9WFNg4HixrVB2ZaHnT5SatK B65FXyvdzmLrGV15SxxjTCciPxpPFaRWo7vvpa10zeMD3UsuLl7ayXORZ t67JrqjlumzmYq6Bnhmpmn6+7bDt3NXxM0tZSy2txv493oI3fPLdIMK+F QtlOykKzbg+8sFU/cf55dH7mamk46xS3iW6JKm0kOZMHv8kgfqJ3zb6Cy VIRFg2h555zX6av8ARqXSid3LDPl9V5UcpIJ8XaqQJeQKw/LK6Pa/dAoF 4WWNJCtu1KfC4c3p/Li+ZDoXgyn8o+rtKEYqoN7JvgmRN/TRYzSJyVhXA Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="375662151" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="375662151" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:11 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="882473459" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="882473459" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.6.223]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:11 -0800 From: ira.weiny@intel.com To: Dan Williams Cc: Ira Weiny , Alison Schofield , Vishal Verma , Ben Widawsky , Steven Rostedt , Jonathan Cameron , Davidlohr Bueso , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org Subject: [PATCH 10/11] cxl/test: Add specific events Date: Thu, 10 Nov 2022 10:57:57 -0800 Message-Id: <20221110185758.879472-11-ira.weiny@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221110185758.879472-1-ira.weiny@intel.com> References: <20221110185758.879472-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Ira Weiny Each type of event has different trace point outputs. Add mock General Media Event, DRAM event, and Memory Module Event records to the mock list of events returned. Signed-off-by: Ira Weiny Reviewed-by: Jonathan Cameron --- Changes from RFC: Adjust for struct changes adjust for unaligned fields --- tools/testing/cxl/test/events.c | 70 +++++++++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) diff --git a/tools/testing/cxl/test/events.c b/tools/testing/cxl/test/events.c index a4816f230bb5..8693f3fb9cbb 100644 --- a/tools/testing/cxl/test/events.c +++ b/tools/testing/cxl/test/events.c @@ -186,6 +186,70 @@ struct cxl_event_record_raw hardware_replace = { .data = { 0xDE, 0xAD, 0xBE, 0xEF }, }; +struct cxl_event_gen_media gen_media = { + .hdr = { + .id = UUID_INIT(0xfbcd0a77, 0xc260, 0x417f, + 0x85, 0xa9, 0x08, 0x8b, 0x16, 0x21, 0xeb, 0xa6), + .length = sizeof(struct cxl_event_gen_media), + .flags[0] = CXL_EVENT_RECORD_FLAG_PERMANENT, + /* .handle = Set dynamically */ + .related_handle = cpu_to_le16(0), + }, + .phys_addr = cpu_to_le64(0x2000), + .descriptor = CXL_GMER_EVT_DESC_UNCORECTABLE_EVENT, + .type = CXL_GMER_MEM_EVT_TYPE_DATA_PATH_ERROR, + .transaction_type = CXL_GMER_TRANS_HOST_WRITE, + .validity_flags = { CXL_GMER_VALID_CHANNEL | + CXL_GMER_VALID_RANK, 0 }, + .channel = 1, + .rank = 30 +}; + +struct cxl_event_dram dram = { + .hdr = { + .id = UUID_INIT(0x601dcbb3, 0x9c06, 0x4eab, + 0xb8, 0xaf, 0x4e, 0x9b, 0xfb, 0x5c, 0x96, 0x24), + .length = sizeof(struct cxl_event_dram), + .flags[0] = CXL_EVENT_RECORD_FLAG_PERF_DEGRADED, + /* .handle = Set dynamically */ + .related_handle = cpu_to_le16(0), + }, + .phys_addr = cpu_to_le64(0x8000), + .descriptor = CXL_GMER_EVT_DESC_THRESHOLD_EVENT, + .type = CXL_GMER_MEM_EVT_TYPE_INV_ADDR, + .transaction_type = CXL_GMER_TRANS_INTERNAL_MEDIA_SCRUB, + .validity_flags = { CXL_DER_VALID_CHANNEL | + CXL_DER_VALID_BANK_GROUP | + CXL_DER_VALID_BANK | + CXL_DER_VALID_COLUMN, 0 }, + .channel = 1, + .bank_group = 5, + .bank = 2, + .column = { 0xDE, 0xAD}, +}; + +struct cxl_event_mem_module mem_module = { + .hdr = { + .id = UUID_INIT(0xfe927475, 0xdd59, 0x4339, + 0xa5, 0x86, 0x79, 0xba, 0xb1, 0x13, 0xb7, 0x74), + .length = sizeof(struct cxl_event_mem_module), + /* .handle = Set dynamically */ + .related_handle = cpu_to_le16(0), + }, + .event_type = CXL_MMER_TEMP_CHANGE, + .info = { + .health_status = CXL_DHI_HS_PERFORMANCE_DEGRADED, + .media_status = CXL_DHI_MS_ALL_DATA_LOST, + .add_status = (CXL_DHI_AS_CRITICAL << 2) | + (CXL_DHI_AS_WARNING << 4) | + (CXL_DHI_AS_WARNING << 
5), + .device_temp = { 0xDE, 0xAD}, + .dirty_shutdown_cnt = { 0xde, 0xad, 0xbe, 0xef }, + .cor_vol_err_cnt = { 0xde, 0xad, 0xbe, 0xef }, + .cor_per_err_cnt = { 0xde, 0xad, 0xbe, 0xef }, + } +}; + u32 cxl_mock_add_event_logs(struct cxl_dev_state *cxlds) { struct device *dev = cxlds->dev; @@ -204,9 +268,15 @@ u32 cxl_mock_add_event_logs(struct cxl_dev_state *cxlds) } event_store_add_event(mes, CXL_EVENT_TYPE_INFO, &maint_needed); + event_store_add_event(mes, CXL_EVENT_TYPE_INFO, + (struct cxl_event_record_raw *)&gen_media); + event_store_add_event(mes, CXL_EVENT_TYPE_INFO, + (struct cxl_event_record_raw *)&mem_module); mes->ev_status |= CXLDEV_EVENT_STATUS_INFO; event_store_add_event(mes, CXL_EVENT_TYPE_FATAL, &hardware_replace); + event_store_add_event(mes, CXL_EVENT_TYPE_FATAL, + (struct cxl_event_record_raw *)&dram); mes->ev_status |= CXLDEV_EVENT_STATUS_FATAL; return mes->ev_status; From patchwork Thu Nov 10 18:57:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13039173 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DC6C6C43217 for ; Thu, 10 Nov 2022 19:06:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230452AbiKJTG1 (ORCPT ); Thu, 10 Nov 2022 14:06:27 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47604 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231478AbiKJTFp (ORCPT ); Thu, 10 Nov 2022 14:05:45 -0500 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 200E754B18; Thu, 10 Nov 2022 11:05:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1668107129; x=1699643129; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=tGLSIWnfH0DX3OQVfNiE0JPNGcqNJkXbIRqhBGzlC2A=; b=H1fb9gCwAejUnLOr8+PLS7V92QZNyCF0aQ0mAlEahoEs+5eLNXrRDEZ0 oGptIDEsLe3MgMfB1WVg/pM0syS5LpTO9lxaBytPJIJyBryf4sne5O3RN mn2CbK2aKpewDG0EbPgzLCV4XZ0Z4+Z7k7h9Jjf0AB0DNWLFlgptPDkJz WsvrUqblwonVZD8J3kGC6SgwzQcvMeGO7LzRVRVMMmJ19l3b3EnZJRt3G QVTixePUvNHkiLDzzIXIjmvK8WOdLOoKbq3b6GGBQYL+PdFuJEQLNnQ9z +8gdGC8F1puYM8tirQKvjmXrGDIcGiuQ+yWleYz5TxppoQMFHOkV3bbg6 w==; X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="375662165" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="375662165" Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:11 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10527"; a="882473467" X-IronPort-AV: E=Sophos;i="5.96,154,1665471600"; d="scan'208";a="882473467" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.6.223]) by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Nov 2022 10:58:11 -0800 From: ira.weiny@intel.com To: Dan Williams Cc: Ira Weiny , Alison Schofield , Vishal Verma , Ben Widawsky , Steven Rostedt , Jonathan Cameron , Davidlohr Bueso , linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org Subject: [PATCH 11/11] cxl/test: Simulate event log overflow Date: Thu, 10 Nov 2022 10:57:58 -0800 Message-Id: <20221110185758.879472-12-ira.weiny@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: 
<20221110185758.879472-1-ira.weiny@intel.com> References: <20221110185758.879472-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Ira Weiny Log overflow is marked by a separate trace message. Simulate a log with lots of messages and flag overflow until it is drained a bit. Signed-off-by: Ira Weiny Reviewed-by: Jonathan Cameron --- Changes from RFC Adjust for new struct changes --- tools/testing/cxl/test/events.c | 37 +++++++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+) diff --git a/tools/testing/cxl/test/events.c b/tools/testing/cxl/test/events.c index 8693f3fb9cbb..5ce257114f4e 100644 --- a/tools/testing/cxl/test/events.c +++ b/tools/testing/cxl/test/events.c @@ -69,11 +69,21 @@ static void event_store_add_event(struct mock_event_store *mes, log->nr_events++; } +static u16 log_overflow(struct mock_event_log *log) +{ + int cnt = log_rec_left(log) - 5; + + if (cnt < 0) + return 0; + return cnt; +} + int mock_get_event(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd) { struct cxl_get_event_payload *pl; struct mock_event_log *log; u8 log_type; + u16 nr_overflow; /* Valid request? */ if (cmd->size_in != sizeof(log_type)) @@ -95,6 +105,20 @@ int mock_get_event(struct cxl_dev_state *cxlds, struct cxl_mbox_cmd *cmd) if (log_rec_left(log) > 1) pl->flags |= CXL_GET_EVENT_FLAG_MORE_RECORDS; + nr_overflow = log_overflow(log); + if (nr_overflow) { + u64 ns; + + pl->flags |= CXL_GET_EVENT_FLAG_OVERFLOW; + pl->overflow_err_count = cpu_to_le16(nr_overflow); + ns = ktime_get_real_ns(); + ns -= 5000000000; /* 5s ago */ + pl->first_overflow_timestamp = cpu_to_le64(ns); + ns = ktime_get_real_ns(); + ns -= 1000000000; /* 1s ago */ + pl->last_overflow_timestamp = cpu_to_le64(ns); + } + memcpy(&pl->record[0], get_cur_event(log), sizeof(pl->record[0])); pl->record[0].hdr.handle = get_cur_event_handle(log); return 0; @@ -274,6 +298,19 @@ u32 cxl_mock_add_event_logs(struct cxl_dev_state *cxlds) (struct cxl_event_record_raw *)&mem_module); mes->ev_status |= CXLDEV_EVENT_STATUS_INFO; + event_store_add_event(mes, CXL_EVENT_TYPE_FAIL, &maint_needed); + event_store_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); + event_store_add_event(mes, CXL_EVENT_TYPE_FAIL, + (struct cxl_event_record_raw *)&dram); + event_store_add_event(mes, CXL_EVENT_TYPE_FAIL, + (struct cxl_event_record_raw *)&gen_media); + event_store_add_event(mes, CXL_EVENT_TYPE_FAIL, + (struct cxl_event_record_raw *)&mem_module); + event_store_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); + event_store_add_event(mes, CXL_EVENT_TYPE_FAIL, + (struct cxl_event_record_raw *)&dram); + mes->ev_status |= CXLDEV_EVENT_STATUS_FAIL; + event_store_add_event(mes, CXL_EVENT_TYPE_FATAL, &hardware_replace); event_store_add_event(mes, CXL_EVENT_TYPE_FATAL, (struct cxl_event_record_raw *)&dram);