From patchwork Mon Oct 10 22:29:39 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13003290
From: ira.weiny@intel.com
To: Michael Tsirkin, Ben Widawsky, Jonathan Cameron
Cc: Ira Weiny, qemu-devel@nongnu.org, linux-cxl@vger.kernel.org
Subject: [RFC PATCH 1/6] qemu/bswap: Add const_le64()
Date: Mon, 10 Oct 2022 15:29:39 -0700
Message-Id: <20221010222944.3923556-2-ira.weiny@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20221010222944.3923556-1-ira.weiny@intel.com>
References: <20221010222944.3923556-1-ira.weiny@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

From: Ira Weiny

gcc requires constant expressions in static initializers, so the
cpu_to_le*() calls cannot be used there; const_le16() and const_le32()
exist for this purpose.  Add a 64-bit version, const_le64().

Signed-off-by: Ira Weiny
Reviewed-by: Jonathan Cameron
Reviewed-by: Peter Maydell
---
 include/qemu/bswap.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/qemu/bswap.h b/include/qemu/bswap.h
index 346d05f2aab3..08e607821102 100644
--- a/include/qemu/bswap.h
+++ b/include/qemu/bswap.h
@@ -192,10 +192,20 @@ CPU_CONVERT(le, 64, uint64_t)
     (((_x) & 0x0000ff00U) << 8) | \
     (((_x) & 0x00ff0000U) >> 8) | \
     (((_x) & 0xff000000U) >> 24))
+# define const_le64(_x) \
+    ((((_x) & 0x00000000000000ffU) << 56) | \
+     (((_x) & 0x000000000000ff00U) << 40) | \
+     (((_x) & 0x0000000000ff0000U) << 24) | \
+     (((_x) & 0x00000000ff000000U) << 8) | \
+     (((_x) & 0x000000ff00000000U) >> 8) | \
+     (((_x) & 0x0000ff0000000000U) >> 24) | \
+     (((_x) & 0x00ff000000000000U) >> 40) | \
+     (((_x) & 0xff00000000000000U) >> 56))
 # define const_le16(_x) \
     ((((_x) & 0x00ff) << 8) | \
      (((_x) & 0xff00) >> 8))
 #else
+# define const_le64(_x) (_x)
 # define const_le32(_x) (_x)
 # define const_le16(_x) (_x)
 #endif

From patchwork Mon Oct 10 22:29:40 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13003291
From: ira.weiny@intel.com
To: Michael Tsirkin, Ben Widawsky, Jonathan Cameron
Cc: Ira Weiny, qemu-devel@nongnu.org, linux-cxl@vger.kernel.org
Subject: [RFC PATCH 2/6] qemu/uuid: Add UUID static initializer
Date: Mon, 10 Oct 2022 15:29:40 -0700
Message-Id: <20221010222944.3923556-3-ira.weiny@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20221010222944.3923556-1-ira.weiny@intel.com>
References: <20221010222944.3923556-1-ira.weiny@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

From: Ira Weiny

UUIDs are defined in network byte order, but no static initializer was
available for UUIDs in this standard big-endian format.  Define a
big-endian initializer for UUIDs.

Signed-off-by: Ira Weiny
Reviewed-by: Jonathan Cameron
---
 include/qemu/uuid.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/qemu/uuid.h b/include/qemu/uuid.h
index 9925febfa54d..dc40ee1fc998 100644
--- a/include/qemu/uuid.h
+++ b/include/qemu/uuid.h
@@ -61,6 +61,18 @@ typedef struct {
     (clock_seq_hi_and_reserved), (clock_seq_low), (node0), (node1), (node2),\
     (node3), (node4), (node5) }

+/* Normal (network byte order) UUID */
+#define UUID(time_low, time_mid, time_hi_and_version,                    \
+             clock_seq_hi_and_reserved, clock_seq_low, node0, node1, node2, \
+             node3, node4, node5)                                        \
+  { ((time_low) >> 24) & 0xff, ((time_low) >> 16) & 0xff,                \
+    ((time_low) >> 8) & 0xff, (time_low) & 0xff,                         \
+    ((time_mid) >> 8) & 0xff, (time_mid) & 0xff,                         \
+    ((time_hi_and_version) >> 8) & 0xff, (time_hi_and_version) & 0xff,   \
+    (clock_seq_hi_and_reserved), (clock_seq_low),                        \
+    (node0), (node1), (node2), (node3), (node4), (node5)                 \
+  }
+
 #define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-" \
                  "%02hhx%02hhx-%02hhx%02hhx-" \
                  "%02hhx%02hhx-" \

From patchwork Mon Oct 10 22:29:41 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13003293
From: ira.weiny@intel.com
To: Michael Tsirkin, Ben Widawsky, Jonathan Cameron
Cc: Ira Weiny, qemu-devel@nongnu.org, linux-cxl@vger.kernel.org
Subject: [RFC PATCH 3/6] hw/cxl/cxl-events: Add CXL mock events
Date: Mon, 10 Oct 2022 15:29:41 -0700
Message-Id: <20221010222944.3923556-4-ira.weiny@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20221010222944.3923556-1-ira.weiny@intel.com>
References: <20221010222944.3923556-1-ira.weiny@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

From: Ira Weiny

To facilitate testing of guest software, add mock events and code to
support iterating through the event logs.

Signed-off-by: Ira Weiny
---
 hw/cxl/cxl-events.c         | 248 ++++++++++++++++++++++++++++++++++++
 hw/cxl/meson.build          |   1 +
 include/hw/cxl/cxl_device.h |  19 +++
 include/hw/cxl/cxl_events.h | 173 +++++++++++++++++++++++++
 4 files changed, 441 insertions(+)
 create mode 100644 hw/cxl/cxl-events.c
 create mode 100644 include/hw/cxl/cxl_events.h

diff --git a/hw/cxl/cxl-events.c b/hw/cxl/cxl-events.c
new file mode 100644
index 000000000000..c275280bcb64
--- /dev/null
+++ b/hw/cxl/cxl-events.c
@@ -0,0 +1,248 @@
+/*
+ * CXL Event processing
+ *
+ * Copyright(C) 2022 Intel Corporation.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#include
+
+#include "qemu/osdep.h"
+#include "qemu/bswap.h"
+#include "qemu/typedefs.h"
+#include "hw/cxl/cxl.h"
+#include "hw/cxl/cxl_events.h"
+
+struct cxl_event_log *find_event_log(CXLDeviceState *cxlds, int log_type)
+{
+    if (log_type >= CXL_EVENT_TYPE_MAX) {
+        return NULL;
+    }
+    return &cxlds->event_logs[log_type];
+}
+
+struct cxl_event_record_raw *get_cur_event(struct cxl_event_log *log)
+{
+    return log->events[log->cur_event];
+}
+
+uint16_t get_cur_event_handle(struct cxl_event_log *log)
+{
+    return cpu_to_le16(log->cur_event);
+}
+
+bool log_empty(struct cxl_event_log *log)
+{
+    return log->cur_event == log->nr_events;
+}
+
+int log_rec_left(struct cxl_event_log *log)
+{
+    return log->nr_events - log->cur_event;
+}
+
+static void event_store_add_event(CXLDeviceState *cxlds,
+                                  enum cxl_event_log_type log_type,
+                                  struct cxl_event_record_raw *event)
+{
+    struct cxl_event_log *log;
+
+    assert(log_type < CXL_EVENT_TYPE_MAX);
+
+    log = &cxlds->event_logs[log_type];
+    assert(log->nr_events < CXL_TEST_EVENT_CNT_MAX);
+
+    log->events[log->nr_events] = event;
+    log->nr_events++;
+}
+
+uint16_t log_overflow(struct cxl_event_log *log)
+{
+    int cnt = log_rec_left(log) - 5;
+
+    if (cnt < 0) {
+        return 0;
+    }
+    return cnt;
+}
+
+#define CXL_EVENT_RECORD_FLAG_PERMANENT     BIT(2)
+#define CXL_EVENT_RECORD_FLAG_MAINT_NEEDED  BIT(3)
+#define CXL_EVENT_RECORD_FLAG_PERF_DEGRADED BIT(4)
+#define CXL_EVENT_RECORD_FLAG_HW_REPLACE    BIT(5)
+
+struct cxl_event_record_raw maint_needed = {
+    .hdr = {
+        .id.data = UUID(0xDEADBEEF, 0xCAFE, 0xBABE,
+                        0xa5, 0x5a, 0xa5, 0x5a, 0xa5, 0xa5, 0x5a, 0xa5),
+        .length = sizeof(struct cxl_event_record_raw),
+        .flags[0] = CXL_EVENT_RECORD_FLAG_MAINT_NEEDED,
+        /* .handle = Set dynamically */
+        .related_handle = const_le16(0xa5b6),
+    },
+    .data = { 0xDE, 0xAD, 0xBE, 0xEF },
+};
+
+struct cxl_event_record_raw hardware_replace = {
+    .hdr = {
+        .id.data = UUID(0xBABECAFE, 0xBEEF, 0xDEAD,
+                        0xa5, 0x5a, 0xa5, 0x5a, 0xa5, 0xa5, 0x5a, 0xa5),
+        .length = sizeof(struct cxl_event_record_raw),
+        .flags[0] = CXL_EVENT_RECORD_FLAG_HW_REPLACE,
+        /* .handle = Set dynamically */
+        .related_handle = const_le16(0xb6a5),
+    },
+    .data = { 0xDE, 0xAD, 0xBE, 0xEF },
+};
+
+#define CXL_GMER_EVT_DESC_UNCORECTABLE_EVENT   BIT(0)
+#define CXL_GMER_EVT_DESC_THRESHOLD_EVENT      BIT(1)
+#define CXL_GMER_EVT_DESC_POISON_LIST_OVERFLOW BIT(2)
+
+#define CXL_GMER_MEM_EVT_TYPE_ECC_ERROR       0x00
+#define CXL_GMER_MEM_EVT_TYPE_INV_ADDR        0x01
+#define CXL_GMER_MEM_EVT_TYPE_DATA_PATH_ERROR 0x02
+
+#define CXL_GMER_TRANS_UNKNOWN                   0x00
+#define CXL_GMER_TRANS_HOST_READ                 0x01
+#define CXL_GMER_TRANS_HOST_WRITE                0x02
+#define CXL_GMER_TRANS_HOST_SCAN_MEDIA           0x03
+#define CXL_GMER_TRANS_HOST_INJECT_POISON        0x04
+#define CXL_GMER_TRANS_INTERNAL_MEDIA_SCRUB      0x05
+#define CXL_GMER_TRANS_INTERNAL_MEDIA_MANAGEMENT 0x06
+
+#define CXL_GMER_VALID_CHANNEL   BIT(0)
+#define CXL_GMER_VALID_RANK      BIT(1)
+#define CXL_GMER_VALID_DEVICE    BIT(2)
+#define CXL_GMER_VALID_COMPONENT BIT(3)
+
+struct cxl_event_gen_media gen_media = {
+    .hdr = {
+        .id.data = UUID(0xfbcd0a77, 0xc260, 0x417f,
+                        0x85, 0xa9, 0x08, 0x8b, 0x16, 0x21, 0xeb, 0xa6),
+        .length = sizeof(struct cxl_event_gen_media),
+        .flags[0] = CXL_EVENT_RECORD_FLAG_PERMANENT,
+        /* .handle = Set dynamically */
+        .related_handle = const_le16(0),
+    },
+    .phys_addr = const_le64(0x2000),
+    .descriptor = CXL_GMER_EVT_DESC_UNCORECTABLE_EVENT,
+    .type = CXL_GMER_MEM_EVT_TYPE_DATA_PATH_ERROR,
+    .transaction_type = CXL_GMER_TRANS_HOST_WRITE,
+    .validity_flags = { CXL_GMER_VALID_CHANNEL |
+                        CXL_GMER_VALID_RANK, 0 },
+    .channel = 1,
+    .rank = 30
+};
+
+#define CXL_DER_VALID_CHANNEL         BIT(0)
+#define CXL_DER_VALID_RANK            BIT(1)
+#define CXL_DER_VALID_NIBBLE          BIT(2)
+#define CXL_DER_VALID_BANK_GROUP      BIT(3)
+#define CXL_DER_VALID_BANK            BIT(4)
+#define CXL_DER_VALID_ROW             BIT(5)
+#define CXL_DER_VALID_COLUMN          BIT(6)
+#define CXL_DER_VALID_CORRECTION_MASK BIT(7)
+
+struct cxl_event_dram dram = {
+    .hdr = {
+        .id.data = UUID(0x601dcbb3, 0x9c06, 0x4eab,
+                        0xb8, 0xaf, 0x4e, 0x9b, 0xfb, 0x5c, 0x96, 0x24),
+        .length = sizeof(struct cxl_event_dram),
+        .flags[0] = CXL_EVENT_RECORD_FLAG_PERF_DEGRADED,
+        /* .handle = Set dynamically */
+        .related_handle = const_le16(0),
+    },
+    .phys_addr = const_le64(0x8000),
+    .descriptor = CXL_GMER_EVT_DESC_THRESHOLD_EVENT,
+    .type = CXL_GMER_MEM_EVT_TYPE_INV_ADDR,
+    .transaction_type = CXL_GMER_TRANS_INTERNAL_MEDIA_SCRUB,
+    .validity_flags = { CXL_DER_VALID_CHANNEL |
+                        CXL_DER_VALID_BANK_GROUP |
+                        CXL_DER_VALID_BANK |
+                        CXL_DER_VALID_COLUMN, 0 },
+    .channel = 1,
+    .bank_group = 5,
+    .bank = 2,
+    .column = { 0xDE, 0xAD },
+};
+
+#define CXL_MMER_HEALTH_STATUS_CHANGE 0x00
+#define CXL_MMER_MEDIA_STATUS_CHANGE  0x01
+#define CXL_MMER_LIFE_USED_CHANGE     0x02
+#define CXL_MMER_TEMP_CHANGE          0x03
+#define CXL_MMER_DATA_PATH_ERROR      0x04
+#define CXL_MMER_LAS_ERROR            0x05
+
+#define CXL_DHI_HS_MAINTENANCE_NEEDED    BIT(0)
+#define CXL_DHI_HS_PERFORMANCE_DEGRADED  BIT(1)
+#define CXL_DHI_HS_HW_REPLACEMENT_NEEDED BIT(2)
+
+#define CXL_DHI_MS_NORMAL                                  0x00
+#define CXL_DHI_MS_NOT_READY                               0x01
+#define CXL_DHI_MS_WRITE_PERSISTENCY_LOST                  0x02
+#define CXL_DHI_MS_ALL_DATA_LOST                           0x03
+#define CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_EVENT_POWER_LOSS 0x04
+#define CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_EVENT_SHUTDOWN   0x05
+#define CXL_DHI_MS_WRITE_PERSISTENCY_LOSS_IMMINENT         0x06
+#define CXL_DHI_MS_WRITE_ALL_DATA_LOSS_EVENT_POWER_LOSS    0x07
+#define CXL_DHI_MS_WRITE_ALL_DATA_LOSS_EVENT_SHUTDOWN      0x08
+#define CXL_DHI_MS_WRITE_ALL_DATA_LOSS_IMMINENT            0x09
+
+#define CXL_DHI_AS_NORMAL   0x0
+#define CXL_DHI_AS_WARNING  0x1
+#define CXL_DHI_AS_CRITICAL 0x2
+
+#define CXL_DHI_AS_LIFE_USED(as)       (as & 0x3)
+#define CXL_DHI_AS_DEV_TEMP(as)        ((as & 0xC) >> 2)
+#define CXL_DHI_AS_COR_VOL_ERR_CNT(as) ((as & 0x10) >> 4)
+#define CXL_DHI_AS_COR_PER_ERR_CNT(as) ((as & 0x20) >> 5)
+
+struct cxl_event_mem_module mem_module = {
+    .hdr = {
+        .id.data = UUID(0xfe927475, 0xdd59, 0x4339,
+                        0xa5, 0x86, 0x79, 0xba, 0xb1, 0x13, 0xb7, 0x74),
+        .length = sizeof(struct cxl_event_mem_module),
+        /* .handle = Set dynamically */
+        .related_handle = const_le16(0),
+    },
+    .event_type = CXL_MMER_TEMP_CHANGE,
+    .info = {
+        .health_status = CXL_DHI_HS_PERFORMANCE_DEGRADED,
+        .media_status = CXL_DHI_MS_ALL_DATA_LOST,
+        .add_status = (CXL_DHI_AS_CRITICAL << 2) |
+                      (CXL_DHI_AS_WARNING << 4) |
+                      (CXL_DHI_AS_WARNING << 5),
+        .device_temp = { 0xDE, 0xAD },
+        .dirty_shutdown_cnt = { 0xde, 0xad, 0xbe, 0xef },
+        .cor_vol_err_cnt = { 0xde, 0xad, 0xbe, 0xef },
+        .cor_per_err_cnt = { 0xde, 0xad, 0xbe, 0xef },
+    }
+};
+
+void cxl_mock_add_event_logs(CXLDeviceState *cxlds)
+{
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_INFO, &maint_needed);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_INFO,
+                          (struct cxl_event_record_raw *)&gen_media);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_INFO,
+                          (struct cxl_event_record_raw *)&mem_module);
+
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FAIL, &maint_needed);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FAIL, &hardware_replace);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FAIL,
+                          (struct cxl_event_record_raw *)&dram);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FAIL,
+                          (struct cxl_event_record_raw *)&gen_media);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FAIL,
+                          (struct cxl_event_record_raw *)&mem_module);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FAIL, &hardware_replace);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FAIL,
+                          (struct cxl_event_record_raw *)&dram);
+
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FATAL, &hardware_replace);
+    event_store_add_event(cxlds, CXL_EVENT_TYPE_FATAL,
+                          (struct cxl_event_record_raw *)&dram);
+}

diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build
index cfa95ffd40b7..71059972d435 100644
--- a/hw/cxl/meson.build
+++ b/hw/cxl/meson.build
@@ -5,6 +5,7 @@ softmmu_ss.add(when: 'CONFIG_CXL',
     'cxl-mailbox-utils.c',
     'cxl-host.c',
     'cxl-cdat.c',
+    'cxl-events.c',
   ),
   if_false: files(
     'cxl-host-stubs.c',

diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 7b4cff569347..46c50c1c13a6 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -11,6 +11,7 @@
 #define CXL_DEVICE_H

 #include "hw/register.h"
+#include "hw/cxl/cxl_events.h"

 /*
  * The following is how a CXL device's Memory Device registers are laid out.
@@ -80,6 +81,14 @@
     (CXL_DEVICE_CAP_REG_SIZE + CXL_DEVICE_STATUS_REGISTERS_LENGTH + \
      CXL_MAILBOX_REGISTERS_LENGTH + CXL_MEMORY_DEVICE_REGISTERS_LENGTH)

+#define CXL_TEST_EVENT_CNT_MAX 15
+
+struct cxl_event_log {
+    int cur_event;
+    int nr_events;
+    struct cxl_event_record_raw *events[CXL_TEST_EVENT_CNT_MAX];
+};
+
 typedef struct cxl_device_state {
     MemoryRegion device_registers;

@@ -119,6 +128,8 @@ typedef struct cxl_device_state {

     /* memory region for persistent memory, HDM */
     uint64_t pmem_size;
+
+    struct cxl_event_log event_logs[CXL_EVENT_TYPE_MAX];
 } CXLDeviceState;

 /* Initialize the register block for a device */
@@ -272,4 +283,12 @@ MemTxResult cxl_type3_read(PCIDevice *d, hwaddr host_addr, uint64_t *data,
 MemTxResult cxl_type3_write(PCIDevice *d, hwaddr host_addr, uint64_t data,
                             unsigned size, MemTxAttrs attrs);

+void cxl_mock_add_event_logs(CXLDeviceState *cxlds);
+struct cxl_event_log *find_event_log(CXLDeviceState *cxlds, int log_type);
+struct cxl_event_record_raw *get_cur_event(struct cxl_event_log *log);
+uint16_t get_cur_event_handle(struct cxl_event_log *log);
+bool log_empty(struct cxl_event_log *log);
+int log_rec_left(struct cxl_event_log *log);
+uint16_t log_overflow(struct cxl_event_log *log);
+
 #endif

diff --git a/include/hw/cxl/cxl_events.h b/include/hw/cxl/cxl_events.h
new file mode 100644
index 000000000000..255111f3dcfb
--- /dev/null
+++ b/include/hw/cxl/cxl_events.h
@@ -0,0 +1,173 @@
+/*
+ * QEMU CXL Events
+ *
+ * Copyright (c) 2022 Intel
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See the
+ * COPYING file in the top-level directory.
+ */
+
+#ifndef CXL_EVENTS_H
+#define CXL_EVENTS_H
+
+#include "qemu/uuid.h"
+#include "hw/cxl/cxl.h"
+
+/*
+ * Common Event Record Format
+ * CXL rev 3.0 section 8.2.9.2.1; Table 8-42
+ */
+#define CXL_EVENT_REC_HDR_RES_LEN 0xf
+struct cxl_event_record_hdr {
+    QemuUUID id;
+    uint8_t length;
+    uint8_t flags[3];
+    uint16_t handle;
+    uint16_t related_handle;
+    uint64_t timestamp;
+    uint8_t maint_op_class;
+    uint8_t reserved[CXL_EVENT_REC_HDR_RES_LEN];
+} QEMU_PACKED;
+
+#define CXL_EVENT_RECORD_DATA_LENGTH 0x50
+struct cxl_event_record_raw {
+    struct cxl_event_record_hdr hdr;
+    uint8_t data[CXL_EVENT_RECORD_DATA_LENGTH];
+} QEMU_PACKED;
+
+/*
+ * Get Event Records output payload
+ * CXL rev 3.0 section 8.2.9.2.2; Table 8-50
+ *
+ * Space given for 1 record
+ */
+#define CXL_GET_EVENT_FLAG_OVERFLOW     BIT(0)
+#define CXL_GET_EVENT_FLAG_MORE_RECORDS BIT(1)
+struct cxl_get_event_payload {
+    uint8_t flags;
+    uint8_t reserved1;
+    uint16_t overflow_err_count;
+    uint64_t first_overflow_timestamp;
+    uint64_t last_overflow_timestamp;
+    uint16_t record_count;
+    uint8_t reserved2[0xa];
+    struct cxl_event_record_raw record;
+} QEMU_PACKED;
+
+/*
+ * CXL rev 3.0 section 8.2.9.2.2; Table 8-49
+ */
+enum cxl_event_log_type {
+    CXL_EVENT_TYPE_INFO = 0x00,
+    CXL_EVENT_TYPE_WARN,
+    CXL_EVENT_TYPE_FAIL,
+    CXL_EVENT_TYPE_FATAL,
+    CXL_EVENT_TYPE_DYNAMIC_CAP,
+    CXL_EVENT_TYPE_MAX
+};
+
+static inline const char *cxl_event_log_type_str(enum cxl_event_log_type type)
+{
+    switch (type) {
+    case CXL_EVENT_TYPE_INFO:
+        return "Informational";
+    case CXL_EVENT_TYPE_WARN:
+        return "Warning";
+    case CXL_EVENT_TYPE_FAIL:
+        return "Failure";
+    case CXL_EVENT_TYPE_FATAL:
+        return "Fatal";
+    case CXL_EVENT_TYPE_DYNAMIC_CAP:
+        return "Dynamic Capacity";
+    default:
+        break;
+    }
+    return "";
+}
+
+/*
+ * Clear Event Records input payload
+ * CXL rev 3.0 section 8.2.9.2.3; Table 8-51
+ *
+ * Space given for 1 record
+ */
+struct cxl_mbox_clear_event_payload {
+    uint8_t event_log;      /* enum cxl_event_log_type */
+    uint8_t clear_flags;
+    uint8_t nr_recs;        /* 1 for this struct */
+    uint8_t reserved[3];
+    uint16_t handle;
+};
+
+/*
+ * General Media Event Record
+ * CXL rev 3.0 Section 8.2.9.2.1.1; Table 8-43
+ */
+#define CXL_EVENT_GEN_MED_COMP_ID_SIZE 0x10
+#define CXL_EVENT_GEN_MED_RES_SIZE     0x2e
+struct cxl_event_gen_media {
+    struct cxl_event_record_hdr hdr;
+    uint64_t phys_addr;
+    uint8_t descriptor;
+    uint8_t type;
+    uint8_t transaction_type;
+    uint8_t validity_flags[2];
+    uint8_t channel;
+    uint8_t rank;
+    uint8_t device[3];
+    uint8_t component_id[CXL_EVENT_GEN_MED_COMP_ID_SIZE];
+    uint8_t reserved[CXL_EVENT_GEN_MED_RES_SIZE];
+} QEMU_PACKED;
+
+/*
+ * DRAM Event Record - DER
+ * CXL rev 3.0 section 8.2.9.2.1.2; Table 8-44
+ */
+#define CXL_EVENT_DER_CORRECTION_MASK_SIZE 0x20
+#define CXL_EVENT_DER_RES_SIZE             0x17
+struct cxl_event_dram {
+    struct cxl_event_record_hdr hdr;
+    uint64_t phys_addr;
+    uint8_t descriptor;
+    uint8_t type;
+    uint8_t transaction_type;
+    uint8_t validity_flags[2];
+    uint8_t channel;
+    uint8_t rank;
+    uint8_t nibble_mask[3];
+    uint8_t bank_group;
+    uint8_t bank;
+    uint8_t row[3];
+    uint8_t column[2];
+    uint8_t correction_mask[CXL_EVENT_DER_CORRECTION_MASK_SIZE];
+    uint8_t reserved[CXL_EVENT_DER_RES_SIZE];
+} QEMU_PACKED;
+
+/*
+ * Get Health Info Record
+ * CXL rev 3.0 section 8.2.9.8.3.1; Table 8-100
+ */
+struct cxl_get_health_info {
+    uint8_t health_status;
+    uint8_t media_status;
+    uint8_t add_status;
+    uint8_t life_used;
+    uint8_t device_temp[2];
+    uint8_t dirty_shutdown_cnt[4];
+    uint8_t cor_vol_err_cnt[4];
+    uint8_t cor_per_err_cnt[4];
+} QEMU_PACKED;
+
+/*
+ * Memory Module Event Record
+ * CXL rev 3.0 section 8.2.9.2.1.3; Table 8-45
+ */
+#define CXL_EVENT_MEM_MOD_RES_SIZE 0x3d
+struct cxl_event_mem_module {
+    struct cxl_event_record_hdr hdr;
+    uint8_t event_type;
+    struct cxl_get_health_info info;
+    uint8_t reserved[CXL_EVENT_MEM_MOD_RES_SIZE];
+} QEMU_PACKED;
+
+#endif /* CXL_EVENTS_H */

From patchwork Mon Oct 10 22:29:42 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13003292
From: ira.weiny@intel.com
To: Michael Tsirkin, Ben Widawsky, Jonathan Cameron
Cc: Ira Weiny, qemu-devel@nongnu.org, linux-cxl@vger.kernel.org
Subject: [RFC PATCH 4/6] hw/cxl/mailbox: Wire up get/clear event mailbox commands
Date: Mon, 10 Oct 2022 15:29:42 -0700
Message-Id: <20221010222944.3923556-5-ira.weiny@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20221010222944.3923556-1-ira.weiny@intel.com>
References: <20221010222944.3923556-1-ira.weiny@intel.com>
X-Mailing-List: linux-cxl@vger.kernel.org

From: Ira Weiny

Replace the stubbed-out CXL Get/Clear Event mailbox commands with
commands that return the mock event information.

Signed-off-by: Ira Weiny
---
 hw/cxl/cxl-device-utils.c  |   1 +
 hw/cxl/cxl-mailbox-utils.c | 103 +++++++++++++++++++++++++++++++++++--
 2 files changed, 101 insertions(+), 3 deletions(-)

diff --git a/hw/cxl/cxl-device-utils.c b/hw/cxl/cxl-device-utils.c
index 687759b3017b..4bb41101882e 100644
--- a/hw/cxl/cxl-device-utils.c
+++ b/hw/cxl/cxl-device-utils.c
@@ -262,4 +262,5 @@ void cxl_device_register_init_common(CXLDeviceState *cxl_dstate)
     memdev_reg_init_common(cxl_dstate);

     assert(cxl_initialize_mailbox(cxl_dstate) == 0);
+    cxl_mock_add_event_logs(cxl_dstate);
 }

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index bb66c765a538..df345f23a30c 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -9,6 +9,7 @@
 #include "qemu/osdep.h"
 #include "hw/cxl/cxl.h"
+#include "hw/cxl/cxl_events.h"
 #include "hw/pci/pci.h"
 #include "qemu/cutils.h"
 #include "qemu/log.h"
@@ -116,11 +117,107 @@ struct cxl_cmd {
         return CXL_MBOX_SUCCESS;                                          \
     }

-DEFINE_MAILBOX_HANDLER_ZEROED(events_get_records, 0x20);
-DEFINE_MAILBOX_HANDLER_NOP(events_clear_records);
 DEFINE_MAILBOX_HANDLER_ZEROED(events_get_interrupt_policy, 4);
 DEFINE_MAILBOX_HANDLER_NOP(events_set_interrupt_policy);

+static ret_code cmd_events_get_records(struct cxl_cmd *cmd,
+                                       CXLDeviceState *cxlds,
+                                       uint16_t *len)
+{
+    struct cxl_get_event_payload *pl;
+    struct cxl_event_log *log;
+    uint8_t log_type;
+    uint16_t nr_overflow;
+
+    if (cmd->in < sizeof(log_type)) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    log_type = *((uint8_t *)cmd->payload);
+    if (log_type >= CXL_EVENT_TYPE_MAX) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    pl = (struct cxl_get_event_payload *)cmd->payload;
+
+    log = find_event_log(cxlds, log_type);
+    if (!log || log_empty(log)) {
+        goto no_data;
+    }
+
+    memset(pl, 0, sizeof(*pl));
+    pl->record_count = const_le16(1);
+
+    if (log_rec_left(log) > 1) {
+        pl->flags |= CXL_GET_EVENT_FLAG_MORE_RECORDS;
+    }
+
+    nr_overflow = log_overflow(log);
+    if (nr_overflow) {
+        struct timespec ts;
+        uint64_t ns;
+
+        clock_gettime(CLOCK_REALTIME, &ts);
+
+        ns = ((uint64_t)ts.tv_sec * 1000000000) + (uint64_t)ts.tv_nsec;
+
+        pl->flags |= CXL_GET_EVENT_FLAG_OVERFLOW;
+        pl->overflow_err_count = cpu_to_le16(nr_overflow);
+        ns -= 5000000000; /* 5s ago */
+        pl->first_overflow_timestamp = cpu_to_le64(ns);
+        ns -= 1000000000; /* 1s ago */
+        pl->last_overflow_timestamp = cpu_to_le64(ns);
+    }
+
+    memcpy(&pl->record, get_cur_event(log), sizeof(pl->record));
+    pl->record.hdr.handle = get_cur_event_handle(log);
+    *len = sizeof(pl->record);
+    return CXL_MBOX_SUCCESS;
+
+no_data:
+    *len = sizeof(*pl) - sizeof(pl->record);
+    memset(pl, 0, *len);
+    return CXL_MBOX_SUCCESS;
+}
+
+static ret_code cmd_events_clear_records(struct cxl_cmd *cmd,
+                                         CXLDeviceState *cxlds,
+                                         uint16_t *len)
+{
+    struct cxl_mbox_clear_event_payload *pl;
+    struct cxl_event_log *log;
+    uint8_t log_type;
+
+    pl = (struct cxl_mbox_clear_event_payload *)cmd->payload;
+    log_type = pl->event_log;
+
+    /* Don't handle more than 1 record at a time */
+    if (pl->nr_recs != 1) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    if (log_type >= CXL_EVENT_TYPE_MAX) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    log = find_event_log(cxlds, log_type);
+    if (!log) {
+        return CXL_MBOX_SUCCESS;
+    }
+
+    /*
+     * The current code clears events as they are read. Test that behavior
+     * only; don't support clearing from the middle of the log
+     */
+    if (log->cur_event != le16_to_cpu(pl->handle)) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    log->cur_event++;
+    *len = 0;
+    return CXL_MBOX_SUCCESS;
+}
+
 /* 8.2.9.2.1 */
 static ret_code cmd_firmware_update_get_info(struct cxl_cmd *cmd,
                                              CXLDeviceState *cxl_dstate,
@@ -391,7 +488,7 @@ static struct cxl_cmd cxl_cmd_set[256][256] = {
     [EVENTS][GET_RECORDS] = { "EVENTS_GET_RECORDS",
         cmd_events_get_records, 1, 0 },
     [EVENTS][CLEAR_RECORDS] = { "EVENTS_CLEAR_RECORDS",
-        cmd_events_clear_records, ~0, IMMEDIATE_LOG_CHANGE },
+        cmd_events_clear_records, 8, IMMEDIATE_LOG_CHANGE },
     [EVENTS][GET_INTERRUPT_POLICY] = { "EVENTS_GET_INTERRUPT_POLICY",
         cmd_events_get_interrupt_policy, 0, 0 },
     [EVENTS][SET_INTERRUPT_POLICY] = { "EVENTS_SET_INTERRUPT_POLICY",

From patchwork Mon Oct 10 22:29:43 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13003295
BBE765AC68 for ; Mon, 10 Oct 2022 15:30:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1665441022; x=1696977022; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=oFuJH3V9v+i1P+Dh0Wc+EVn5PyOw5woJfSZ6SrKW7v0=; b=PI1luRx4PZ2bE2iVNd9fdhS5BhgBbiPOKU505AQg0Pf2apRKOu15ImLH DGayV3ZWKOUqG3b0xShwmS3W2TcEL5gb1ZF8nYN9u/Ujg0PD1Q4p/YOTb 15gvD0C9aRmY4AGDc1IbJY6snlIilcWXT+AWXtsRhV0hbBYtSuTkqh6Sq tcrKCRRBqB8PAEOlZdD4VEz2uFSgVobcu2RF49nKqxIK0lyNUz2C/n9G9 la9GoPB+3gecPKi3hDWqtUCWFu+rxa/A4oTJn3jZXBcrFFDMnwXyUXuHY vkCfqlev+ejANNAOsVQWqgje8r5pPEaGb5WlhllMueX49s6XTc/3inYQ/ A==; X-IronPort-AV: E=McAfee;i="6500,9779,10496"; a="291661251" X-IronPort-AV: E=Sophos;i="5.95,173,1661842800"; d="scan'208";a="291661251" Received: from fmsmga007.fm.intel.com ([10.253.24.52]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Oct 2022 15:30:21 -0700 X-IronPort-AV: E=McAfee;i="6500,9779,10496"; a="628457013" X-IronPort-AV: E=Sophos;i="5.95,173,1661842800"; d="scan'208";a="628457013" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.104.4]) by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 10 Oct 2022 15:30:21 -0700 From: ira.weiny@intel.com To: Michael Tsirkin , Ben Widawsky , Jonathan Cameron Cc: Ira Weiny , qemu-devel@nongnu.org, linux-cxl@vger.kernel.org Subject: [RFC PATCH 5/6] hw/cxl/cxl-events: Add event interrupt support Date: Mon, 10 Oct 2022 15:29:43 -0700 Message-Id: <20221010222944.3923556-6-ira.weiny@intel.com> X-Mailer: git-send-email 2.37.2 In-Reply-To: <20221010222944.3923556-1-ira.weiny@intel.com> References: <20221010222944.3923556-1-ira.weiny@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Ira Weiny To facilitate testing of event interrupt support add a QMP HMP command to reset the event logs and issue interrupts when the guest has 
enabled those interrupts. Signed-off-by: Ira Weiny --- hmp-commands.hx | 14 +++++++ hw/cxl/cxl-events.c | 82 +++++++++++++++++++++++++++++++++++++ hw/cxl/cxl-host-stubs.c | 5 +++ hw/mem/cxl_type3.c | 7 +++- include/hw/cxl/cxl_device.h | 3 ++ include/sysemu/sysemu.h | 3 ++ 6 files changed, 113 insertions(+), 1 deletion(-) diff --git a/hmp-commands.hx b/hmp-commands.hx index 564f1de364df..c59a98097317 100644 --- a/hmp-commands.hx +++ b/hmp-commands.hx @@ -1266,6 +1266,20 @@ SRST Inject PCIe AER error ERST + { + .name = "cxl_event_inject", + .args_type = "id:s", + .params = "id ", + .help = "inject cxl events and interrupt\n\t\t\t" + " = qdev device id\n\t\t\t", + .cmd = hmp_cxl_event_inject, + }, + +SRST +``cxl_event_inject`` + Inject CXL Events +ERST + { .name = "netdev_add", .args_type = "netdev:O", diff --git a/hw/cxl/cxl-events.c b/hw/cxl/cxl-events.c index c275280bcb64..6ece6f252462 100644 --- a/hw/cxl/cxl-events.c +++ b/hw/cxl/cxl-events.c @@ -10,8 +10,14 @@ #include #include "qemu/osdep.h" +#include "sysemu/sysemu.h" +#include "monitor/monitor.h" #include "qemu/bswap.h" #include "qemu/typedefs.h" +#include "qapi/qmp/qdict.h" +#include "hw/pci/pci.h" +#include "hw/pci/msi.h" +#include "hw/pci/msix.h" #include "hw/cxl/cxl.h" #include "hw/cxl/cxl_events.h" @@ -68,6 +74,11 @@ uint16_t log_overflow(struct cxl_event_log *log) return cnt; } +static void reset_log(struct cxl_event_log *log) +{ + log->cur_event = 0; +} + #define CXL_EVENT_RECORD_FLAG_PERMANENT BIT(2) #define CXL_EVENT_RECORD_FLAG_MAINT_NEEDED BIT(3) #define CXL_EVENT_RECORD_FLAG_PERF_DEGRADED BIT(4) @@ -246,3 +257,74 @@ void cxl_mock_add_event_logs(CXLDeviceState *cxlds) event_store_add_event(cxlds, CXL_EVENT_TYPE_FATAL, (struct cxl_event_record_raw *)&dram); } + +static void cxl_reset_all_logs(CXLDeviceState *cxlds) +{ + int i; + + for (i = 0; i < CXL_EVENT_TYPE_MAX; i++) { + struct cxl_event_log *log = find_event_log(cxlds, i); + + if (!log) { + continue; + } + + reset_log(log); + } +} + +static void 
cxl_event_irq_assert(PCIDevice *pdev) +{ + CXLType3Dev *ct3d = container_of(pdev, struct CXLType3Dev, parent_obj); + CXLDeviceState *cxlds = &ct3d->cxl_dstate; + int i; + + for (i = 0; i < CXL_EVENT_TYPE_MAX; i++) { + struct cxl_event_log *log; + + log = find_event_log(cxlds, i); + if (!log || !log->irq_enabled || log_empty(log)) { + continue; + } + + /* Notify the interrupt; legacy IRQ is not supported */ + if (msix_enabled(pdev)) { + msix_notify(pdev, log->irq_vec); + } else if (msi_enabled(pdev)) { + msi_notify(pdev, log->irq_vec); + } + } +} + +static int do_cxl_event_inject(Monitor *mon, const QDict *qdict) +{ + const char *id = qdict_get_str(qdict, "id"); + CXLType3Dev *ct3d; + PCIDevice *pdev; + int ret; + + ret = pci_qdev_find_device(id, &pdev); + if (ret < 0) { + monitor_printf(mon, + "id or CXL device path is invalid or device not " + "found: %s\n", id); + return ret; + } + + ct3d = container_of(pdev, struct CXLType3Dev, parent_obj); + cxl_reset_all_logs(&ct3d->cxl_dstate); + + cxl_event_irq_assert(pdev); + return 0; +} + +void hmp_cxl_event_inject(Monitor *mon, const QDict *qdict) +{ + const char *id = qdict_get_str(qdict, "id"); + + if (do_cxl_event_inject(mon, qdict) < 0) { + return; + } + + monitor_printf(mon, "OK id: %s\n", id); +} diff --git a/hw/cxl/cxl-host-stubs.c b/hw/cxl/cxl-host-stubs.c index cae4afcdde26..61039263f25a 100644 --- a/hw/cxl/cxl-host-stubs.c +++ b/hw/cxl/cxl-host-stubs.c @@ -12,4 +12,9 @@ void cxl_fmws_link_targets(CXLState *stat, Error **errp) {}; void cxl_machine_init(Object *obj, CXLState *state) {}; void cxl_hook_up_pxb_registers(PCIBus *bus, CXLState *state, Error **errp) {}; +void hmp_cxl_event_inject(Monitor *mon, const QDict *qdict) +{ + monitor_printf(mon, "CXL devices not supported\n"); +} + const MemoryRegionOps cfmws_ops; diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c index 2b13179d116d..b4a90136d190 100644 --- a/hw/mem/cxl_type3.c +++ b/hw/mem/cxl_type3.c @@ -459,7 +459,7 @@ static void ct3_realize(PCIDevice
*pci_dev, Error **errp) ComponentRegisters *regs = &cxl_cstate->crb; MemoryRegion *mr = &regs->component_registers; uint8_t *pci_conf = pci_dev->config; - unsigned short msix_num = 3; + unsigned short msix_num = 7; int i; if (!cxl_setup_memory(ct3d, errp)) { @@ -502,6 +502,11 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp) msix_vector_use(pci_dev, i); } + ct3d->cxl_dstate.event_vector[CXL_EVENT_TYPE_INFO] = 6; + ct3d->cxl_dstate.event_vector[CXL_EVENT_TYPE_WARN] = 5; + ct3d->cxl_dstate.event_vector[CXL_EVENT_TYPE_FAIL] = 4; + ct3d->cxl_dstate.event_vector[CXL_EVENT_TYPE_FATAL] = 3; + /* DOE Initialization */ if (ct3d->spdm_port) { pcie_doe_init(pci_dev, &ct3d->doe_spdm, 0x160, doe_spdm_prot, true, 2); diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h index 46c50c1c13a6..41232d3b3476 100644 --- a/include/hw/cxl/cxl_device.h +++ b/include/hw/cxl/cxl_device.h @@ -84,6 +84,8 @@ #define CXL_TEST_EVENT_CNT_MAX 15 struct cxl_event_log { + bool irq_enabled; + int irq_vec; int cur_event; int nr_events; struct cxl_event_record_raw *events[CXL_TEST_EVENT_CNT_MAX]; @@ -129,6 +131,7 @@ typedef struct cxl_device_state { /* memory region for persistent memory, HDM */ uint64_t pmem_size; + uint16_t event_vector[CXL_EVENT_TYPE_MAX]; struct cxl_event_log event_logs[CXL_EVENT_TYPE_MAX]; } CXLDeviceState; diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h index 812f66a31a90..39476cc50190 100644 --- a/include/sysemu/sysemu.h +++ b/include/sysemu/sysemu.h @@ -64,6 +64,9 @@ extern unsigned int nb_prom_envs; /* pcie aer error injection */ void hmp_pcie_aer_inject_error(Monitor *mon, const QDict *qdict); +/* CXL */ +void hmp_cxl_event_inject(Monitor *mon, const QDict *qdict); + /* serial ports */ /* Return the Chardev for serial port i, or NULL if none */ From patchwork Mon Oct 10 22:29:44 2022 X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13003294
From: ira.weiny@intel.com To: Michael Tsirkin , Ben Widawsky , Jonathan Cameron Cc: Ira Weiny , qemu-devel@nongnu.org, linux-cxl@vger.kernel.org Subject: [RFC PATCH 6/6] hw/cxl/mailbox: Wire up Get/Set Event Interrupt policy Date: Mon, 10 Oct 2022 15:29:44 -0700 Message-Id: <20221010222944.3923556-7-ira.weiny@intel.com> In-Reply-To: <20221010222944.3923556-1-ira.weiny@intel.com> References: <20221010222944.3923556-1-ira.weiny@intel.com> List-ID: X-Mailing-List: linux-cxl@vger.kernel.org From: Ira Weiny Replace the stubbed-out CXL Get/Set Event Interrupt Policy mailbox commands. Enable those commands to control interrupts for each of the event log types. Signed-off-by: Ira Weiny --- hw/cxl/cxl-mailbox-utils.c | 129 ++++++++++++++++++++++++++++++------ include/hw/cxl/cxl_events.h | 21 ++++++ 2 files changed, 129 insertions(+), 21 deletions(-) diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c index df345f23a30c..52e8804c24ed 100644 --- a/hw/cxl/cxl-mailbox-utils.c +++ b/hw/cxl/cxl-mailbox-utils.c @@ -101,25 +101,6 @@ struct cxl_cmd { uint8_t *payload; }; -#define DEFINE_MAILBOX_HANDLER_ZEROED(name, size) \ - uint16_t __zero##name = size; \ - static ret_code cmd_##name(struct cxl_cmd *cmd, \ - CXLDeviceState *cxl_dstate, uint16_t *len) \ - { \ - *len = __zero##name; \ - memset(cmd->payload, 0, *len); \ - return CXL_MBOX_SUCCESS; \ - } -#define DEFINE_MAILBOX_HANDLER_NOP(name) \ - static ret_code cmd_##name(struct cxl_cmd *cmd, \ - CXLDeviceState *cxl_dstate, uint16_t *len) \ - { \ - return CXL_MBOX_SUCCESS; \ - } - -DEFINE_MAILBOX_HANDLER_ZEROED(events_get_interrupt_policy, 4); -DEFINE_MAILBOX_HANDLER_NOP(events_set_interrupt_policy); - static ret_code cmd_events_get_records(struct cxl_cmd *cmd, CXLDeviceState *cxlds, uint16_t *len) @@ -218,6 +199,110 @@ static ret_code cmd_events_clear_records(struct cxl_cmd *cmd,
return CXL_MBOX_SUCCESS; } +static ret_code cmd_events_get_interrupt_policy(struct cxl_cmd *cmd, + CXLDeviceState *cxl_dstate, + uint16_t *len) +{ + struct cxl_event_interrupt_policy *policy; + struct cxl_event_log *log; + + policy = (struct cxl_event_interrupt_policy *)cmd->payload; + memset(policy, 0, sizeof(*policy)); + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_INFO); + if (log->irq_enabled) { + policy->info_settings = CXL_EVENT_INT_SETTING(log->irq_vec); + } + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_WARN); + if (log->irq_enabled) { + policy->warn_settings = CXL_EVENT_INT_SETTING(log->irq_vec); + } + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_FAIL); + if (log->irq_enabled) { + policy->failure_settings = CXL_EVENT_INT_SETTING(log->irq_vec); + } + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_FATAL); + if (log->irq_enabled) { + policy->fatal_settings = CXL_EVENT_INT_SETTING(log->irq_vec); + } + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_DYNAMIC_CAP); + if (log->irq_enabled) { + /* Dynamic Capacity borrows the same vector as info */ + policy->dyn_cap_settings = CXL_INT_MSI_MSIX; + } + + *len = sizeof(*policy); + return CXL_MBOX_SUCCESS; +} + +static ret_code cmd_events_set_interrupt_policy(struct cxl_cmd *cmd, + CXLDeviceState *cxl_dstate, + uint16_t *len) +{ + struct cxl_event_interrupt_policy *policy; + struct cxl_event_log *log; + + policy = (struct cxl_event_interrupt_policy *)cmd->payload; + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_INFO); + if ((policy->info_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX) { + log->irq_enabled = true; + log->irq_vec = cxl_dstate->event_vector[CXL_EVENT_TYPE_INFO]; + } else { + log->irq_enabled = false; + log->irq_vec = 0; + } + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_WARN); + if ((policy->warn_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX) { + log->irq_enabled = true; + log->irq_vec = cxl_dstate->event_vector[CXL_EVENT_TYPE_WARN]; + } else { + 
log->irq_enabled = false; + log->irq_vec = 0; + } + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_FAIL); + if ((policy->failure_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX) { + log->irq_enabled = true; + log->irq_vec = cxl_dstate->event_vector[CXL_EVENT_TYPE_FAIL]; + } else { + log->irq_enabled = false; + log->irq_vec = 0; + } + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_FATAL); + if ((policy->fatal_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX) { + log->irq_enabled = true; + log->irq_vec = cxl_dstate->event_vector[CXL_EVENT_TYPE_FATAL]; + } else { + log->irq_enabled = false; + log->irq_vec = 0; + } + + log = find_event_log(cxl_dstate, CXL_EVENT_TYPE_DYNAMIC_CAP); + if ((policy->dyn_cap_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX) { + log->irq_enabled = true; + /* Dynamic Capacity borrows the same vector as info */ + log->irq_vec = cxl_dstate->event_vector[CXL_EVENT_TYPE_INFO]; + } else { + log->irq_enabled = false; + log->irq_vec = 0; + } + + *len = sizeof(*policy); + return CXL_MBOX_SUCCESS; +} + /* 8.2.9.2.1 */ static ret_code cmd_firmware_update_get_info(struct cxl_cmd *cmd, CXLDeviceState *cxl_dstate, @@ -490,9 +575,11 @@ static struct cxl_cmd cxl_cmd_set[256][256] = { [EVENTS][CLEAR_RECORDS] = { "EVENTS_CLEAR_RECORDS", cmd_events_clear_records, 8, IMMEDIATE_LOG_CHANGE }, [EVENTS][GET_INTERRUPT_POLICY] = { "EVENTS_GET_INTERRUPT_POLICY", - cmd_events_get_interrupt_policy, 0, 0 }, + cmd_events_get_interrupt_policy, 0, 0 }, [EVENTS][SET_INTERRUPT_POLICY] = { "EVENTS_SET_INTERRUPT_POLICY", - cmd_events_set_interrupt_policy, 4, IMMEDIATE_CONFIG_CHANGE }, + cmd_events_set_interrupt_policy, + sizeof(struct cxl_event_interrupt_policy), + IMMEDIATE_CONFIG_CHANGE }, [FIRMWARE_UPDATE][GET_INFO] = { "FIRMWARE_UPDATE_GET_INFO", cmd_firmware_update_get_info, 0, 0 }, [TIMESTAMP][GET] = { "TIMESTAMP_GET", cmd_timestamp_get, 0, 0 }, diff --git a/include/hw/cxl/cxl_events.h b/include/hw/cxl/cxl_events.h index 
255111f3dcfb..c121e504a6db 100644 --- a/include/hw/cxl/cxl_events.h +++ b/include/hw/cxl/cxl_events.h @@ -170,4 +170,25 @@ struct cxl_event_mem_module { uint8_t reserved[CXL_EVENT_MEM_MOD_RES_SIZE]; } QEMU_PACKED; +/** + * Event Interrupt Policy + * + * CXL rev 3.0 section 8.2.9.2.4; Table 8-52 + */ +enum cxl_event_int_mode { + CXL_INT_NONE = 0x00, + CXL_INT_MSI_MSIX = 0x01, + CXL_INT_FW = 0x02, + CXL_INT_RES = 0x03, +}; +#define CXL_EVENT_INT_MODE_MASK 0x3 +#define CXL_EVENT_INT_SETTING(vector) ((((uint8_t)vector & 0xf) << 4) | CXL_INT_MSI_MSIX) +struct cxl_event_interrupt_policy { + uint8_t info_settings; + uint8_t warn_settings; + uint8_t failure_settings; + uint8_t fatal_settings; + uint8_t dyn_cap_settings; +} QEMU_PACKED; + #endif /* CXL_EVENTS_H */