From patchwork Tue Nov 14 12:47:09 2023
X-Patchwork-Id: 13455251
From: Shiju Jose <shiju.jose@huawei.com>
Subject: [RFC PATCH 1/3] hw/cxl/cxl-mailbox-utils: Add support for feature commands (8.2.9.6)
Date: Tue, 14 Nov 2023 20:47:09 +0800
Message-ID: <20231114124711.1128-2-shiju.jose@huawei.com>
In-Reply-To: <20231114124711.1128-1-shiju.jose@huawei.com>
References: <20231114124711.1128-1-shiju.jose@huawei.com>

CXL spec 3.0 section 8.2.9.6 describes optional device-specific features.
CXL devices support features with changeable attributes. Get Supported
Features retrieves the list of supported device-specific features. The
settings of a feature can be retrieved using Get Feature and optionally
modified using Set Feature.
Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Reviewed-by: Davidlohr Bueso
---
 hw/cxl/cxl-mailbox-utils.c | 140 +++++++++++++++++++++++++++++++++++++
 1 file changed, 140 insertions(+)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 6184f44339..93960afd44 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -66,6 +66,10 @@ enum {
     LOGS        = 0x04,
         #define GET_SUPPORTED 0x0
         #define GET_LOG       0x1
+    FEATURES    = 0x05,
+        #define GET_SUPPORTED 0x0
+        #define GET_FEATURE   0x1
+        #define SET_FEATURE   0x2
     IDENTIFY    = 0x40,
         #define MEMORY_DEVICE 0x0
     CCLS        = 0x41,
@@ -785,6 +789,135 @@ static CXLRetCode cmd_logs_get_log(const struct cxl_cmd *cmd,
     return CXL_MBOX_SUCCESS;
 }
 
+/* CXL r3.0 section 8.2.9.6: Features */
+typedef struct CXLSupportedFeatureHeader {
+    uint16_t entries;
+    uint16_t nsuppfeats_dev;
+    uint32_t reserved;
+} QEMU_PACKED CXLSupportedFeatureHeader;
+
+typedef struct CXLSupportedFeatureEntry {
+    QemuUUID uuid;
+    uint16_t feat_index;
+    uint16_t get_feat_size;
+    uint16_t set_feat_size;
+    uint32_t attrb_flags;
+    uint8_t get_feat_version;
+    uint8_t set_feat_version;
+    uint16_t set_feat_effects;
+    uint8_t rsvd[18];
+} QEMU_PACKED CXLSupportedFeatureEntry;
+
+enum CXL_SUPPORTED_FEATURES_LIST {
+    CXL_FEATURE_MAX
+};
+
+typedef struct CXLSetFeatureInHeader {
+    QemuUUID uuid;
+    uint32_t flags;
+    uint16_t offset;
+    uint8_t version;
+    uint8_t rsvd[9];
+} QEMU_PACKED QEMU_ALIGNED(16) CXLSetFeatureInHeader;
+
+#define CXL_SET_FEATURE_FLAG_DATA_TRANSFER_MASK        0x7
+#define CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER        0
+#define CXL_SET_FEATURE_FLAG_INITIATE_DATA_TRANSFER    1
+#define CXL_SET_FEATURE_FLAG_CONTINUE_DATA_TRANSFER    2
+#define CXL_SET_FEATURE_FLAG_FINISH_DATA_TRANSFER      3
+#define CXL_SET_FEATURE_FLAG_ABORT_DATA_TRANSFER       4
+
+/* CXL r3.0 section 8.2.9.6.1: Get Supported Features (Opcode 0500h) */
+static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
+                                             uint8_t *payload_in,
+                                             size_t len_in,
+                                             uint8_t *payload_out,
+                                             size_t *len_out,
+                                             CXLCCI *cci)
+{
+    struct {
+        uint32_t count;
+        uint16_t start_index;
+        uint16_t reserved;
+    } QEMU_PACKED QEMU_ALIGNED(16) * get_feats_in = (void *)payload_in;
+
+    struct {
+        CXLSupportedFeatureHeader hdr;
+        CXLSupportedFeatureEntry feat_entries[];
+    } QEMU_PACKED QEMU_ALIGNED(16) * supported_feats = (void *)payload_out;
+    uint16_t index;
+    uint16_t entry, req_entries;
+    uint16_t feat_entries = 0;
+
+    if (get_feats_in->count < sizeof(CXLSupportedFeatureHeader) ||
+        get_feats_in->start_index > CXL_FEATURE_MAX) {
+        return CXL_MBOX_INVALID_INPUT;
+    } else {
+        req_entries = (get_feats_in->count -
+                       sizeof(CXLSupportedFeatureHeader)) /
+                       sizeof(CXLSupportedFeatureEntry);
+    }
+    if (req_entries > CXL_FEATURE_MAX) {
+        req_entries = CXL_FEATURE_MAX;
+    }
+    supported_feats->hdr.nsuppfeats_dev = CXL_FEATURE_MAX;
+    index = get_feats_in->start_index;
+
+    entry = 0;
+    while (entry < req_entries) {
+        switch (index) {
+        default:
+            break;
+        }
+        index++;
+        entry++;
+    }
+
+    supported_feats->hdr.entries = feat_entries;
+    *len_out = sizeof(CXLSupportedFeatureHeader) +
+               feat_entries * sizeof(CXLSupportedFeatureEntry);
+
+    return CXL_MBOX_SUCCESS;
+}
+
+/* CXL r3.0 section 8.2.9.6.2: Get Feature (Opcode 0501h) */
+static CXLRetCode cmd_features_get_feature(const struct cxl_cmd *cmd,
+                                           uint8_t *payload_in,
+                                           size_t len_in,
+                                           uint8_t *payload_out,
+                                           size_t *len_out,
+                                           CXLCCI *cci)
+{
+    struct {
+        QemuUUID uuid;
+        uint16_t offset;
+        uint16_t count;
+        uint8_t selection;
+    } QEMU_PACKED QEMU_ALIGNED(16) * get_feature;
+    uint16_t bytes_to_copy = 0;
+
+    get_feature = (void *)payload_in;
+
+    if (get_feature->offset + get_feature->count > cci->payload_max) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    *len_out = bytes_to_copy;
+
+    return CXL_MBOX_SUCCESS;
+}
+
+/* CXL r3.0 section 8.2.9.6.3: Set Feature (Opcode 0502h) */
+static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
+                                           uint8_t *payload_in,
+                                           size_t len_in,
+                                           uint8_t *payload_out,
+                                           size_t *len_out,
+                                           CXLCCI *cci)
+{
+    return CXL_MBOX_SUCCESS;
+}
+
 /* 8.2.9.5.1.1 */
 static CXLRetCode cmd_identify_memory_device(const struct cxl_cmd *cmd,
                                              uint8_t *payload_in,
@@ -1954,6 +2087,13 @@ static const struct cxl_cmd cxl_cmd_set[256][256] = {
     [LOGS][GET_SUPPORTED] = { "LOGS_GET_SUPPORTED", cmd_logs_get_supported,
                               0, 0 },
     [LOGS][GET_LOG] = { "LOGS_GET_LOG", cmd_logs_get_log, 0x18, 0 },
+    [FEATURES][GET_SUPPORTED] = { "FEATURES_GET_SUPPORTED",
+                                  cmd_features_get_supported, 0x8, 0 },
+    [FEATURES][GET_FEATURE] = { "FEATURES_GET_FEATURE",
+                                cmd_features_get_feature, 0x15, 0 },
+    [FEATURES][SET_FEATURE] = { "FEATURES_SET_FEATURE",
+                                cmd_features_set_feature,
+                                ~0, CXL_MBOX_IMMEDIATE_CONFIG_CHANGE },
     [IDENTIFY][MEMORY_DEVICE] = { "IDENTIFY_MEMORY_DEVICE",
                                   cmd_identify_memory_device, 0, 0 },
     [CCLS][GET_PARTITION_INFO] = { "CCLS_GET_PARTITION_INFO",
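
As an illustration of the payload layouts the patch above implements (an editor's sketch, not part of the series): Get Supported Features (opcode 0500h) takes an 8-byte count/start_index input and returns an 8-byte header followed by fixed-size feature entries, while Get Feature (opcode 0501h) takes the 21-byte uuid/offset/count/selection input, matching the 0x8 and 0x15 input sizes registered in the command table. The host-side structures and the parse_supported_features() helper below are hypothetical mirrors of the device-side definitions.

/* Editor's sketch, not part of the patch: hypothetical host-side mirrors of
 * the payload layouts defined above, plus a helper that walks the output of
 * Get Supported Features. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct feats_hdr {            /* mirrors CXLSupportedFeatureHeader */
    uint16_t entries;         /* entries returned in this response */
    uint16_t nsuppfeats_dev;  /* total features the device supports */
    uint32_t reserved;
} __attribute__((packed));

struct feat_entry {           /* mirrors CXLSupportedFeatureEntry */
    uint8_t  uuid[16];
    uint16_t feat_index;
    uint16_t get_feat_size;
    uint16_t set_feat_size;
    uint32_t attrb_flags;     /* bit 0: attributes changeable */
    uint8_t  get_feat_version;
    uint8_t  set_feat_version;
    uint16_t set_feat_effects;
    uint8_t  rsvd[18];
} __attribute__((packed));

static void parse_supported_features(const uint8_t *payload, size_t len)
{
    struct feats_hdr hdr;
    size_t off = sizeof(hdr);

    if (len < sizeof(hdr)) {
        return;
    }
    memcpy(&hdr, payload, sizeof(hdr));
    printf("%u of %u supported feature entries returned\n",
           hdr.entries, hdr.nsuppfeats_dev);

    for (uint16_t i = 0; i < hdr.entries; i++, off += sizeof(struct feat_entry)) {
        struct feat_entry e;

        if (off + sizeof(e) > len) {
            break;
        }
        memcpy(&e, payload + off, sizeof(e));
        printf("  index %u: get %u bytes, set %u bytes, changeable=%u\n",
               e.feat_index, e.get_feat_size, e.set_feat_size,
               e.attrb_flags & 0x1);
    }
}

int main(void)
{
    /* Hypothetical mailbox output: a header reporting zero entries. */
    uint8_t buf[sizeof(struct feats_hdr)] = { 0 };

    parse_supported_features(buf, sizeof(buf));
    return 0;
}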
From patchwork Tue Nov 14 12:47:10 2023
X-Patchwork-Id: 13455253
From: Shiju Jose <shiju.jose@huawei.com>
Subject: [RFC PATCH 2/3] hw/cxl/cxl-mailbox-utils: Add device patrol scrub control feature
Date: Tue, 14 Nov 2023 20:47:10 +0800
Message-ID: <20231114124711.1128-3-shiju.jose@huawei.com>
In-Reply-To: <20231114124711.1128-1-shiju.jose@huawei.com>
References: <20231114124711.1128-1-shiju.jose@huawei.com>

CXL spec 3.1 section 8.2.9.9.11.1 describes the device patrol scrub control
feature. Device patrol scrub proactively locates and corrects errors on a
regular cycle. The patrol scrub control lets the requester configure the
patrol scrub input configuration: the requester can specify the number of
hours in which a patrol scrub cycle must complete, provided it is not less
than the minimum cycle length the device is capable of. In addition, the
patrol scrub control allows the host to enable and disable the feature, for
example when background operations must be turned off for performance-aware
operations.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 hw/cxl/cxl-mailbox-utils.c | 98 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 97 insertions(+), 1 deletion(-)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 93960afd44..6ab3e74059 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -809,6 +809,7 @@ typedef struct CXLSupportedFeatureEntry {
 } QEMU_PACKED CXLSupportedFeatureEntry;
 
 enum CXL_SUPPORTED_FEATURES_LIST {
+    CXL_FEATURE_PATROL_SCRUB = 0,
     CXL_FEATURE_MAX
 };
 
@@ -827,6 +828,38 @@ typedef struct CXLSetFeatureInHeader {
 #define CXL_SET_FEATURE_FLAG_FINISH_DATA_TRANSFER      3
 #define CXL_SET_FEATURE_FLAG_ABORT_DATA_TRANSFER       4
 
+
+/* CXL r3.1 section 8.2.9.9.11.1: Device Patrol Scrub Control Feature */
+static const QemuUUID patrol_scrub_uuid = {
+    .data = UUID(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33,
+                 0x75, 0x77, 0x4e, 0x06, 0xdb, 0x8a)
+};
+
+#define CXL_MEMDEV_PS_GET_FEATURE_VERSION                0x01
+#define CXL_MEMDEV_PS_SET_FEATURE_VERSION                0x01
+#define CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_DEFAULT     BIT(0)
+#define CXL_MEMDEV_PS_SCRUB_REALTIME_REPORT_CAP_DEFAULT  BIT(1)
+#define CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_DEFAULT            12
+#define CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_DEFAULT            1
+#define CXL_MEMDEV_PS_ENABLE_DEFAULT                     0
+
+/* CXL memdev patrol scrub control attributes */
+struct CXLMemPatrolScrubReadAttrbs {
+    uint8_t scrub_cycle_cap;
+    uint16_t scrub_cycle;
+    uint8_t scrub_flags;
+} QEMU_PACKED cxl_memdev_ps_feat_read_attrbs;
+
+typedef struct CXLMemPatrolScrubWriteAttrbs {
+    uint8_t scrub_cycle_hr;
+    uint8_t scrub_flags;
+} QEMU_PACKED CXLMemPatrolScrubWriteAttrbs;
+
+typedef struct CXLMemPatrolScrubSetFeature {
+    CXLSetFeatureInHeader hdr;
+    CXLMemPatrolScrubWriteAttrbs feat_data;
+} QEMU_PACKED QEMU_ALIGNED(16) CXLMemPatrolScrubSetFeature;
+
 /* CXL r3.0 section 8.2.9.6.1: Get Supported Features (Opcode 0500h) */
 static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
                                              uint8_t *payload_in,
@@ -850,7 +883,7 @@ static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
     uint16_t feat_entries = 0;
 
     if (get_feats_in->count < sizeof(CXLSupportedFeatureHeader) ||
-        get_feats_in->start_index > CXL_FEATURE_MAX) {
+        get_feats_in->start_index >= CXL_FEATURE_MAX) {
         return CXL_MBOX_INVALID_INPUT;
     } else {
         req_entries = (get_feats_in->count -
@@ -866,6 +899,31 @@ static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
     entry = 0;
     while (entry < req_entries) {
         switch (index) {
+        case CXL_FEATURE_PATROL_SCRUB:
+            /* Fill supported feature entry for device patrol scrub control */
+            supported_feats->feat_entries[entry] =
+                (struct CXLSupportedFeatureEntry) {
+                .uuid = patrol_scrub_uuid,
+                .feat_index = index,
+                .get_feat_size = sizeof(cxl_memdev_ps_feat_read_attrbs),
+                .set_feat_size = sizeof(CXLMemPatrolScrubWriteAttrbs),
+                /* Bit[0] : 1, feature attributes changable */
+                .attrb_flags = 0x1,
+                .get_feat_version = CXL_MEMDEV_PS_GET_FEATURE_VERSION,
+                .set_feat_version = CXL_MEMDEV_PS_SET_FEATURE_VERSION,
+                .set_feat_effects = 0,
+            };
+            feat_entries++;
+            /* Set default value for device patrol scrub read attributes */
+            cxl_memdev_ps_feat_read_attrbs.scrub_cycle_cap =
+                CXL_MEMDEV_PS_SCRUB_CYCLE_CHANGE_CAP_DEFAULT |
+                CXL_MEMDEV_PS_SCRUB_REALTIME_REPORT_CAP_DEFAULT;
+            cxl_memdev_ps_feat_read_attrbs.scrub_cycle =
+                CXL_MEMDEV_PS_CUR_SCRUB_CYCLE_DEFAULT |
+                (CXL_MEMDEV_PS_MIN_SCRUB_CYCLE_DEFAULT << 8);
+            cxl_memdev_ps_feat_read_attrbs.scrub_flags =
+                CXL_MEMDEV_PS_ENABLE_DEFAULT;
+            break;
         default:
             break;
         }
@@ -902,6 +960,21 @@ static CXLRetCode cmd_features_get_feature(const struct cxl_cmd *cmd,
         return CXL_MBOX_INVALID_INPUT;
     }
 
+    if (qemu_uuid_is_equal(&get_feature->uuid, &patrol_scrub_uuid)) {
+        if (get_feature->offset >= sizeof(cxl_memdev_ps_feat_read_attrbs)) {
+            return CXL_MBOX_INVALID_INPUT;
+        }
+        bytes_to_copy = sizeof(cxl_memdev_ps_feat_read_attrbs) -
+                        get_feature->offset;
+        bytes_to_copy = (bytes_to_copy > get_feature->count) ?
+                        get_feature->count : bytes_to_copy;
+        memcpy(payload_out,
+               &cxl_memdev_ps_feat_read_attrbs + get_feature->offset,
+               bytes_to_copy);
+    } else {
+        return CXL_MBOX_UNSUPPORTED;
+    }
+
     *len_out = bytes_to_copy;
 
     return CXL_MBOX_SUCCESS;
@@ -915,6 +988,29 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
                                            size_t *len_out,
                                            CXLCCI *cci)
 {
+    CXLMemPatrolScrubWriteAttrbs *ps_write_attrbs;
+    CXLMemPatrolScrubSetFeature *ps_set_feature;
+    CXLSetFeatureInHeader *hdr = (void *)payload_in;
+
+    if (qemu_uuid_is_equal(&hdr->uuid, &patrol_scrub_uuid)) {
+        if (hdr->version != CXL_MEMDEV_PS_SET_FEATURE_VERSION ||
+            (hdr->flags & CXL_SET_FEATURE_FLAG_DATA_TRANSFER_MASK) !=
+            CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER) {
+            return CXL_MBOX_UNSUPPORTED;
+        }
+
+        ps_set_feature = (void *)payload_in;
+        ps_write_attrbs = &ps_set_feature->feat_data;
+        cxl_memdev_ps_feat_read_attrbs.scrub_cycle &= ~0xFF;
+        cxl_memdev_ps_feat_read_attrbs.scrub_cycle |=
+            ps_write_attrbs->scrub_cycle_hr & 0xFF;
+        cxl_memdev_ps_feat_read_attrbs.scrub_flags &= ~0x1;
+        cxl_memdev_ps_feat_read_attrbs.scrub_flags |=
+            ps_write_attrbs->scrub_flags & 0x1;
+    } else {
+        return CXL_MBOX_UNSUPPORTED;
+    }
+
     return CXL_MBOX_SUCCESS;
 }
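
To spell out the packing used in the patch above (an editor's sketch, not part of the series): the device model keeps the current patrol scrub cycle in the low byte of scrub_cycle and the minimum supported cycle in the next byte, bit 0 of scrub_flags is the enable bit, and Set Feature only accepts a full data transfer whose payload carries scrub_cycle_hr and scrub_flags after the common input header. The helper names below are hypothetical.

/* Editor's sketch, not part of the patch: pack and unpack the patrol scrub
 * attributes the way the device model above stores them. */
#include <stdint.h>
#include <stdio.h>

/* mirrors CXLMemPatrolScrubReadAttrbs */
struct ps_read_attrs {
    uint8_t  scrub_cycle_cap;  /* bit 0: cycle changeable, bit 1: realtime reports */
    uint16_t scrub_cycle;      /* [7:0] current cycle (hours), [15:8] minimum cycle */
    uint8_t  scrub_flags;      /* bit 0: patrol scrub enabled */
} __attribute__((packed));

static uint8_t ps_current_cycle_hours(const struct ps_read_attrs *a)
{
    return a->scrub_cycle & 0xFF;
}

static uint8_t ps_minimum_cycle_hours(const struct ps_read_attrs *a)
{
    return (a->scrub_cycle >> 8) & 0xFF;
}

static void ps_request_cycle(struct ps_read_attrs *a, uint8_t hours, int enable)
{
    /* Mimics what the full-transfer path of cmd_features_set_feature() does. */
    a->scrub_cycle = (a->scrub_cycle & ~0xFF) | hours;
    a->scrub_flags = (a->scrub_flags & ~0x1) | (enable & 0x1);
}

int main(void)
{
    /* Defaults from the patch: current 12h, minimum 1h, disabled. */
    struct ps_read_attrs a = {
        .scrub_cycle_cap = 0x3,
        .scrub_cycle = 12 | (1 << 8),
        .scrub_flags = 0,
    };

    ps_request_cycle(&a, 24, 1);
    printf("cycle=%uh (min %uh), enabled=%u\n",
           ps_current_cycle_hours(&a), ps_minimum_cycle_hours(&a),
           a.scrub_flags & 0x1);
    return 0;
}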
From patchwork Tue Nov 14 12:47:11 2023
X-Patchwork-Id: 13455250
From: Shiju Jose <shiju.jose@huawei.com>
Subject: [RFC PATCH 3/3] hw/cxl/cxl-mailbox-utils: Add device DDR5 ECS control feature
Date: Tue, 14 Nov 2023 20:47:11 +0800
Message-ID: <20231114124711.1128-4-shiju.jose@huawei.com>
In-Reply-To: <20231114124711.1128-1-shiju.jose@huawei.com>
References: <20231114124711.1128-1-shiju.jose@huawei.com>

CXL spec 3.1 section 8.2.9.9.11.2 describes the DDR5 Error Check Scrub (ECS)
control feature. ECS is defined in the JEDEC DDR5 SDRAM specification
(JESD79-5); it allows the DRAM to internally read, correct single-bit errors,
and write the corrected data bits back to the DRAM array while providing
transparency into error counts. The ECS control feature lets the requester
configure the ECS input configuration during system boot or at run-time: the
requester can change the log entry type, change the ECS threshold count
(within the values defined by the DDR5 mode registers), switch between
codeword mode and row count mode, and reset the ECS counter.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
---
 hw/cxl/cxl-mailbox-utils.c | 97 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 96 insertions(+), 1 deletion(-)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 6ab3e74059..54bc486420 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -810,6 +810,7 @@ typedef struct CXLSupportedFeatureEntry {
 
 enum CXL_SUPPORTED_FEATURES_LIST {
     CXL_FEATURE_PATROL_SCRUB = 0,
+    CXL_FEATURE_DDR5_ECS,
     CXL_FEATURE_MAX
 };
 
@@ -860,6 +861,39 @@ typedef struct CXLMemPatrolScrubSetFeature {
     CXLMemPatrolScrubWriteAttrbs feat_data;
 } QEMU_PACKED QEMU_ALIGNED(16) CXLMemPatrolScrubSetFeature;
 
+/* CXL r3.1 section 8.2.9.9.11.2: DDR5 Error Check Scrub (ECS) Control Feature */
+static const QemuUUID ddr5_ecs_uuid = {
+    .data = UUID(0xe5b13f22, 0x2328, 0x4a14, 0xb8, 0xba,
+                 0xb9, 0x69, 0x1e, 0x89, 0x33, 0x86)
+};
+
+#define CXL_DDR5_ECS_GET_FEATURE_VERSION          0x01
+#define CXL_DDR5_ECS_SET_FEATURE_VERSION          0x01
+#define CXL_DDR5_ECS_LOG_ENTRY_TYPE_DEFAULT       0x01
+#define CXL_DDR5_ECS_REALTIME_REPORT_CAP_DEFAULT  1
+#define CXL_DDR5_ECS_THRESHOLD_COUNT_DEFAULT      3 /* 3: 256, 4: 1024, 5: 4096 */
+#define CXL_DDR5_ECS_MODE_DEFAULT                 0
+
+#define CXL_DDR5_ECS_NUM_MEDIA_FRUS   3
+
+/* CXL memdev DDR5 ECS control attributes */
+struct CXLMemECSReadAttrbs {
+    uint8_t ecs_log_cap;
+    uint8_t ecs_cap;
+    uint16_t ecs_config;
+    uint8_t ecs_flags;
+} QEMU_PACKED cxl_ddr5_ecs_feat_read_attrbs[CXL_DDR5_ECS_NUM_MEDIA_FRUS];
+
+typedef struct CXLDDR5ECSWriteAttrbs {
+    uint8_t ecs_log_cap;
+    uint16_t ecs_config;
+} QEMU_PACKED CXLDDR5ECSWriteAttrbs;
+
+typedef struct CXLDDR5ECSSetFeature {
+    CXLSetFeatureInHeader hdr;
+    CXLDDR5ECSWriteAttrbs feat_data[];
+} QEMU_PACKED QEMU_ALIGNED(16) CXLDDR5ECSSetFeature;
+
 /* CXL r3.0 section 8.2.9.6.1: Get Supported Features (Opcode 0500h) */
 static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
                                              uint8_t *payload_in,
@@ -878,7 +912,7 @@ static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
         CXLSupportedFeatureHeader hdr;
         CXLSupportedFeatureEntry feat_entries[];
     } QEMU_PACKED QEMU_ALIGNED(16) * supported_feats = (void *)payload_out;
-    uint16_t index;
+    uint16_t count, index;
     uint16_t entry, req_entries;
     uint16_t feat_entries = 0;
 
@@ -924,6 +958,35 @@ static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
             cxl_memdev_ps_feat_read_attrbs.scrub_flags =
                 CXL_MEMDEV_PS_ENABLE_DEFAULT;
             break;
+        case CXL_FEATURE_DDR5_ECS:
+            /* Fill supported feature entry for device DDR5 ECS control */
+            supported_feats->feat_entries[entry] =
+                (struct CXLSupportedFeatureEntry) {
+                .uuid = ddr5_ecs_uuid,
+                .feat_index = index,
+                .get_feat_size = CXL_DDR5_ECS_NUM_MEDIA_FRUS *
+                                 sizeof(struct CXLMemECSReadAttrbs),
+                .set_feat_size = CXL_DDR5_ECS_NUM_MEDIA_FRUS *
+                                 sizeof(CXLDDR5ECSWriteAttrbs),
+                .attrb_flags = 0x1,
+                .get_feat_version = CXL_DDR5_ECS_GET_FEATURE_VERSION,
+                .set_feat_version = CXL_DDR5_ECS_SET_FEATURE_VERSION,
+                .set_feat_effects = 0,
+            };
+            feat_entries++;
+            /* Set default value for DDR5 ECS read attributes */
+            for (count = 0; count < CXL_DDR5_ECS_NUM_MEDIA_FRUS; count++) {
+                cxl_ddr5_ecs_feat_read_attrbs[count].ecs_log_cap =
+                    CXL_DDR5_ECS_LOG_ENTRY_TYPE_DEFAULT;
+                cxl_ddr5_ecs_feat_read_attrbs[count].ecs_cap =
+                    CXL_DDR5_ECS_REALTIME_REPORT_CAP_DEFAULT;
+                cxl_ddr5_ecs_feat_read_attrbs[count].ecs_config =
+                    CXL_DDR5_ECS_THRESHOLD_COUNT_DEFAULT |
+                    (CXL_DDR5_ECS_MODE_DEFAULT << 3);
+                /* Reserved */
+                cxl_ddr5_ecs_feat_read_attrbs[count].ecs_flags = 0;
+            }
+            break;
         default:
             break;
         }
@@ -971,6 +1034,19 @@ static CXLRetCode cmd_features_get_feature(const struct cxl_cmd *cmd,
         memcpy(payload_out,
                &cxl_memdev_ps_feat_read_attrbs + get_feature->offset,
                bytes_to_copy);
+    } else if (qemu_uuid_is_equal(&get_feature->uuid, &ddr5_ecs_uuid)) {
+        if (get_feature->offset >= CXL_DDR5_ECS_NUM_MEDIA_FRUS *
+                                   sizeof(struct CXLMemECSReadAttrbs)) {
+            return CXL_MBOX_INVALID_INPUT;
+        }
+        bytes_to_copy = CXL_DDR5_ECS_NUM_MEDIA_FRUS *
+                        sizeof(struct CXLMemECSReadAttrbs) -
+                        get_feature->offset;
+        bytes_to_copy = (bytes_to_copy > get_feature->count) ?
+                        get_feature->count : bytes_to_copy;
+        memcpy(payload_out,
+               &cxl_ddr5_ecs_feat_read_attrbs + get_feature->offset,
+               bytes_to_copy);
     } else {
         return CXL_MBOX_UNSUPPORTED;
     }
@@ -988,8 +1064,11 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
                                            size_t *len_out,
                                            CXLCCI *cci)
 {
+    uint16_t count;
     CXLMemPatrolScrubWriteAttrbs *ps_write_attrbs;
+    CXLDDR5ECSWriteAttrbs *ecs_write_attrbs;
     CXLMemPatrolScrubSetFeature *ps_set_feature;
+    CXLDDR5ECSSetFeature *ecs_set_feature;
     CXLSetFeatureInHeader *hdr = (void *)payload_in;
 
     if (qemu_uuid_is_equal(&hdr->uuid, &patrol_scrub_uuid)) {
@@ -1007,6 +1086,22 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
         cxl_memdev_ps_feat_read_attrbs.scrub_flags &= ~0x1;
         cxl_memdev_ps_feat_read_attrbs.scrub_flags |=
             ps_write_attrbs->scrub_flags & 0x1;
+    } else if (qemu_uuid_is_equal(&hdr->uuid,
+                                  &ddr5_ecs_uuid)) {
+        if (hdr->version != CXL_DDR5_ECS_SET_FEATURE_VERSION ||
+            (hdr->flags & CXL_SET_FEATURE_FLAG_DATA_TRANSFER_MASK) !=
+            CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER) {
+            return CXL_MBOX_UNSUPPORTED;
+        }
+
+        ecs_set_feature = (void *)payload_in;
+        ecs_write_attrbs = ecs_set_feature->feat_data;
+        for (count = 0; count < CXL_DDR5_ECS_NUM_MEDIA_FRUS; count++) {
+            cxl_ddr5_ecs_feat_read_attrbs[count].ecs_log_cap =
+                ecs_write_attrbs[count].ecs_log_cap;
+            cxl_ddr5_ecs_feat_read_attrbs[count].ecs_config =
+                ecs_write_attrbs[count].ecs_config & 0x1F;
+        }
     } else {
         return CXL_MBOX_UNSUPPORTED;
     }
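
As a bit-layout note on the ECS attributes above (an editor's sketch, not part of the series): the per-FRU ecs_config word carries the threshold count code in bits [2:0] (3: 256, 4: 1024, 5: 4096, per the comment on the default) and the codeword/row-count mode select in bit 3 (which polarity maps to which mode is left to the spec text), and the Set Feature handler masks incoming values with 0x1F. The decode helper below is hypothetical.

/* Editor's sketch, not part of the patch: decode the DDR5 ECS config word as
 * the device model above packs it (threshold code in bits [2:0], mode select
 * in bit 3, incoming writes masked with 0x1F). */
#include <stdint.h>
#include <stdio.h>

#define ECS_CONFIG_THRESHOLD_MASK  0x7
#define ECS_CONFIG_MODE_BIT        (1 << 3)
#define ECS_CONFIG_WRITE_MASK      0x1F

static unsigned ecs_threshold_count(uint16_t ecs_config)
{
    switch (ecs_config & ECS_CONFIG_THRESHOLD_MASK) {
    case 3:
        return 256;
    case 4:
        return 1024;
    case 5:
        return 4096;
    default:
        return 0;   /* encoding not listed in the patch's comment */
    }
}

int main(void)
{
    /* Default from the patch: threshold code 3, mode bit clear. */
    uint16_t ecs_config = 3 | (0 << 3);

    /* A requester's write is truncated to the low five bits. */
    uint16_t requested = 0xFF04;
    uint16_t applied = requested & ECS_CONFIG_WRITE_MASK;

    printf("default: threshold count=%u, mode bit=%u\n",
           ecs_threshold_count(ecs_config),
           !!(ecs_config & ECS_CONFIG_MODE_BIT));
    printf("applied write 0x%02x: threshold count=%u\n",
           applied, ecs_threshold_count(applied));
    return 0;
}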