From patchwork Fri Jul 21 16:35:04 2023
X-Patchwork-Submitter: Gregory Price
X-Patchwork-Id: 13322352
From: Gregory Price
To: qemu-devel@nongnu.org
Cc: jonathan.cameron@huawei.com, linux-cxl@vger.kernel.org,
    junhee.ryu@sk.com, kwangjin.ko@sk.com, Gregory Price
Subject: [PATCH 1/4] cxl/mailbox: change CCI cmd set structure to be a
 member, not a reference
Date: Fri, 21 Jul 2023 12:35:04 -0400
Message-Id: <20230721163505.1910-2-gregory.price@memverge.com>
In-Reply-To: <20230721163505.1910-1-gregory.price@memverge.com>
References: <20230721163505.1910-1-gregory.price@memverge.com>
X-Mailing-List: linux-cxl@vger.kernel.org

This allows devices to have fully customized CCIs, along with complex
devices where wrapper devices can override or add additional CCI
commands without having to replicate full command structures or pollute
a base device with every command that might ever be used.
Signed-off-by: Gregory Price
---
 hw/cxl/cxl-mailbox-utils.c  | 18 ++++++++++++++----
 include/hw/cxl/cxl_device.h |  2 +-
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 2819914e8d..ddee3f1718 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -1373,9 +1373,19 @@ void cxl_init_cci(CXLCCI *cci, size_t payload_max)
                        bg_timercb, cci);
 }

+static void cxl_copy_cci_commands(CXLCCI *cci, const struct cxl_cmd (*cxl_cmds)[256]) {
+    for (int set = 0; set < 256; set++) {
+        for (int cmd = 0; cmd < 256; cmd++) {
+            if (cxl_cmds[set][cmd].handler) {
+                cci->cxl_cmd_set[set][cmd] = cxl_cmds[set][cmd];
+            }
+        }
+    }
+}
+
 void cxl_initialize_mailbox_swcci(CXLCCI *cci, DeviceState *intf, DeviceState *d,
                                   size_t payload_max)
 {
-    cci->cxl_cmd_set = cxl_cmd_set_sw;
+    cxl_copy_cci_commands(cci, cxl_cmd_set_sw);
     cci->d = d;
     cci->intf = intf;
     cxl_init_cci(cci, payload_max);
@@ -1383,7 +1393,7 @@ void cxl_initialize_mailbox_swcci(CXLCCI *cci, DeviceState *intf, DeviceState *d

 void cxl_initialize_mailbox_t3(CXLCCI *cci, DeviceState *d, size_t payload_max)
 {
-    cci->cxl_cmd_set = cxl_cmd_set;
+    cxl_copy_cci_commands(cci, cxl_cmd_set);
     cci->d = d;

     /* No separation for PCI MB as protocol handled in PCI device */
@@ -1398,7 +1408,7 @@ static const struct cxl_cmd cxl_cmd_set_t3_mctp[256][256] = {
 void cxl_initialize_t3_mctpcci(CXLCCI *cci, DeviceState *d, DeviceState *intf,
                                size_t payload_max)
 {
-    cci->cxl_cmd_set = cxl_cmd_set_t3_mctp;
+    cxl_copy_cci_commands(cci, cxl_cmd_set_t3_mctp);
     cci->d = d;
     cci->intf = intf;
     cxl_init_cci(cci, payload_max);
@@ -1414,7 +1424,7 @@ static const struct cxl_cmd cxl_cmd_set_usp_mctp[256][256] = {
 void cxl_initialize_usp_mctpcci(CXLCCI *cci, DeviceState *d, DeviceState *intf,
                                 size_t payload_max)
 {
-    cci->cxl_cmd_set = cxl_cmd_set_usp_mctp;
+    cxl_copy_cci_commands(cci, cxl_cmd_set_usp_mctp);
     cci->d = d;
     cci->intf = intf;
     cxl_init_cci(cci, payload_max);
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index c68981b618..9a3c8b2dfa 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -163,7 +163,7 @@ typedef struct CXLEventLog {
 } CXLEventLog;

 typedef struct CXLCCI {
-    const struct cxl_cmd (*cxl_cmd_set)[256];
+    struct cxl_cmd cxl_cmd_set[256][256];
     struct cel_log {
         uint16_t opcode;
         uint16_t effect;

From patchwork Fri Jul 21 16:35:06 2023
X-Patchwork-Submitter: Gregory Price
X-Patchwork-Id: 13322353
From: Gregory Price
To: qemu-devel@nongnu.org
Cc: jonathan.cameron@huawei.com, linux-cxl@vger.kernel.org,
    junhee.ryu@sk.com, kwangjin.ko@sk.com, Gregory Price
Subject: [PATCH 2/4] cxl/mailbox: interface to add CCI commands to an
 existing CCI
Date: Fri, 21 Jul 2023 12:35:06 -0400
Message-Id: <20230721163505.1910-3-gregory.price@memverge.com>
In-Reply-To: <20230721163505.1910-1-gregory.price@memverge.com>
References: <20230721163505.1910-1-gregory.price@memverge.com>
X-Mailing-List: linux-cxl@vger.kernel.org

This enables wrapper devices to customize the base device's CCI (for
example, with custom commands outside the specification) without the
need to change the base device. This also enables the base device to
dispatch those commands without requiring additional driver support.

Signed-off-by: Gregory Price
Signed-off-by: Jonathan Cameron
---
 hw/cxl/cxl-mailbox-utils.c  | 19 +++++++++++++++++++
 include/hw/cxl/cxl_device.h |  2 ++
 2 files changed, 21 insertions(+)

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index ddee3f1718..cad0cd0adb 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -1383,6 +1383,25 @@ static void cxl_copy_cci_commands(CXLCCI *cci, const struct cxl_cmd (*cxl_cmds)[
     }
 }

+void cxl_add_cci_commands(CXLCCI *cci, const struct cxl_cmd (*cxl_cmd_set)[256], size_t payload_max)
+{
+    cci->payload_max = payload_max > cci->payload_max ?
+                       payload_max : cci->payload_max;
+    for (int set = 0; set < 256; set++) {
+        for (int cmd = 0; cmd < 256; cmd++) {
+            if (cxl_cmd_set[set][cmd].handler) {
+                const struct cxl_cmd *c = &cxl_cmd_set[set][cmd];
+                cci->cxl_cmd_set[set][cmd] = *c;
+                struct cel_log *log =
+                    &cci->cel_log[cci->cel_size];
+
+                log->opcode = (set << 8) | cmd;
+                log->effect = c->effect;
+                cci->cel_size++;
+            }
+        }
+    }
+}
+
 void cxl_initialize_mailbox_swcci(CXLCCI *cci, DeviceState *intf, DeviceState *d,
                                   size_t payload_max)
 {
     cxl_copy_cci_commands(cci, cxl_cmd_set_sw);
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 9a3c8b2dfa..abc8405cc5 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -297,6 +297,8 @@ void cxl_initialize_mailbox_t3(CXLCCI *cci, DeviceState *d,
                                size_t payload_max);
 void cxl_initialize_mailbox_swcci(CXLCCI *cci, DeviceState *intf,
                                   DeviceState *d, size_t payload_max);
 void cxl_init_cci(CXLCCI *cci, size_t payload_max);
+void cxl_add_cci_commands(CXLCCI *cci, const struct cxl_cmd (*cxl_cmd_set)[256],
+                          size_t payload_max);
 int cxl_process_cci_message(CXLCCI *cci, uint8_t set, uint8_t cmd,
                             size_t len_in, uint8_t *pl_in,
                             size_t *len_out, uint8_t *pl_out,

From patchwork Fri Jul 21 16:35:08 2023
X-Patchwork-Submitter: Gregory Price
X-Patchwork-Id: 13322354
From: Gregory Price
To: qemu-devel@nongnu.org
Cc: jonathan.cameron@huawei.com, linux-cxl@vger.kernel.org,
    junhee.ryu@sk.com, kwangjin.ko@sk.com, Gregory Price
Subject: [PATCH 3/4] cxl/type3: minimum MHD cci support
Date: Fri, 21 Jul 2023 12:35:08 -0400
Message-Id: <20230721163505.1910-4-gregory.price@memverge.com>
In-Reply-To: <20230721163505.1910-1-gregory.price@memverge.com>
References: <20230721163505.1910-1-gregory.price@memverge.com>
X-Mailing-List: linux-cxl@vger.kernel.org

Implement the MHD GET_INFO cci command and add a shared memory region
to the type3 device to host the information. Add a helper program to
initialize this shared memory region.

Add a function pointer to type3 devices for future work that will allow
an mhd device to provide a hook to validate whether a memory access is
valid or not.

For now, limit the number of LDs to the number of heads. Later, this
limitation will need to be lifted for MH-MLDs.

Intended use case:
1. Create the shared memory region
2. Format the shared memory region
3.
Launch QEMU with `is_mhd=true,mhd_head=N,mhd_shmid=$shmid`

shmid=`ipcmk -M 4096 | grep -o -E '[0-9]+' | head -1`
cxl_mhd_init 4 $shmid
qemu-system-x86_64 \
  -nographic \
  -accel kvm \
  -drive file=./mhd.qcow2,format=qcow2,index=0,media=disk,id=hd \
  -m 4G,slots=4,maxmem=8G \
  -smp 4 \
  -machine type=q35,cxl=on,hmat=on \
  -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
  -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
  -object memory-backend-file,id=mem0,mem-path=/tmp/mem0,size=4G,share=true \
  -device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0,sn=66666,is_mhd=true,mhd_head=0,mhd_shmid=$shmid \
  -M cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=4G

Signed-off-by: Gregory Price
---
 hw/cxl/cxl-mailbox-utils.c  | 53 +++++++++++++++++++++++++++++
 hw/mem/cxl_type3.c          | 67 +++++++++++++++++++++++++++++++++++++
 include/hw/cxl/cxl_device.h | 14 ++++++++
 tools/cxl/cxl_mhd_init.c    | 63 ++++++++++++++++++++++++++++++++++
 tools/cxl/meson.build       |  3 ++
 tools/meson.build           |  1 +
 6 files changed, 201 insertions(+)
 create mode 100644 tools/cxl/cxl_mhd_init.c
 create mode 100644 tools/cxl/meson.build

diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index cad0cd0adb..57b8da4376 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -84,6 +84,8 @@ enum {
     #define GET_PHYSICAL_PORT_STATE 0x1
     TUNNEL = 0x53,
     #define MANAGEMENT_COMMAND 0x0
+    MHD = 0x55,
+    #define GET_MHD_INFO 0x0
 };

 /* CCI Message Format CXL r3.0 Figure 7-19 */
@@ -1155,6 +1157,56 @@ static CXLRetCode cmd_media_clear_poison(const struct cxl_cmd *cmd,
     return CXL_MBOX_SUCCESS;
 }

+static CXLRetCode cmd_mhd_get_info(const struct cxl_cmd *cmd,
+                                   uint8_t *payload_in,
+                                   size_t len_in,
+                                   uint8_t *payload_out,
+                                   size_t *len_out,
+                                   CXLCCI *cci)
+{
+    CXLType3Dev *ct3d = CXL_TYPE3(cci->d);
+    struct {
+        uint8_t start_ld;
+        uint8_t ldmap_len;
+    } QEMU_PACKED *input = (void *)payload_in;
+
+    struct {
+        uint8_t nr_lds;
+        uint8_t nr_heads;
+        uint16_t resv1;
+        uint8_t start_ld;
+        uint8_t ldmap_len;
+        uint16_t resv2;
+        uint8_t ldmap[];
+    } QEMU_PACKED *output = (void *)payload_out;
+
+    uint8_t start_ld = input->start_ld;
+    uint8_t ldmap_len = input->ldmap_len;
+    uint8_t i;
+
+    if (!ct3d->is_mhd) {
+        return CXL_MBOX_UNSUPPORTED;
+    }
+
+    if (start_ld >= ct3d->mhd_state->nr_lds) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    output->nr_lds = ct3d->mhd_state->nr_lds;
+    output->nr_heads = ct3d->mhd_state->nr_heads;
+    output->resv1 = 0;
+    output->start_ld = start_ld;
+    output->resv2 = 0;
+
+    for (i = 0; i < ldmap_len && (start_ld + i) < output->nr_lds; i++) {
+        output->ldmap[i] = ct3d->mhd_state->ldmap[start_ld + i];
+    }
+    output->ldmap_len = i;
+
+    *len_out = sizeof(*output) + output->ldmap_len;
+    return CXL_MBOX_SUCCESS;
+}
+
 #define IMMEDIATE_CONFIG_CHANGE (1 << 1)
 #define IMMEDIATE_DATA_CHANGE (1 << 2)
 #define IMMEDIATE_POLICY_CHANGE (1 << 3)
@@ -1195,6 +1247,7 @@ static const struct cxl_cmd cxl_cmd_set[256][256] = {
         cmd_media_inject_poison, 8, 0 },
     [MEDIA_AND_POISON][CLEAR_POISON] = { "MEDIA_AND_POISON_CLEAR_POISON",
         cmd_media_clear_poison, 72, 0 },
+    [MHD][GET_MHD_INFO] = {"GET_MULTI_HEADED_INFO", cmd_mhd_get_info, 2, 0},
 };

 static const struct cxl_cmd cxl_cmd_set_sw[256][256] = {
diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index efb7dece80..c8eb3aa67d 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -18,6 +18,7 @@
 #include "hw/cxl/cxl.h"
 #include "hw/pci/msix.h"
 #include "hw/pci/spdm.h"
+#include <sys/shm.h>

 #define DWORD_BYTE 4
@@ -794,6 +795,48 @@ static DOEProtocol doe_spdm_prot[] = {
     { }
 };

+static bool cxl_setup_mhd(CXLType3Dev *ct3d, Error **errp)
+{
+    if (!ct3d->is_mhd) {
+        ct3d->mhd_access_valid = NULL;
+        return true;
+    } else if (ct3d->is_mhd &&
+               (!ct3d->mhd_shmid || (ct3d->mhd_head == ~(0)))) {
+        error_setg(errp, "is_mhd requires mhd_shmid and mhd_head settings");
+        return false;
+    } else if (!ct3d->is_mhd &&
+               (ct3d->mhd_shmid || (ct3d->mhd_head == ~(0)))) {
+        error_setg(errp, "(is_mhd,mhd_head,mhd_shmid) invalid");
+        return false;
+    }
+
+    if (ct3d->mhd_head >= 32) {
+        error_setg(errp, "MHD Head ID must be between 0-31");
+        return false;
+    }
+
+    ct3d->mhd_state = shmat(ct3d->mhd_shmid, NULL, 0);
+    if (ct3d->mhd_state == (void *)-1) {
+        ct3d->mhd_state = NULL;
+        error_setg(errp, "Unable to attach MHD State. Check ipcs is valid");
+        return false;
+    }
+
+    /* For now, limit the number of heads to the number of LDs (SLD) */
+    if (ct3d->mhd_state->nr_heads <= ct3d->mhd_head) {
+        error_setg(errp, "Invalid head ID for multiheaded device.");
+        return false;
+    }
+
+    if (ct3d->mhd_state->nr_lds <= ct3d->mhd_head) {
+        error_setg(errp, "MHD Shared state does not have sufficient lds.");
+        return false;
+    }
+
+    ct3d->mhd_state->ldmap[ct3d->mhd_head] = ct3d->mhd_head;
+    return true;
+}
+
 static void ct3_realize(PCIDevice *pci_dev, Error **errp)
 {
     CXLType3Dev *ct3d = CXL_TYPE3(pci_dev);
@@ -806,6 +849,10 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp)

     QTAILQ_INIT(&ct3d->error_list);

+    if (!cxl_setup_mhd(ct3d, errp)) {
+        return;
+    }
+
     if (!cxl_setup_memory(ct3d, errp)) {
         return;
     }
@@ -910,6 +957,9 @@ static void ct3_exit(PCIDevice *pci_dev)
     if (ct3d->hostvmem) {
         address_space_destroy(&ct3d->hostvmem_as);
     }
+    if (ct3d->mhd_state) {
+        shmdt(ct3d->mhd_state);
+    }
 }

 static bool cxl_type3_dpa(CXLType3Dev *ct3d, hwaddr host_addr, uint64_t *dpa)
@@ -1006,6 +1056,7 @@ static int cxl_type3_hpa_to_as_and_dpa(CXLType3Dev *ct3d,
 MemTxResult cxl_type3_read(PCIDevice *d, hwaddr host_addr, uint64_t *data,
                            unsigned size, MemTxAttrs attrs)
 {
+    CXLType3Dev *ct3d = CXL_TYPE3(d);
     uint64_t dpa_offset = 0;
     AddressSpace *as = NULL;
     int res;
@@ -1016,16 +1067,23 @@ MemTxResult cxl_type3_read(PCIDevice *d, hwaddr host_addr, uint64_t *data,
         return MEMTX_ERROR;
     }

+    if (ct3d->is_mhd && ct3d->mhd_access_valid) {
+        if (!ct3d->mhd_access_valid(ct3d, dpa_offset, size))
+            return MEMTX_ERROR;
+    }
+
     if (sanitize_running(&CXL_TYPE3(d)->cci)) {
         qemu_guest_getrandom_nofail(data, size);
         return MEMTX_OK;
     }
+
     return address_space_read(as, dpa_offset, attrs, data, size);
 }

 MemTxResult cxl_type3_write(PCIDevice *d, hwaddr host_addr, uint64_t data,
                             unsigned size, MemTxAttrs attrs)
 {
+    CXLType3Dev *ct3d = CXL_TYPE3(d);
     uint64_t dpa_offset = 0;
     AddressSpace *as = NULL;
     int res;
@@ -1035,6 +1093,12 @@ MemTxResult cxl_type3_write(PCIDevice *d, hwaddr host_addr, uint64_t data,
     if (res) {
         return MEMTX_ERROR;
     }
+
+    if (ct3d->is_mhd && ct3d->mhd_access_valid) {
+        if (!ct3d->mhd_access_valid(ct3d, dpa_offset, size))
+            return MEMTX_ERROR;
+    }
+
     if (sanitize_running(&CXL_TYPE3(d)->cci)) {
         return MEMTX_OK;
     }
@@ -1067,6 +1131,9 @@ static Property ct3_props[] = {
     DEFINE_PROP_UINT64("sn", CXLType3Dev, sn, UI64_NULL),
     DEFINE_PROP_STRING("cdat", CXLType3Dev, cxl_cstate.cdat.filename),
     DEFINE_PROP_UINT16("spdm", CXLType3Dev, spdm_port, 0),
+    DEFINE_PROP_BOOL("is_mhd", CXLType3Dev, is_mhd, false),
+    DEFINE_PROP_UINT32("mhd_head", CXLType3Dev, mhd_head, 0),
+    DEFINE_PROP_UINT32("mhd_shmid", CXLType3Dev, mhd_shmid, 0),
     DEFINE_PROP_END_OF_LIST(),
 };

diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index abc8405cc5..b545c5b6f3 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -408,6 +408,12 @@ typedef struct CXLPoison {
 typedef QLIST_HEAD(, CXLPoison) CXLPoisonList;
 #define CXL_POISON_LIST_LIMIT 256

+struct CXLMHD_SharedState {
+    uint8_t nr_heads;
+    uint8_t nr_lds;
+    uint8_t ldmap[];
+};
+
 struct CXLType3Dev {
     /* Private */
     PCIDevice parent_obj;
@@ -442,6 +448,14 @@ struct CXLType3Dev {
     unsigned int poison_list_cnt;
     bool poison_list_overflowed;
     uint64_t poison_list_overflow_ts;
+
+    /* Multi-headed Device */
+    bool is_mhd;
+    uint32_t mhd_head;
+    uint32_t mhd_shmid;
+    struct CXLMHD_SharedState *mhd_state;
+    bool (*mhd_access_valid)(CXLType3Dev *ct3d, uint64_t addr,
+                             unsigned int size);
 };

 #define TYPE_CXL_TYPE3 "cxl-type3"
diff --git a/tools/cxl/cxl_mhd_init.c b/tools/cxl/cxl_mhd_init.c
new file mode 100644
index 0000000000..1303aa9494
--- /dev/null
+++ b/tools/cxl/cxl_mhd_init.c
@@ -0,0 +1,63 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/shm.h>
+
+struct mhd_state {
+    uint8_t nr_heads;
+    uint8_t nr_lds;
+    uint8_t ldmap[];
+};
+
+int main(int argc, char *argv[]) {
+    int shmid = 0;
+    uint32_t heads = 0;
+    struct mhd_state *mhd_state = 0;
+    uint8_t i;
+
+    if (argc != 3) {
+        printf("usage: cxl_mhd_init <heads> <shmid>\n"
+               "\theads : number of heads on the device\n"
+               "\tshmid : /tmp/mytoken.tmp\n");
+        return -1;
+    }
+
+    // must have at least 1 head
+    heads = (uint32_t)atoi(argv[1]);
+    if (heads == 0 || heads > 32) {
+        printf("bad heads argument (1-32)\n");
+        return -1;
+    }
+
+    shmid = (uint32_t)atoi(argv[2]);
+    if (shmid == 0) {
+        printf("bad shmid argument\n");
+        return -1;
+    }
+
+    mhd_state = shmat(shmid, NULL, 0);
+    if (mhd_state == (void *)-1) {
+        printf("Unable to attach to shared memory\n");
+        return -1;
+    }
+
+    // Initialize the mhd_state
+    size_t mhd_state_size = sizeof(struct mhd_state) + (sizeof(uint8_t) * heads);
+    memset(mhd_state, 0, mhd_state_size);
+    mhd_state->nr_heads = heads;
+    mhd_state->nr_lds = heads;
+
+    // Head ID == LD ID for now
+    for (i = 0; i < heads; i++)
+        mhd_state->ldmap[i] = i;
+
+    printf("mhd initialized\n");
+    shmdt(mhd_state);
+    return 0;
+}
diff --git a/tools/cxl/meson.build b/tools/cxl/meson.build
new file mode 100644
index 0000000000..218658fe69
--- /dev/null
+++ b/tools/cxl/meson.build
@@ -0,0 +1,3 @@
+executable('cxl_mhd_init', files('cxl_mhd_init.c'),
+           install: true,
+           install_dir: get_option('libexecdir'))
diff --git a/tools/meson.build b/tools/meson.build
index e69de29bb2..91a1d788cb 100644
--- a/tools/meson.build
+++ b/tools/meson.build
@@ -0,0 +1 @@
+subdir('cxl')

From patchwork Fri Jul 21 16:35:09 2023
X-Patchwork-Submitter: Gregory Price
X-Patchwork-Id: 13322355
From: Gregory Price
To: qemu-devel@nongnu.org
Cc: jonathan.cameron@huawei.com, linux-cxl@vger.kernel.org,
    junhee.ryu@sk.com, kwangjin.ko@sk.com, Gregory Price
Subject: [PATCH 4/4] cxl/vendor: SK hynix Niagara Multi-Headed SLD Device
Date: Fri, 21 Jul 2023 12:35:09 -0400
Message-Id: <20230721163505.1910-5-gregory.price@memverge.com>
In-Reply-To: <20230721163505.1910-1-gregory.price@memverge.com>
References: <20230721163505.1910-1-gregory.price@memverge.com>
X-Mailing-List: linux-cxl@vger.kernel.org

Create a new device to emulate the SK hynix Niagara MHSLD platform.
This device has custom CCI commands that allow for applying isolation
to each memory block between hosts. This enables an early form of
dynamic capacity, whereby the NUMA node maps the entire region, but the
host is responsible for asking the device which memory blocks are
allocated to it, and therefore may be onlined.
To instantiate, wrap a cxl-type3 mhd in a cxl-skh-niagara like so:

-device cxl-type3,bus=rp0,volatile-memdev=mem0,id=cxl-mem0,sn=66666,is_mhd=true,mhd_head=0,mhd_shmid=15
-device cxl-skh-niagara,target=cxl-mem0

The Linux kernel must have raw CXL commands enabled to allow the Niagara
CXL commands to be passed through via the CCI mailbox.

The Niagara MH-SLD has a slightly different shared memory region layout
than the base MHD, so an additional tool ('init_niagara') is located in
the vendor subdirectory.  Use it in place of cxl_mhd_init:

usage: init_niagara <heads> <sections> <section_size> <shmid>
        heads        : number of heads on the device
        sections     : number of sections
        section_size : size of a section in 128MB increments
        shmid        : shmid produced by ipcmk

Example:
$ shmid1=$(ipcmk -M 131072)
$ ./init_niagara 4 32 1 $shmid1

Signed-off-by: Gregory Price
Signed-off-by: Junhee Ryu
Signed-off-by: Kwangjin Ko
---
 hw/cxl/Kconfig                          |   4 +
 hw/cxl/meson.build                      |   2 +
 hw/cxl/vendor/meson.build               |   1 +
 hw/cxl/vendor/skhynix/.gitignore        |   1 +
 hw/cxl/vendor/skhynix/init_niagara.c    |  99 +++++
 hw/cxl/vendor/skhynix/meson.build       |   1 +
 hw/cxl/vendor/skhynix/skhynix_niagara.c | 521 ++++++++++++++++++++++++
 7 files changed, 629 insertions(+)
 create mode 100644 hw/cxl/vendor/meson.build
 create mode 100644 hw/cxl/vendor/skhynix/.gitignore
 create mode 100644 hw/cxl/vendor/skhynix/init_niagara.c
 create mode 100644 hw/cxl/vendor/skhynix/meson.build
 create mode 100644 hw/cxl/vendor/skhynix/skhynix_niagara.c

diff --git a/hw/cxl/Kconfig b/hw/cxl/Kconfig
index c9b2e46bac..dd6c54b54d 100644
--- a/hw/cxl/Kconfig
+++ b/hw/cxl/Kconfig
@@ -2,5 +2,9 @@ config CXL
     bool
     default y if PCI_EXPRESS
 
+config CXL_VENDOR
+    bool
+    default y
+
 config I2C_MCTP_CXL
     bool

diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build
index 1393821fc4..e8c8c1355a 100644
--- a/hw/cxl/meson.build
+++ b/hw/cxl/meson.build
@@ -15,3 +15,5 @@ system_ss.add(when: 'CONFIG_CXL',
 system_ss.add(when: 'CONFIG_I2C_MCTP_CXL', if_true: files('i2c_mctp_cxl.c'))
 system_ss.add(when: 'CONFIG_ALL', if_true: files('cxl-host-stubs.c'))
+
+subdir('vendor')

diff --git a/hw/cxl/vendor/meson.build b/hw/cxl/vendor/meson.build
new file mode 100644
index 0000000000..12db8991f1
--- /dev/null
+++ b/hw/cxl/vendor/meson.build
@@ -0,0 +1 @@
+subdir('skhynix')

diff --git a/hw/cxl/vendor/skhynix/.gitignore b/hw/cxl/vendor/skhynix/.gitignore
new file mode 100644
index 0000000000..6d96de38ea
--- /dev/null
+++ b/hw/cxl/vendor/skhynix/.gitignore
@@ -0,0 +1 @@
+init_niagara

diff --git a/hw/cxl/vendor/skhynix/init_niagara.c b/hw/cxl/vendor/skhynix/init_niagara.c
new file mode 100644
index 0000000000..28612339e0
--- /dev/null
+++ b/hw/cxl/vendor/skhynix/init_niagara.c
@@ -0,0 +1,99 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2023 MemVerge Inc.
+ * Copyright (c) 2023 SK hynix Inc.
+ *
+ * Reference list:
+ * From www.computeexpresslink.org
+ * Compute Express Link (CXL) Specification revision 3.0 Version 1.0
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <signal.h>
+#include <stdbool.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/shm.h>
+
+struct niagara_state {
+    uint8_t nr_heads;
+    uint8_t nr_lds;
+    uint8_t ldmap[65536];
+    uint32_t total_sections;
+    uint32_t free_sections;
+    uint32_t section_size;
+    uint32_t sections[];
+};
+
+int main(int argc, char *argv[])
+{
+    int shmid = 0;
+    uint32_t sections = 0;
+    uint32_t section_size = 0;
+    uint32_t heads = 0;
+    struct niagara_state *niagara_state = NULL;
+    uint8_t i;
+
+    if (argc != 5) {
+        printf("usage: init_niagara <heads> <sections> <section_size> <shmid>\n"
+               "\theads        : number of heads on the device\n"
+               "\tsections     : number of sections\n"
+               "\tsection_size : size of a section in 128MB increments\n"
+               "\tshmid        : shmid produced by ipcmk\n\n"
+               "It is recommended your shared memory region is at least 128KB\n");
+        return -1;
+    }
+
+    /* must have at least 1 head */
+    heads = (uint32_t)atoi(argv[1]);
+    if (heads == 0 || heads > 32) {
+        printf("bad heads argument (1-32)\n");
+        return -1;
+    }
+
+    /* Get number of sections */
+    sections = (uint32_t)atoi(argv[2]);
+    if (sections == 0) {
+        printf("bad sections argument\n");
+        return -1;
+    }
+
+    section_size = (uint32_t)atoi(argv[3]);
+    if (section_size == 0) {
+        printf("bad section size argument\n");
+        return -1;
+    }
+
+    shmid = atoi(argv[4]);
+    if (shmid == 0) {
+        printf("bad shmid argument\n");
+        return -1;
+    }
+
+    niagara_state = shmat(shmid, NULL, 0);
+    if (niagara_state == (void *)-1) {
+        printf("Unable to attach to shared memory\n");
+        return -1;
+    }
+
+    /* Initialize the niagara_state */
+    size_t niagara_state_size = sizeof(struct niagara_state) +
+                                (sizeof(uint32_t) * sections);
+    memset(niagara_state, 0, niagara_state_size);
+    niagara_state->nr_heads = heads;
+    niagara_state->nr_lds = heads;
+    niagara_state->total_sections = sections;
+    niagara_state->free_sections = sections;
+    niagara_state->section_size = section_size;
+
+    memset(&niagara_state->ldmap, '\xff', sizeof(niagara_state->ldmap));
+    for (i = 0; i < heads; i++) {
+        niagara_state->ldmap[i] = i;
+    }
+
+    printf("niagara initialized\n");
+    shmdt(niagara_state);
+    return 0;
+}

diff --git a/hw/cxl/vendor/skhynix/meson.build b/hw/cxl/vendor/skhynix/meson.build
new file mode 100644
index 0000000000..4e57db65f1
--- /dev/null
+++ b/hw/cxl/vendor/skhynix/meson.build
@@ -0,0 +1 @@
+system_ss.add(when: 'CONFIG_CXL_VENDOR', if_true: files('skhynix_niagara.c',))

diff --git a/hw/cxl/vendor/skhynix/skhynix_niagara.c b/hw/cxl/vendor/skhynix/skhynix_niagara.c
new file mode 100644
index 0000000000..1224978585
--- /dev/null
+++ b/hw/cxl/vendor/skhynix/skhynix_niagara.c
@@ -0,0 +1,521 @@
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ *
+ * Copyright (c) 2023 MemVerge Inc.
+ * Copyright (c) 2023 SK hynix Inc.
+ *
+ * Reference list:
+ * From www.computeexpresslink.org
+ * Compute Express Link (CXL) Specification revision 3.0 Version 1.0
+ */
+
+#include "qemu/osdep.h"
+#include "hw/irq.h"
+#include "migration/vmstate.h"
+#include "qapi/error.h"
+#include "hw/cxl/cxl.h"
+#include "hw/cxl/cxl_device.h"
+#include "hw/pci/pcie.h"
+#include "hw/pci/pcie_port.h"
+#include "hw/qdev-properties.h"
+
+#define MIN_MEMBLK_SIZE (1024 * 1024 * 128)
+
+/*
+ * The shared state cannot have 2 variable sized regions
+ * so we have to max out the ldmap.
+ */
+typedef struct Niagara_Shared_State Niagara_Shared_State;
+struct Niagara_Shared_State {
+    uint8_t nr_heads;
+    uint8_t nr_lds;
+    uint8_t ldmap[65536];
+    uint32_t total_sections;
+    uint32_t free_sections;
+    uint32_t section_size;
+    uint32_t sections[];
+};
+
+#define IMMEDIATE_CONFIG_CHANGE (1 << 1)
+#define IMMEDIATE_DATA_CHANGE (1 << 2)
+#define IMMEDIATE_POLICY_CHANGE (1 << 3)
+#define IMMEDIATE_LOG_CHANGE (1 << 4)
+#define SECURITY_STATE_CHANGE (1 << 5)
+#define BACKGROUND_OPERATION (1 << 6)
+
+enum {
+    NIAGARA = 0xC0
+        #define GET_SECTION_STATUS 0x0
+        #define SET_SECTION_ALLOC 0x1
+        #define SET_SECTION_RELEASE 0x2
+        #define SET_SECTION_SIZE 0x3
+        #define MOVE_DATA 0x4
+        #define GET_SECTION_MAP 0x5
+        #define CLEAR_SECTION 0x99
+};
+
+static CXLRetCode cmd_niagara_get_section_status(const struct cxl_cmd *cmd,
+                                                 uint8_t *payload_in,
+                                                 size_t len_in,
+                                                 uint8_t *payload_out,
+                                                 size_t *len_out,
+                                                 CXLCCI *cci)
+{
+    CXLType3Dev *ct3d = CXL_TYPE3(cci->d);
+    Niagara_Shared_State *niagara_state =
+        (Niagara_Shared_State *)ct3d->mhd_state;
+    struct {
+        uint32_t total_section_count;
+        uint32_t free_section_count;
+    } QEMU_PACKED *output = (void *)payload_out;
+
+    if (!ct3d->is_mhd) {
+        return CXL_MBOX_UNSUPPORTED;
+    }
+
+    output->total_section_count = niagara_state->total_sections;
+    output->free_section_count = niagara_state->free_sections;
+
+    *len_out = sizeof(*output);
+
+    return CXL_MBOX_SUCCESS;
+}
+
+#define MHD_SECTION_ALLOC_POLICY_ALL_OR_NOTHING 0
+#define MHD_SECTION_ALLOC_POLICY_BEST_EFFORT 1
+#define MHD_SECTION_ALLOC_POLICY_MANUAL 2
+static CXLRetCode cmd_niagara_set_section_alloc(const struct cxl_cmd *cmd,
+                                                uint8_t *payload_in,
+                                                size_t len_in,
+                                                uint8_t *payload_out,
+                                                size_t *len_out,
+                                                CXLCCI *cci)
+{
+    CXLType3Dev *ct3d = CXL_TYPE3(cci->d);
+    Niagara_Shared_State *niagara_state =
+        (Niagara_Shared_State *)ct3d->mhd_state;
+    struct {
+        uint8_t policy;
+        uint8_t reserved1[3];
+        uint32_t section_count;
+        uint8_t reserved2[4];
+        uint32_t extent_count;
+        struct {
+            uint32_t start_section_id;
+            uint32_t section_count;
+            uint8_t reserved[8];
+        } extents[];
+    } QEMU_PACKED *input = (void *)payload_in;
+    struct {
+        uint32_t section_count;
+        uint32_t extent_count;
+        struct {
+            uint32_t start_section_id;
+            uint32_t section_count;
+            uint8_t reserved[8];
+        } extents[];
+    } QEMU_PACKED *output = (void *)payload_out;
+
+    uint8_t policy = input->policy;
+    uint32_t count = input->section_count;
+    uint32_t i = 0;
+
+    if (count == 0 || count > niagara_state->total_sections) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    if (input->policy == MHD_SECTION_ALLOC_POLICY_MANUAL) {
+        /* iterate input extents and count total sections for validation */
+        uint32_t ttl_sec = 0;
+        for (i = 0; i < input->extent_count; i++) {
+            uint32_t start = input->extents[i].start_section_id;
+            uint32_t end = start + input->extents[i].section_count;
+            if ((start >= niagara_state->total_sections) ||
+                (end > niagara_state->total_sections)) {
+                return CXL_MBOX_INVALID_INPUT;
+            }
+            ttl_sec += input->extents[i].section_count;
+        }
+        if (ttl_sec != input->section_count) {
+            return CXL_MBOX_INVALID_INPUT;
+        }
+    }
+
+    uint32_t *section_ids = malloc(count * sizeof(uint32_t));
+    uint32_t *sections = &niagara_state->sections[0];
+    uint32_t allocated = 0;
+
+    if (input->policy & MHD_SECTION_ALLOC_POLICY_MANUAL) {
+        uint32_t cur_extent = 0;
+        for (cur_extent = 0; cur_extent < input->extent_count; cur_extent++) {
+            uint32_t start_section = input->extents[cur_extent].start_section_id;
+            uint32_t sec_count = input->extents[cur_extent].section_count;
+            uint32_t cur_section;
+            for (cur_section = start_section;
+                 cur_section < (start_section + sec_count);
+                 cur_section++) {
+                uint32_t *section = &sections[cur_section];
+                uint32_t old_value = __sync_fetch_and_or(section,
+                                                         (1 << ct3d->mhd_head));
+                /*
+                 * if the old value wasn't 0, this section was already claimed;
+                 * if it was owned by us already, just continue and don't
+                 * count it
+                 */
+                if (old_value & (1 << ct3d->mhd_head)) {
+                    continue;
+                } else if (old_value != 0) {
+                    __sync_fetch_and_and(section, ~(1 << ct3d->mhd_head));
+                    continue;
+                }
+                __sync_fetch_and_sub(&niagara_state->free_sections, 1);
+                section_ids[allocated++] = cur_section;
+            }
+        }
+    } else {
+        /* Iterate the section list and allocate free sections */
+        for (i = 0;
+             (i < niagara_state->total_sections) && (allocated != count);
+             i++) {
+            uint32_t old_value = __sync_fetch_and_or(&sections[i],
+                                                     (1 << ct3d->mhd_head));
+            /*
+             * if the old value wasn't 0, this section was already claimed;
+             * if it was owned by us already, just continue and don't count it
+             */
+            if (old_value & (1 << ct3d->mhd_head)) {
+                continue;
+            } else if (old_value != 0) {
+                __sync_fetch_and_and(&sections[i], ~(1 << ct3d->mhd_head));
+                continue;
+            }
+            __sync_fetch_and_sub(&niagara_state->free_sections, 1);
+            section_ids[allocated++] = i;
+        }
+    }
+
+    if ((policy == MHD_SECTION_ALLOC_POLICY_ALL_OR_NOTHING) &&
+        (allocated != count)) {
+        goto all_or_nothing_fail;
+    }
+
+    /* Build the output */
+    output->section_count = allocated;
+    uint32_t extents = 0;
+    uint32_t previous = 0;
+    for (i = 0; i < allocated; i++) {
+        if (i == 0) {
+            /* start the first extent */
+            output->extents[extents].start_section_id = section_ids[i];
+            output->extents[extents].section_count = 1;
+            extents++;
+        } else if (section_ids[i] == (previous + 1)) {
+            /* increment the current extent */
+            output->extents[extents - 1].section_count++;
+        } else {
+            /* start a new extent */
+            output->extents[extents].start_section_id = section_ids[i];
+            output->extents[extents].section_count = 1;
+            extents++;
+        }
+        previous = section_ids[i];
+    }
+    output->extent_count = extents;
+
+    *len_out = 8 + (16 * extents);
+
+    free(section_ids);
+    return CXL_MBOX_SUCCESS;
+all_or_nothing_fail:
+    /* free any successfully allocated sections */
+    for (i = 0; i < allocated; i++) {
+        __sync_fetch_and_and(&sections[section_ids[i]],
+                             ~(1 << ct3d->mhd_head));
+        __sync_fetch_and_add(&niagara_state->free_sections, 1);
+    }
+    free(section_ids);
+    return CXL_MBOX_INTERNAL_ERROR;
+}
+
+#define MHD_SECTION_RELEASE_POLICY_NONE 0
+#define MHD_SECTION_RELEASE_POLICY_CLEARING 1
+#define MHD_SECTION_RELEASE_POLICY_RANDOMIZING 2
+static CXLRetCode cmd_niagara_set_section_release(const struct cxl_cmd *cmd,
+                                                  uint8_t *payload_in,
+                                                  size_t len_in,
+                                                  uint8_t *payload_out,
+                                                  size_t *len_out,
+                                                  CXLCCI *cci)
+{
+    CXLType3Dev *ct3d = CXL_TYPE3(cci->d);
+    Niagara_Shared_State *niagara_state =
+        (Niagara_Shared_State *)ct3d->mhd_state;
+    struct {
+        uint32_t extent_count;
+        uint8_t policy;
+        uint8_t reserved[3];
+        struct {
+            uint32_t start_section_id;
+            uint32_t section_count;
+            uint8_t reserved[8];
+        } extents[];
+    } QEMU_PACKED *input = (void *)payload_in;
+    uint32_t i, j;
+
+    uint32_t *sections = &niagara_state->sections[0];
+    for (i = 0; i < input->extent_count; i++) {
+        uint32_t start = input->extents[i].start_section_id;
+        for (j = 0; j < input->extents[i].section_count; j++) {
+            uint32_t old_val = __sync_fetch_and_and(&sections[start + j],
+                                                    ~(1 << ct3d->mhd_head));
+            if (old_val & (1 << ct3d->mhd_head)) {
+                __sync_fetch_and_add(&niagara_state->free_sections, 1);
+            }
+
+            /* TODO: implement the clearing/randomizing release policies */
+        }
+    }
+
+    return CXL_MBOX_SUCCESS;
+}
+
+static CXLRetCode cmd_niagara_set_section_size(const struct cxl_cmd *cmd,
+                                               uint8_t *payload_in,
+                                               size_t len_in,
+                                               uint8_t *payload_out,
+                                               size_t *len_out,
+                                               CXLCCI *cci)
+{
+    CXLType3Dev *ct3d = CXL_TYPE3(cci->d);
+    Niagara_Shared_State *niagara_state =
+        (Niagara_Shared_State *)ct3d->mhd_state;
+    struct {
+        uint8_t section_unit;
+        uint8_t reserved[7];
+    } QEMU_PACKED *input = (void *)payload_in;
+    struct {
+        uint8_t section_unit;
+        uint8_t reserved[7];
+    } QEMU_PACKED *output = (void *)payload_out;
+
+    if (niagara_state->section_size == (1 << (input->section_unit - 1))) {
+        goto set_section_size_success;
+    }
+
+    /* Check that there are no actively alloc'd sections */
+    if (niagara_state->free_sections != niagara_state->total_sections) {
+        return CXL_MBOX_INTERNAL_ERROR;
+    }
+
+    uint32_t prev_section_size = niagara_state->section_size;
+    uint32_t prev_total_sections = niagara_state->total_sections;
+
+    niagara_state->section_size = (1 << (input->section_unit - 1));
+    niagara_state->total_sections =
+        (prev_section_size * prev_total_sections) / niagara_state->section_size;
+    niagara_state->free_sections = niagara_state->total_sections;
+
+set_section_size_success:
+    output->section_unit = input->section_unit;
+    return CXL_MBOX_SUCCESS;
+}
+
+#define MHD_MOVE_DATA_POLICY_CLEARING 0
+#define MHD_MOVE_DATA_POLICY_NONE 1
+static CXLRetCode cmd_niagara_move_data(const struct cxl_cmd *cmd,
+                                        uint8_t *payload_in,
+                                        size_t len_in,
+                                        uint8_t *payload_out,
+                                        size_t *len_out,
+                                        CXLCCI *cci)
+{
+    struct {
+        uint32_t extent_count;
+        uint8_t policy;
+        uint8_t reserved[3];
+        struct {
+            uint32_t source_section_id;
+            uint32_t source_data_offset;
+            uint32_t destination_section_id;
+            uint32_t destination_data_offset;
+            uint32_t data_length;
+            uint8_t reserved[4];
+        } extents;
+    } QEMU_PACKED *input = (void *)payload_in;
+
+    struct {
+        uint64_t task_id;
+        uint32_t bitset[];
+    } QEMU_PACKED *output = (void *)payload_out;
+
+    (void)input;
+    (void)output;
+
+    return CXL_MBOX_UNSUPPORTED;
+}
+
+static CXLRetCode cmd_niagara_clear_section(const struct cxl_cmd *cmd,
+                                            uint8_t *payload_in,
+                                            size_t len_in,
+                                            uint8_t *payload_out,
+                                            size_t *len_out,
+                                            CXLCCI *cci)
+{
+    return CXL_MBOX_UNSUPPORTED;
+}
+
+#define MHD_GSM_QUERY_FREE 0
+#define MHD_GSM_QUERY_ALLOCATED 1
+static CXLRetCode cmd_niagara_get_section_map(const struct cxl_cmd *cmd,
+                                              uint8_t *payload_in,
+                                              size_t len_in,
+                                              uint8_t *payload_out,
+                                              size_t *len_out,
+                                              CXLCCI *cci)
+{
+    CXLType3Dev *ct3d = CXL_TYPE3(cci->d);
+    Niagara_Shared_State *niagara_state =
+        (Niagara_Shared_State *)ct3d->mhd_state;
+    struct {
+        uint8_t query_type;
+        uint8_t reserved[7];
+    } QEMU_PACKED *input = (void *)payload_in;
+    struct {
+        uint32_t ttl_section_count;
+        uint32_t qry_section_count;
+        uint8_t bitset[];
+    } QEMU_PACKED *output = (void *)payload_out;
+
+    uint8_t query_type = input->query_type;
+    uint32_t i;
+
+    if ((query_type != MHD_GSM_QUERY_FREE) &&
+        (query_type != MHD_GSM_QUERY_ALLOCATED)) {
+        return CXL_MBOX_INVALID_INPUT;
+    }
+
+    output->ttl_section_count = niagara_state->total_sections;
+    output->qry_section_count = 0;
+    uint32_t bytes = output->ttl_section_count / 8;
+    if (output->ttl_section_count % 8) {
+        bytes += 1;
+    }
+    for (i = 0; i < bytes; i++) {
+        output->bitset[i] = 0x0;
+    }
+
+    /* Iterate the section list and check the bits */
+    uint32_t *sections = &niagara_state->sections[0];
+    for (i = 0; i < niagara_state->total_sections; i++) {
+        uint32_t section = sections[i];
+        if (((query_type == MHD_GSM_QUERY_FREE) && (!section)) ||
+            ((query_type == MHD_GSM_QUERY_ALLOCATED) &&
+             (section & (1 << ct3d->mhd_head)))) {
+            uint32_t byte = i / 8;
+            uint8_t bit = 1 << (i % 8);
+            output->bitset[byte] |= bit;
+            output->qry_section_count++;
+        }
+    }
+
+    *len_out = 8 + bytes;
+    return CXL_MBOX_SUCCESS;
+}
+
+static bool mhdsld_access_valid(CXLType3Dev *ct3d, uint64_t dpa_offset,
+                                unsigned int size)
+{
+    Niagara_Shared_State *niagara_state =
+        (Niagara_Shared_State *)ct3d->mhd_state;
+    if (ct3d->mhd_state) {
+        uint32_t section = dpa_offset / MIN_MEMBLK_SIZE;
+        if (!(niagara_state->sections[section] & (1 << ct3d->mhd_head))) {
+            return false;
+        }
+    }
+    return true;
+}
+
+static const struct cxl_cmd cxl_cmd_set_niagara[256][256] = {
+    [NIAGARA][GET_SECTION_STATUS] = { "GET_SECTION_STATUS",
+        cmd_niagara_get_section_status, 0, 0 },
+    [NIAGARA][SET_SECTION_ALLOC] = { "SET_SECTION_ALLOC",
+        cmd_niagara_set_section_alloc, ~0,
+        (IMMEDIATE_CONFIG_CHANGE | IMMEDIATE_DATA_CHANGE |
+         IMMEDIATE_POLICY_CHANGE | IMMEDIATE_LOG_CHANGE)
+    },
+    [NIAGARA][SET_SECTION_RELEASE] = { "SET_SECTION_RELEASE",
+        cmd_niagara_set_section_release, ~0,
+        (IMMEDIATE_CONFIG_CHANGE | IMMEDIATE_DATA_CHANGE |
+         IMMEDIATE_POLICY_CHANGE | IMMEDIATE_LOG_CHANGE)
+    },
+    [NIAGARA][SET_SECTION_SIZE] = { "SET_SECTION_SIZE",
+        cmd_niagara_set_section_size, 8,
+        (IMMEDIATE_CONFIG_CHANGE | IMMEDIATE_DATA_CHANGE |
+         IMMEDIATE_POLICY_CHANGE | IMMEDIATE_LOG_CHANGE)
+    },
+    [NIAGARA][MOVE_DATA] = { "MOVE_DATA",
+        cmd_niagara_move_data, ~0, IMMEDIATE_DATA_CHANGE },
+    [NIAGARA][GET_SECTION_MAP] = { "GET_SECTION_MAP",
+        cmd_niagara_get_section_map, 8, IMMEDIATE_DATA_CHANGE },
+    [NIAGARA][CLEAR_SECTION] = { "CLEAR_SECTION",
+        cmd_niagara_clear_section, 0, IMMEDIATE_DATA_CHANGE },
+};
+
+enum cxl_dev_type {
+    cxl_type3,
+};
+
+struct CXL_Niagara_State {
+    PCIDevice parent_obj;
+    PCIDevice *target;
+    enum cxl_dev_type type;
+    CXLCCI *cci;
+};
+
+struct CXL_NiagaraClass {
+    PCIDeviceClass parent_class;
+};
+
+#define TYPE_CXL_Niagara "cxl-skh-niagara"
+OBJECT_DECLARE_TYPE(CXL_Niagara_State, CXL_NiagaraClass, CXL_Niagara)
+
+static Property cxl_niagara_props[] = {
+    DEFINE_PROP_LINK("target", CXL_Niagara_State,
+                     target, TYPE_PCI_DEVICE, PCIDevice *),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void cxl_niagara_realize(DeviceState *d, Error **errp)
+{
+    CXL_Niagara_State *s = CXL_Niagara(d);
+
+    if (object_dynamic_cast(OBJECT(s->target), TYPE_CXL_TYPE3)) {
+        CXLType3Dev *ct3d = CXL_TYPE3(s->target);
+
+        if (!ct3d->is_mhd) {
+            error_setg(errp, "Niagara target must be a cxl-type3 mhd");
+            return;
+        }
+
+        s->type = cxl_type3;
+        s->cci = &ct3d->cci;
+
+        ct3d->mhd_access_valid = mhdsld_access_valid;
+        return;
+    }
+
+    error_setg(errp, "Unhandled target type for CXL Niagara MHSLD");
+}
+
+static void cxl_niagara_reset(DeviceState *d)
+{
+    CXL_Niagara_State *s = CXL_Niagara(d);
+
+    if (object_dynamic_cast(OBJECT(s->target), TYPE_CXL_TYPE3)) {
+        CXLType3Dev *ct3d = CXL_TYPE3(s->target);
+        cxl_add_cci_commands(&ct3d->cci, cxl_cmd_set_niagara, 512);
+        return;
+    }
+}
+
+static void cxl_niagara_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->realize = cxl_niagara_realize;
+    dc->reset = cxl_niagara_reset;
+    device_class_set_props(dc, cxl_niagara_props);
+}
+
+static const TypeInfo cxl_niagara_info = {
+    .name = TYPE_CXL_Niagara,
+    .parent = TYPE_CXL_TYPE3,
+    .class_size = sizeof(struct CXL_NiagaraClass),
+    .class_init = cxl_niagara_class_init,
+    .instance_size = sizeof(CXL_Niagara_State),
+    .interfaces = (InterfaceInfo[]) {
+        { INTERFACE_CXL_DEVICE },
+        { INTERFACE_PCIE_DEVICE },
+        {}
+    },
+};
+
+static void cxl_niagara_register_types(void)
+{
+    type_register_static(&cxl_niagara_info);
+}
+
+type_init(cxl_niagara_register_types)