From patchwork Thu Dec 22 04:24:31 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13079383
From: Ira Weiny
Date: Wed, 21 Dec 2022 20:24:31 -0800
Subject: [PATCH v2 1/8] qemu/bswap: Add const_le64()
Message-Id: <20221221-ira-cxl-events-2022-11-17-v2-1-2ce2ecc06219@intel.com>
To: Jonathan Cameron
Cc: Michael Tsirkin, Ben Widawsky, Ira Weiny, qemu-devel@nongnu.org,
    linux-cxl@vger.kernel.org, Peter Maydell
GCC requires constant versions of the cpu_to_le* calls, for example when
initializing static variables. Add a 64-bit version, const_le64().

Reviewed-by: Jonathan Cameron
Reviewed-by: Peter Maydell
Signed-off-by: Ira Weiny
---
Changes from RFC:
    Peter: Change order of the definitions, 64-32-16
---
 include/qemu/bswap.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/qemu/bswap.h b/include/qemu/bswap.h
index 346d05f2aab3..e1eca22f2548 100644
--- a/include/qemu/bswap.h
+++ b/include/qemu/bswap.h
@@ -187,6 +187,15 @@ CPU_CONVERT(le, 64, uint64_t)
  * used to initialize static variables.
  */
 #if HOST_BIG_ENDIAN
+# define const_le64(_x) \
+    ((((_x) & 0x00000000000000ffU) << 56) | \
+     (((_x) & 0x000000000000ff00U) << 40) | \
+     (((_x) & 0x0000000000ff0000U) << 24) | \
+     (((_x) & 0x00000000ff000000U) << 8) | \
+     (((_x) & 0x000000ff00000000U) >> 8) | \
+     (((_x) & 0x0000ff0000000000U) >> 24) | \
+     (((_x) & 0x00ff000000000000U) >> 40) | \
+     (((_x) & 0xff00000000000000U) >> 56))
 # define const_le32(_x) \
     ((((_x) & 0x000000ffU) << 24) | \
      (((_x) & 0x0000ff00U) << 8) | \
@@ -196,6 +205,7 @@ CPU_CONVERT(le, 64, uint64_t)
      ((((_x) & 0x00ff) << 8) | \
       (((_x) & 0xff00) >> 8))
 #else
+# define const_le64(_x) (_x)
 # define const_le32(_x) (_x)
 # define const_le16(_x) (_x)
 #endif

From patchwork Thu Dec 22 04:24:32 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13079385
From: Ira Weiny
Date: Wed, 21 Dec 2022 20:24:32 -0800
Subject: [PATCH v2 2/8] qemu/uuid: Add UUID static initializer
Message-Id: <20221221-ira-cxl-events-2022-11-17-v2-2-2ce2ecc06219@intel.com>
To: Jonathan Cameron
Cc: Michael Tsirkin, Ben Widawsky, Ira Weiny, qemu-devel@nongnu.org,
    linux-cxl@vger.kernel.org, Peter Maydell

UUIDs are defined as network-byte-order (big endian) fields, but no static
initializer was available for UUIDs in that standard format. Define a big
endian initializer for UUIDs.
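
For illustration (not part of this patch), a network-order UUID can then be
defined statically; the variable name below is only an example, and patch 3 of
this series uses the same pattern for cel_uuid:

    /* bytes are stored in big endian (network) order, per RFC 4122 */
    static const QemuUUID example_uuid = {
        .data = UUID(0x0da9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79,
                     0x96, 0xb1, 0x62, 0x3b, 0x3f, 0x17)
    };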
Reviewed-by: Jonathan Cameron
Signed-off-by: Ira Weiny
---
 include/qemu/uuid.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/qemu/uuid.h b/include/qemu/uuid.h
index 9925febfa54d..dc40ee1fc998 100644
--- a/include/qemu/uuid.h
+++ b/include/qemu/uuid.h
@@ -61,6 +61,18 @@ typedef struct {
     (clock_seq_hi_and_reserved), (clock_seq_low), (node0), (node1), (node2),\
     (node3), (node4), (node5) }
 
+/* Normal (network byte order) UUID */
+#define UUID(time_low, time_mid, time_hi_and_version, \
+             clock_seq_hi_and_reserved, clock_seq_low, node0, node1, node2, \
+             node3, node4, node5) \
+  { ((time_low) >> 24) & 0xff, ((time_low) >> 16) & 0xff, \
+    ((time_low) >> 8) & 0xff, (time_low) & 0xff, \
+    ((time_mid) >> 8) & 0xff, (time_mid) & 0xff, \
+    ((time_hi_and_version) >> 8) & 0xff, (time_hi_and_version) & 0xff, \
+    (clock_seq_hi_and_reserved), (clock_seq_low), \
+    (node0), (node1), (node2), (node3), (node4), (node5) \
+  }
+
 #define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-" \
                  "%02hhx%02hhx-%02hhx%02hhx-" \
                  "%02hhx%02hhx-" \

From patchwork Thu Dec 22 04:24:33 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13079378
From: Ira Weiny
Date: Wed, 21 Dec 2022 20:24:33 -0800
Subject: [PATCH
 v2 3/8] hw/cxl/mailbox: Use new UUID network order define for cel_uuid
Message-Id: <20221221-ira-cxl-events-2022-11-17-v2-3-2ce2ecc06219@intel.com>
To: Jonathan Cameron
Cc: Michael Tsirkin, Ben Widawsky, Ira Weiny, qemu-devel@nongnu.org,
    linux-cxl@vger.kernel.org, Peter Maydell

The cel_uuid was previously generated programmatically because there was no
static initializer for network-order UUIDs. Use the new network order
initializer for cel_uuid. Adjust cxl_initialize_mailbox() to return void,
since it can no longer fail. Update the specification reference.

Signed-off-by: Ira Weiny
---
Changes from RFC:
    New patch.
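
To make the equivalence explicit, a hypothetical sanity check (not part of
this patch, and only a sketch since cel_uuid is file-local) could compare the
static value against the string the old code parsed at run time:

    QemuUUID parsed;

    /* the static initializer must yield the same bytes as the old parse */
    g_assert(qemu_uuid_parse("0da9c0b5-bf41-4b78-8f79-96b1623b3f17", &parsed) == 0);
    g_assert(qemu_uuid_is_equal(&parsed, &cel_uuid));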
--- hw/cxl/cxl-device-utils.c | 4 ++-- hw/cxl/cxl-mailbox-utils.c | 14 +++++++------- include/hw/cxl/cxl_device.h | 2 +- 3 files changed, 10 insertions(+), 10 deletions(-) diff --git a/hw/cxl/cxl-device-utils.c b/hw/cxl/cxl-device-utils.c index 21845dbfd050..34697064714e 100644 --- a/hw/cxl/cxl-device-utils.c +++ b/hw/cxl/cxl-device-utils.c @@ -267,7 +267,7 @@ void cxl_device_register_init_common(CXLDeviceState *cxl_dstate) cxl_device_cap_init(cxl_dstate, MEMORY_DEVICE, 0x4000); memdev_reg_init_common(cxl_dstate); - assert(cxl_initialize_mailbox(cxl_dstate, false) == 0); + cxl_initialize_mailbox(cxl_dstate, false); } void cxl_device_register_init_swcci(CXLDeviceState *cxl_dstate) @@ -289,5 +289,5 @@ void cxl_device_register_init_swcci(CXLDeviceState *cxl_dstate) cxl_device_cap_init(cxl_dstate, MEMORY_DEVICE, 0x4000); memdev_reg_init_common(cxl_dstate); - assert(cxl_initialize_mailbox(cxl_dstate, true) == 0); + cxl_initialize_mailbox(cxl_dstate, true); } diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c index c1183614b9a4..157c01255ee3 100644 --- a/hw/cxl/cxl-mailbox-utils.c +++ b/hw/cxl/cxl-mailbox-utils.c @@ -321,7 +321,11 @@ static ret_code cmd_timestamp_set(struct cxl_cmd *cmd, return CXL_MBOX_SUCCESS; } -static QemuUUID cel_uuid; +/* CXL 3.0 8.2.9.5.2.1 Command Effects Log (CEL) */ +static QemuUUID cel_uuid = { + .data = UUID(0x0da9c0b5, 0xbf41, 0x4b78, 0x8f, 0x79, + 0x96, 0xb1, 0x62, 0x3b, 0x3f, 0x17) +}; /* 8.2.9.4.1 */ static ret_code cmd_logs_get_supported(struct cxl_cmd *cmd, @@ -684,16 +688,14 @@ void cxl_process_mailbox(CXLDeviceState *cxl_dstate) DOORBELL, 0); } -int cxl_initialize_mailbox(CXLDeviceState *cxl_dstate, bool switch_cci) +void cxl_initialize_mailbox(CXLDeviceState *cxl_dstate, bool switch_cci) { - /* CXL 2.0: Table 169 Get Supported Logs Log Entry */ - const char *cel_uuidstr = "0da9c0b5-bf41-4b78-8f79-96b1623b3f17"; - if (!switch_cci) { cxl_dstate->cxl_cmd_set = cxl_cmd_set; } else { cxl_dstate->cxl_cmd_set = cxl_cmd_set_sw; } + for (int set = 0; set < 256; set++) { for (int cmd = 0; cmd < 256; cmd++) { if (cxl_dstate->cxl_cmd_set[set][cmd].handler) { @@ -707,6 +709,4 @@ int cxl_initialize_mailbox(CXLDeviceState *cxl_dstate, bool switch_cci) } } } - - return qemu_uuid_parse(cel_uuidstr, &cel_uuid); } diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h index 1b366b739c62..3be2e37b3e4c 100644 --- a/include/hw/cxl/cxl_device.h +++ b/include/hw/cxl/cxl_device.h @@ -238,7 +238,7 @@ CXL_DEVICE_CAPABILITY_HEADER_REGISTER(MEMORY_DEVICE, CXL_DEVICE_CAP_HDR1_OFFSET + CXL_DEVICE_CAP_REG_SIZE * 2) -int cxl_initialize_mailbox(CXLDeviceState *cxl_dstate, bool switch_cci); +void cxl_initialize_mailbox(CXLDeviceState *cxl_dstate, bool switch_cci); void cxl_process_mailbox(CXLDeviceState *cxl_dstate); #define cxl_device_cap_init(dstate, reg, cap_id) \ From patchwork Thu Dec 22 04:24:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13079384 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B6A60C001B2 for ; Thu, 22 Dec 2022 04:26:30 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 
1p8D8q-0007qD-3j; Wed, 21 Dec 2022 23:25:12 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8j-0007oH-Ab for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:06 -0500 Received: from mga03.intel.com ([134.134.136.65]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8h-00015K-9X for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:05 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1671683103; x=1703219103; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=JQbNYS9g3BUlUil65num8zsltfxj8rTJzgpHrifcVG4=; b=gaduflrfuyd0x9B30NVg2112q5qOJXE+7NIWCmVdps1WsocgjAUmaJKM QPEJNIid2oDSwMGPqh31qx1jKDq/n/Kc/LtXSrPibBy2x0ji3MGi/Hgvz 5yp627cQTIiJht+gRcQvAotf0FIF7gIz6IEvALOKz0loM7aaM23PqL6Cq nfo1y8+jTvVsA7pu1+5npTLXPfva9alDOBCBfv4Hv1wlZmmqNu/o2R9B+ FxzHMKG1UG8920Ev+ZJQ+uwF9oFAxKkF7HYp12sPNPgpV2EimYWUIF4el WzCXaHB1i+J5CTjPwIlLQE6RCMQqs6IgcXSJNKh4rOgg6dokH8hhmn9sZ w==; X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="321957606" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="321957606" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:24:58 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="601733199" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="601733199" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.20.211]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:24:57 -0800 From: Ira Weiny Date: Wed, 21 Dec 2022 20:24:34 -0800 Subject: [PATCH v2 4/8] hw/cxl/events: Add event status register MIME-Version: 1.0 Message-Id: <20221221-ira-cxl-events-2022-11-17-v2-4-2ce2ecc06219@intel.com> References: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> In-Reply-To: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> To: Jonathan Cameron Cc: Michael Tsirkin , Ben Widawsky , Ira Weiny , qemu-devel@nongnu.org, linux-cxl@vger.kernel.org, Peter Maydell X-Mailer: b4 0.11.0-dev-141d4 X-Developer-Signature: v=1; a=ed25519-sha256; t=1671683093; l=8331; i=ira.weiny@intel.com; s=20221211; h=from:subject:message-id; bh=JQbNYS9g3BUlUil65num8zsltfxj8rTJzgpHrifcVG4=; b=+hHixGtes5MMCUbZNe07rsPMoFwG/vdah48eKi5inBtaHumT1zZ6NfRo6iWobpA/1v6gKl4wjv+M oZ8ODPiiB1abajL7169Jp2dTCRVhU/BunmIpDy2siwLlEs8XJ392 X-Developer-Key: i=ira.weiny@intel.com; a=ed25519; pk=noldbkG+Wp1qXRrrkfY1QJpDf7QsOEthbOT7vm0PqsE= Received-SPF: pass client-ip=134.134.136.65; envelope-from=ira.weiny@intel.com; helo=mga03.intel.com X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org The device status register block was defined. However, there were no individual registers nor any data wired up. 
Define the event status register [CXL 3.0; 8.2.8.3.1] as part of the device status register block. Wire up the register and initialize the event status for each log. To support CXL 3.0 the version of the device status register block needs to be 2. Change the macro to allow for setting the version. Signed-off-by: Ira Weiny --- Changes from RFC: New patch to cover this register which was not being used before. --- hw/cxl/cxl-device-utils.c | 50 +++++++++++++++++++++++++++++++++++++-------- include/hw/cxl/cxl_device.h | 23 ++++++++++++++++++--- include/hw/cxl/cxl_events.h | 28 +++++++++++++++++++++++++ 3 files changed, 90 insertions(+), 11 deletions(-) diff --git a/hw/cxl/cxl-device-utils.c b/hw/cxl/cxl-device-utils.c index 34697064714e..7f29d40be04a 100644 --- a/hw/cxl/cxl-device-utils.c +++ b/hw/cxl/cxl-device-utils.c @@ -41,7 +41,20 @@ static uint64_t caps_reg_read(void *opaque, hwaddr offset, unsigned size) static uint64_t dev_reg_read(void *opaque, hwaddr offset, unsigned size) { - return 0; + CXLDeviceState *cxl_dstate = opaque; + + switch (size) { + case 1: + return cxl_dstate->dev_reg_state[offset]; + case 2: + return cxl_dstate->dev_reg_state16[offset / size]; + case 4: + return cxl_dstate->dev_reg_state32[offset / size]; + case 8: + return cxl_dstate->dev_reg_state64[offset / size]; + default: + g_assert_not_reached(); + } } static uint64_t mailbox_reg_read(void *opaque, hwaddr offset, unsigned size) @@ -236,7 +249,28 @@ void cxl_device_register_block_init(Object *obj, CXLDeviceState *cxl_dstate) &cxl_dstate->memory_device); } -static void device_reg_init_common(CXLDeviceState *cxl_dstate) { } +void cxl_event_set_status(CXLDeviceState *cxl_dstate, + enum cxl_event_log_type log_type, + bool available) +{ + if (available) { + cxl_dstate->event_status |= (1 << log_type); + } else { + cxl_dstate->event_status &= ~(1 << log_type); + } + + ARRAY_FIELD_DP64(cxl_dstate->dev_reg_state64, CXL_DEV_EVENT_STATUS, + EVENT_STATUS, cxl_dstate->event_status); +} + +static void device_reg_init_common(CXLDeviceState *cxl_dstate) +{ + enum cxl_event_log_type log; + + for (log = 0; log < CXL_EVENT_TYPE_MAX; log++) { + cxl_event_set_status(cxl_dstate, log, false); + } +} static void mailbox_reg_init_common(CXLDeviceState *cxl_dstate) { @@ -258,13 +292,13 @@ void cxl_device_register_init_common(CXLDeviceState *cxl_dstate) ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_VERSION, 1); ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_COUNT, cap_count); - cxl_device_cap_init(cxl_dstate, DEVICE_STATUS, 1); + cxl_device_cap_init(cxl_dstate, DEVICE_STATUS, 1, 2); device_reg_init_common(cxl_dstate); - cxl_device_cap_init(cxl_dstate, MAILBOX, 2); + cxl_device_cap_init(cxl_dstate, MAILBOX, 2, 1); mailbox_reg_init_common(cxl_dstate); - cxl_device_cap_init(cxl_dstate, MEMORY_DEVICE, 0x4000); + cxl_device_cap_init(cxl_dstate, MEMORY_DEVICE, 0x4000, 1); memdev_reg_init_common(cxl_dstate); cxl_initialize_mailbox(cxl_dstate, false); @@ -280,13 +314,13 @@ void cxl_device_register_init_swcci(CXLDeviceState *cxl_dstate) ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_VERSION, 1); ARRAY_FIELD_DP64(cap_hdrs, CXL_DEV_CAP_ARRAY, CAP_COUNT, cap_count); - cxl_device_cap_init(cxl_dstate, DEVICE_STATUS, 1); + cxl_device_cap_init(cxl_dstate, DEVICE_STATUS, 1, 2); device_reg_init_common(cxl_dstate); - cxl_device_cap_init(cxl_dstate, MAILBOX, 2); + cxl_device_cap_init(cxl_dstate, MAILBOX, 2, 1); mailbox_reg_init_common(cxl_dstate); - cxl_device_cap_init(cxl_dstate, MEMORY_DEVICE, 0x4000); + cxl_device_cap_init(cxl_dstate, 
MEMORY_DEVICE, 0x4000, 1); memdev_reg_init_common(cxl_dstate); cxl_initialize_mailbox(cxl_dstate, true); diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h index 3be2e37b3e4c..7180fc225e29 100644 --- a/include/hw/cxl/cxl_device.h +++ b/include/hw/cxl/cxl_device.h @@ -147,7 +147,16 @@ typedef struct cxl_device_state { MemoryRegion cpmu_registers[CXL_NUM_CPMU_INSTANCES]; /* mmio for device capabilities array - 8.2.8.2 */ - MemoryRegion device; + struct { + MemoryRegion device; + union { + uint8_t dev_reg_state[CXL_DEVICE_STATUS_REGISTERS_LENGTH]; + uint16_t dev_reg_state16[CXL_DEVICE_STATUS_REGISTERS_LENGTH / 2]; + uint32_t dev_reg_state32[CXL_DEVICE_STATUS_REGISTERS_LENGTH / 4]; + uint64_t dev_reg_state64[CXL_DEVICE_STATUS_REGISTERS_LENGTH / 8]; + }; + uint64_t event_status; + }; MemoryRegion memory_device; struct { MemoryRegion caps; @@ -197,6 +206,10 @@ void cxl_device_register_block_init(Object *obj, CXLDeviceState *dev); void cxl_device_register_init_common(CXLDeviceState *dev); void cxl_device_register_init_swcci(CXLDeviceState *dev); +void cxl_event_set_status(CXLDeviceState *cxl_dstate, + enum cxl_event_log_type log_type, + bool available); + /* * CXL 2.0 - 8.2.8.1 including errata F4 * Documented as a 128 bit register, but 64 bit accesses and the second @@ -241,7 +254,7 @@ CXL_DEVICE_CAPABILITY_HEADER_REGISTER(MEMORY_DEVICE, void cxl_initialize_mailbox(CXLDeviceState *cxl_dstate, bool switch_cci); void cxl_process_mailbox(CXLDeviceState *cxl_dstate); -#define cxl_device_cap_init(dstate, reg, cap_id) \ +#define cxl_device_cap_init(dstate, reg, cap_id, ver) \ do { \ uint32_t *cap_hdrs = dstate->caps_reg_state32; \ int which = R_CXL_DEV_##reg##_CAP_HDR0; \ @@ -249,7 +262,7 @@ void cxl_process_mailbox(CXLDeviceState *cxl_dstate); FIELD_DP32(cap_hdrs[which], CXL_DEV_##reg##_CAP_HDR0, \ CAP_ID, cap_id); \ cap_hdrs[which] = FIELD_DP32( \ - cap_hdrs[which], CXL_DEV_##reg##_CAP_HDR0, CAP_VERSION, 1); \ + cap_hdrs[which], CXL_DEV_##reg##_CAP_HDR0, CAP_VERSION, ver); \ cap_hdrs[which + 1] = \ FIELD_DP32(cap_hdrs[which + 1], CXL_DEV_##reg##_CAP_HDR1, \ CAP_OFFSET, CXL_##reg##_REGISTERS_OFFSET); \ @@ -258,6 +271,10 @@ void cxl_process_mailbox(CXLDeviceState *cxl_dstate); CAP_LENGTH, CXL_##reg##_REGISTERS_LENGTH); \ } while (0) +/* CXL 3.0 8.2.8.3.1 Event Status Register */ +REG64(CXL_DEV_EVENT_STATUS, 0) + FIELD(CXL_DEV_EVENT_STATUS, EVENT_STATUS, 0, 32) + /* CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register */ REG32(CXL_DEV_MAILBOX_CAP, 0) FIELD(CXL_DEV_MAILBOX_CAP, PAYLOAD_SIZE, 0, 5) diff --git a/include/hw/cxl/cxl_events.h b/include/hw/cxl/cxl_events.h new file mode 100644 index 000000000000..7e0647ffb0e3 --- /dev/null +++ b/include/hw/cxl/cxl_events.h @@ -0,0 +1,28 @@ +/* + * QEMU CXL Events + * + * Copyright (c) 2022 Intel + * + * This work is licensed under the terms of the GNU GPL, version 2. See the + * COPYING file in the top-level directory. + */ + +#ifndef CXL_EVENTS_H +#define CXL_EVENTS_H + +/* + * CXL rev 3.0 section 8.2.9.2.2; Table 8-49 + * + * Define these as the bit position for the event status register for ease of + * setting the status. 
+ */ +enum cxl_event_log_type { + CXL_EVENT_TYPE_INFO = 0, + CXL_EVENT_TYPE_WARN = 1, + CXL_EVENT_TYPE_FAIL = 2, + CXL_EVENT_TYPE_FATAL = 3, + CXL_EVENT_TYPE_DYNAMIC_CAP = 4, + CXL_EVENT_TYPE_MAX +}; + +#endif /* CXL_EVENTS_H */ From patchwork Thu Dec 22 04:24:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13079379 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 68E36C4332F for ; Thu, 22 Dec 2022 04:25:53 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1p8D8q-0007qE-4E; Wed, 21 Dec 2022 23:25:12 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8k-0007oY-MT for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:06 -0500 Received: from mga03.intel.com ([134.134.136.65]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8i-00015r-Ay for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:06 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1671683104; x=1703219104; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=J+TBMExW+oxWMea1/9bWs33uubwZIzBsbSx6C9l1aHU=; b=Uf8RbmZ2Arm0pErHQyRmKFOtkYGxpjsQKHriQWaTMRxJW9m7Fmn0Pp3m nrT4A5qYAdhfZVDD6+7m/L1hGFbINIl7f+KMQL8jB3fo1FXTcl4OpmWZh rOTgOK+AiBRbRBJYxpGp2L2sp2ze+MVFUm8A98j3m2MGyfIdGT+K6lZJS 2RKpet+c0Dz5lU+qpjQJ1SEHZfDKLzUNUbIa5rTtYd+XXFVAjuUzsphH2 Er6IwCPC69SC1ULNHIys3XMxL5U1IqTUCtrVbEJV55Lnj3LWlM6cTvWuV vMgUtEhAoSIcvpEU8JQTmt84TLdiDuhNZmCbxyRXGcrTycflZ/UGDy024 g==; X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="321957614" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="321957614" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:24:59 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="601733202" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="601733202" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.20.211]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:24:58 -0800 From: Ira Weiny Date: Wed, 21 Dec 2022 20:24:35 -0800 Subject: [PATCH v2 5/8] hw/cxl/events: Wire up get/clear event mailbox commands MIME-Version: 1.0 Message-Id: <20221221-ira-cxl-events-2022-11-17-v2-5-2ce2ecc06219@intel.com> References: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> In-Reply-To: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> To: Jonathan Cameron Cc: Michael Tsirkin , Ben Widawsky , Ira Weiny , qemu-devel@nongnu.org, linux-cxl@vger.kernel.org, Peter Maydell X-Mailer: b4 0.11.0-dev-141d4 X-Developer-Signature: v=1; a=ed25519-sha256; t=1671683093; l=14482; i=ira.weiny@intel.com; s=20221211; h=from:subject:message-id; bh=J+TBMExW+oxWMea1/9bWs33uubwZIzBsbSx6C9l1aHU=; b=oUxfmw0Xo8L7xDSlDpYSKMcuzjXZTJ0w8csY7HaeHvV9a4cUoVG6lv+eX/zBJtpEguqRKGzUn33c yFCRBdMMD0JjDBSL2o0d6vmKfQPmO9+PKe1GnKZOAGLgTC/KmNJo X-Developer-Key: 
i=ira.weiny@intel.com; a=ed25519; pk=noldbkG+Wp1qXRrrkfY1QJpDf7QsOEthbOT7vm0PqsE= Received-SPF: pass client-ip=134.134.136.65; envelope-from=ira.weiny@intel.com; helo=mga03.intel.com X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org CXL testing is benefited from an artificial event log injection mechanism. Add an event log infrastructure to insert, get, and clear events from the various logs available on a device. Replace the stubbed out CXL Get/Clear Event mailbox commands with commands that operate on the new infrastructure. Signed-off-by: Ira Weiny --- Change from RFC: Process multiple records per Get/Set per the spec Rework all the calls to be within events.c Add locking around the event logs to ensure that the log integrity is maintained --- hw/cxl/cxl-events.c | 221 ++++++++++++++++++++++++++++++++++++++++++++ hw/cxl/cxl-mailbox-utils.c | 40 +++++++- hw/cxl/meson.build | 1 + hw/mem/cxl_type3.c | 1 + include/hw/cxl/cxl_device.h | 28 ++++++ include/hw/cxl/cxl_events.h | 55 +++++++++++ 6 files changed, 344 insertions(+), 2 deletions(-) diff --git a/hw/cxl/cxl-events.c b/hw/cxl/cxl-events.c new file mode 100644 index 000000000000..f40c9372704e --- /dev/null +++ b/hw/cxl/cxl-events.c @@ -0,0 +1,221 @@ +/* + * CXL Event processing + * + * Copyright(C) 2022 Intel Corporation. + * + * This work is licensed under the terms of the GNU GPL, version 2. See the + * COPYING file in the top-level directory. 
+ */ + +#include + +#include "qemu/osdep.h" +#include "qemu/bswap.h" +#include "qemu/typedefs.h" +#include "qemu/error-report.h" +#include "hw/cxl/cxl.h" +#include "hw/cxl/cxl_events.h" + +/* Artificial limit on the number of events a log can hold */ +#define CXL_TEST_EVENT_OVERFLOW 8 + +static void reset_overflow(struct cxl_event_log *log) +{ + log->overflow_err_count = 0; + log->first_overflow_timestamp = 0; + log->last_overflow_timestamp = 0; +} + +void cxl_event_init(CXLDeviceState *cxlds) +{ + struct cxl_event_log *log; + int i; + + for (i = 0; i < CXL_EVENT_TYPE_MAX; i++) { + log = &cxlds->event_logs[i]; + log->next_handle = 1; + log->overflow_err_count = 0; + log->first_overflow_timestamp = 0; + log->last_overflow_timestamp = 0; + qemu_mutex_init(&log->lock); + QSIMPLEQ_INIT(&log->events); + } +} + +static CXLEvent *cxl_event_get_head(struct cxl_event_log *log) +{ + return QSIMPLEQ_FIRST(&log->events); +} + +static CXLEvent *cxl_event_get_next(CXLEvent *entry) +{ + return QSIMPLEQ_NEXT(entry, node); +} + +static int cxl_event_count(struct cxl_event_log *log) +{ + CXLEvent *event; + int rc = 0; + + QSIMPLEQ_FOREACH(event, &log->events, node) { + rc++; + } + + return rc; +} + +static bool cxl_event_empty(struct cxl_event_log *log) +{ + return QSIMPLEQ_EMPTY(&log->events); +} + +static void cxl_event_delete_head(CXLDeviceState *cxlds, + enum cxl_event_log_type log_type, + struct cxl_event_log *log) +{ + CXLEvent *entry = cxl_event_get_head(log); + + reset_overflow(log); + QSIMPLEQ_REMOVE_HEAD(&log->events, node); + if (cxl_event_empty(log)) { + cxl_event_set_status(cxlds, log_type, false); + } + g_free(entry); +} + +/* + * return if an interrupt should be generated as a result of inserting this + * event. + */ +bool cxl_event_insert(CXLDeviceState *cxlds, + enum cxl_event_log_type log_type, + struct cxl_event_record_raw *event) +{ + uint64_t time = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL); + struct cxl_event_log *log; + CXLEvent *entry; + + if (log_type >= CXL_EVENT_TYPE_MAX) { + return false; + } + + log = &cxlds->event_logs[log_type]; + + QEMU_LOCK_GUARD(&log->lock); + + if (cxl_event_count(log) >= CXL_TEST_EVENT_OVERFLOW) { + if (log->overflow_err_count == 0) { + log->first_overflow_timestamp = time; + } + log->overflow_err_count++; + log->last_overflow_timestamp = time; + return false; + } + + entry = g_new0(CXLEvent, 1); + if (!entry) { + error_report("Failed to allocate memory for event log entry"); + return false; + } + + memcpy(&entry->data, event, sizeof(*event)); + + entry->data.hdr.handle = cpu_to_le16(log->next_handle); + log->next_handle++; + /* 0 handle is never valid */ + if (log->next_handle == 0) { + log->next_handle++; + } + entry->data.hdr.timestamp = cpu_to_le64(time); + + QSIMPLEQ_INSERT_TAIL(&log->events, entry, node); + cxl_event_set_status(cxlds, log_type, true); + + /* Count went from 0 to 1 */ + return cxl_event_count(log) == 1; +} + +ret_code cxl_event_get_records(CXLDeviceState *cxlds, + struct cxl_get_event_payload *pl, + uint8_t log_type, int max_recs, + uint16_t *len) +{ + struct cxl_event_log *log; + CXLEvent *entry; + uint16_t nr; + + if (log_type >= CXL_EVENT_TYPE_MAX) { + return CXL_MBOX_INVALID_INPUT; + } + + log = &cxlds->event_logs[log_type]; + + QEMU_LOCK_GUARD(&log->lock); + + entry = cxl_event_get_head(log); + for (nr = 0; entry && nr < max_recs; nr++) { + memcpy(&pl->records[nr], &entry->data, CXL_EVENT_RECORD_SIZE); + entry = cxl_event_get_next(entry); + } + + if (!cxl_event_empty(log)) { + pl->flags |= CXL_GET_EVENT_FLAG_MORE_RECORDS; + } + + if 
(log->overflow_err_count) { + pl->flags |= CXL_GET_EVENT_FLAG_OVERFLOW; + pl->overflow_err_count = cpu_to_le16(log->overflow_err_count); + pl->first_overflow_timestamp = cpu_to_le64(log->first_overflow_timestamp); + pl->last_overflow_timestamp = cpu_to_le64(log->last_overflow_timestamp); + } + + pl->record_count = cpu_to_le16(nr); + *len = CXL_EVENT_PAYLOAD_HDR_SIZE + (CXL_EVENT_RECORD_SIZE * nr); + return CXL_MBOX_SUCCESS; +} + +ret_code cxl_event_clear_records(CXLDeviceState *cxlds, + struct cxl_clear_event_payload *pl) +{ + struct cxl_event_log *log; + uint8_t log_type; + CXLEvent *entry; + int nr; + + log_type = pl->event_log; + + if (log_type >= CXL_EVENT_TYPE_MAX) { + return CXL_MBOX_INVALID_INPUT; + } + + log = &cxlds->event_logs[log_type]; + + QEMU_LOCK_GUARD(&log->lock); + /* + * Must itterate the queue twice. + * "The device shall verify the event record handles specified in the input + * payload are in temporal order. If the device detects an older event + * record that will not be cleared when Clear Event Records is executed, + * the device shall return the Invalid Handle return code and shall not + * clear any of the specified event records." + * -- CXL 3.0 8.2.9.2.3 + */ + entry = cxl_event_get_head(log); + for (nr = 0; entry && nr < pl->nr_recs; nr++) { + uint16_t handle = pl->handle[nr]; + + /* NOTE: Both handles are little endian. */ + if (handle == 0 || entry->data.hdr.handle != handle) { + return CXL_MBOX_INVALID_INPUT; + } + entry = cxl_event_get_next(entry); + } + + entry = cxl_event_get_head(log); + for (nr = 0; entry && nr < pl->nr_recs; nr++) { + cxl_event_delete_head(cxlds, log_type, log); + entry = cxl_event_get_head(log); + } + + return CXL_MBOX_SUCCESS; +} diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c index 157c01255ee3..97cf6db8582d 100644 --- a/hw/cxl/cxl-mailbox-utils.c +++ b/hw/cxl/cxl-mailbox-utils.c @@ -9,6 +9,7 @@ #include "qemu/osdep.h" #include "hw/cxl/cxl.h" +#include "hw/cxl/cxl_events.h" #include "hw/pci/pci.h" #include "hw/pci-bridge/cxl_upstream_port.h" #include "qemu/cutils.h" @@ -89,8 +90,6 @@ enum { return CXL_MBOX_SUCCESS; \ } -DEFINE_MAILBOX_HANDLER_ZEROED(events_get_records, 0x20); -DEFINE_MAILBOX_HANDLER_NOP(events_clear_records); DEFINE_MAILBOX_HANDLER_ZEROED(events_get_interrupt_policy, 4); DEFINE_MAILBOX_HANDLER_NOP(events_set_interrupt_policy); @@ -252,6 +251,43 @@ static ret_code cmd_infostat_bg_op_sts(struct cxl_cmd *cmd, return CXL_MBOX_SUCCESS; } +static ret_code cmd_events_get_records(struct cxl_cmd *cmd, + CXLDeviceState *cxlds, + uint16_t *len) +{ + struct cxl_get_event_payload *pl; + uint8_t log_type; + int max_recs; + + if (cmd->in < sizeof(log_type)) { + return CXL_MBOX_INVALID_INPUT; + } + + log_type = *((uint8_t *)cmd->payload); + + pl = (struct cxl_get_event_payload *)cmd->payload; + memset(pl, 0, sizeof(*pl)); + + max_recs = (cxlds->payload_size - CXL_EVENT_PAYLOAD_HDR_SIZE) / + CXL_EVENT_RECORD_SIZE; + if (max_recs > 0xFFFF) { + max_recs = 0xFFFF; + } + + return cxl_event_get_records(cxlds, pl, log_type, max_recs, len); +} + +static ret_code cmd_events_clear_records(struct cxl_cmd *cmd, + CXLDeviceState *cxlds, + uint16_t *len) +{ + struct cxl_clear_event_payload *pl; + + pl = (struct cxl_clear_event_payload *)cmd->payload; + *len = 0; + return cxl_event_clear_records(cxlds, pl); +} + /* 8.2.9.2.1 */ static ret_code cmd_firmware_update_get_info(struct cxl_cmd *cmd, CXLDeviceState *cxl_dstate, diff --git a/hw/cxl/meson.build b/hw/cxl/meson.build index 6e370a32fae9..053058034a53 100644 --- 
a/hw/cxl/meson.build +++ b/hw/cxl/meson.build @@ -7,6 +7,7 @@ softmmu_ss.add(when: 'CONFIG_CXL', 'cxl-cdat.c', 'cxl-cpmu.c', 'switch-mailbox-cci.c', + 'cxl-events.c', ), if_false: files( 'cxl-host-stubs.c', diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c index 21e866dcaf52..e74ef237dfa9 100644 --- a/hw/mem/cxl_type3.c +++ b/hw/mem/cxl_type3.c @@ -697,6 +697,7 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp) /* CXL RAS uses AER correct INTERNAL erorrs - so enable by default */ pci_set_long(pci_dev->config + 0x200 + PCI_ERR_COR_MASK, PCI_ERR_COR_MASK_DEFAULT & ~PCI_ERR_COR_INTERNAL); + cxl_event_init(&ct3d->cxl_dstate); return; err_free_spdm_socket: diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h index 7180fc225e29..d7b43e74c05c 100644 --- a/include/hw/cxl/cxl_device.h +++ b/include/hw/cxl/cxl_device.h @@ -11,6 +11,7 @@ #define CXL_DEVICE_H #include "hw/register.h" +#include "hw/cxl/cxl_events.h" #include "hw/cxl/cxl_cpmu.h" /* @@ -142,6 +143,20 @@ struct cxl_cmd { uint8_t *payload; }; +typedef struct CXLEvent { + struct cxl_event_record_raw data; + QSIMPLEQ_ENTRY(CXLEvent) node; +} CXLEvent; + +struct cxl_event_log { + uint16_t next_handle; + uint16_t overflow_err_count; + uint64_t first_overflow_timestamp; + uint64_t last_overflow_timestamp; + QemuMutex lock; + QSIMPLEQ_HEAD(, CXLEvent) events; +}; + typedef struct cxl_device_state { MemoryRegion device_registers; @@ -197,6 +212,8 @@ typedef struct cxl_device_state { struct cxl_cmd (*cxl_cmd_set)[256]; /* Move me later */ CPMUState cpmu[CXL_NUM_CPMU_INSTANCES]; + + struct cxl_event_log event_logs[CXL_EVENT_TYPE_MAX]; } CXLDeviceState; /* Initialize the register block for a device */ @@ -381,4 +398,15 @@ struct CSWMBCCIDev { CXLDeviceState cxl_dstate; }; +void cxl_event_init(CXLDeviceState *cxlds); +bool cxl_event_insert(CXLDeviceState *cxlds, + enum cxl_event_log_type log_type, + struct cxl_event_record_raw *event); +ret_code cxl_event_get_records(CXLDeviceState *cxlds, + struct cxl_get_event_payload *pl, + uint8_t log_type, int max_recs, + uint16_t *len); +ret_code cxl_event_clear_records(CXLDeviceState *cxlds, + struct cxl_clear_event_payload *pl); + #endif diff --git a/include/hw/cxl/cxl_events.h b/include/hw/cxl/cxl_events.h index 7e0647ffb0e3..1798c4502cb3 100644 --- a/include/hw/cxl/cxl_events.h +++ b/include/hw/cxl/cxl_events.h @@ -10,6 +10,8 @@ #ifndef CXL_EVENTS_H #define CXL_EVENTS_H +#include "qemu/uuid.h" + /* * CXL rev 3.0 section 8.2.9.2.2; Table 8-49 * @@ -25,4 +27,57 @@ enum cxl_event_log_type { CXL_EVENT_TYPE_MAX }; +/* + * Common Event Record Format + * CXL rev 3.0 section 8.2.9.2.1; Table 8-42 + */ +#define CXL_EVENT_REC_HDR_RES_LEN 0xf +struct cxl_event_record_hdr { + QemuUUID id; + uint8_t length; + uint8_t flags[3]; + uint16_t handle; + uint16_t related_handle; + uint64_t timestamp; + uint8_t maint_op_class; + uint8_t reserved[CXL_EVENT_REC_HDR_RES_LEN]; +} QEMU_PACKED; + +#define CXL_EVENT_RECORD_DATA_LENGTH 0x50 +struct cxl_event_record_raw { + struct cxl_event_record_hdr hdr; + uint8_t data[CXL_EVENT_RECORD_DATA_LENGTH]; +} QEMU_PACKED; +#define CXL_EVENT_RECORD_SIZE (sizeof(struct cxl_event_record_raw)) + +/* + * Get Event Records output payload + * CXL rev 3.0 section 8.2.9.2.2; Table 8-50 + */ +#define CXL_GET_EVENT_FLAG_OVERFLOW BIT(0) +#define CXL_GET_EVENT_FLAG_MORE_RECORDS BIT(1) +struct cxl_get_event_payload { + uint8_t flags; + uint8_t reserved1; + uint16_t overflow_err_count; + uint64_t first_overflow_timestamp; + uint64_t last_overflow_timestamp; + uint16_t 
record_count; + uint8_t reserved2[0xa]; + struct cxl_event_record_raw records[]; +} QEMU_PACKED; +#define CXL_EVENT_PAYLOAD_HDR_SIZE (sizeof(struct cxl_get_event_payload)) + +/* + * Clear Event Records input payload + * CXL rev 3.0 section 8.2.9.2.3; Table 8-51 + */ +struct cxl_clear_event_payload { + uint8_t event_log; /* enum cxl_event_log_type */ + uint8_t clear_flags; + uint8_t nr_recs; + uint8_t reserved[3]; + uint16_t handle[]; +}; + #endif /* CXL_EVENTS_H */ From patchwork Thu Dec 22 04:24:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13079382 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 443C3C4332F for ; Thu, 22 Dec 2022 04:26:24 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1p8D8o-0007px-2s; Wed, 21 Dec 2022 23:25:10 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8l-0007p4-LL for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:07 -0500 Received: from mga03.intel.com ([134.134.136.65]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8j-00015O-Ea for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:07 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1671683105; x=1703219105; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=FoxUVJp1TZY5GeNX0Mx4U+Cj2wm0CMQmx0Oi0AOfnQQ=; b=Pfh0/m0Tot7JWTFz0RuPgWLF9Wr9nzWHjyShQwbSTJLrn+UXpkD6iemK z+z8+hCo+FOdkn9+KLbbuHxFMQ5z7Bi7+4F9LHeK2xfpt9DArVqt8ynp6 7NqDbbbmrEWk+G+MhWhhS15D539ercGZP3L2CJdwJB2o35W0jFmZcKsLg Sr0v017+CsROsoYzxsWVnZFa6yrCDBdu3KXq5P5j6LWwa6u5zkfIQuB5Y gpFv9UzvFjwQUXuHZICGSrjhpL1506nr00hP9YYfIp7Of9N28wjm7+ilz 9g0lX/8ux319+lu3ba+jm2bJTUgy1gEeVMrWCLcF7wQJPp1t4Ev0Vys5S g==; X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="321957623" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="321957623" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:25:00 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="601733206" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="601733206" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.20.211]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:24:59 -0800 From: Ira Weiny Date: Wed, 21 Dec 2022 20:24:36 -0800 Subject: [PATCH v2 6/8] hw/cxl/events: Add event interrupt support MIME-Version: 1.0 Message-Id: <20221221-ira-cxl-events-2022-11-17-v2-6-2ce2ecc06219@intel.com> References: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> In-Reply-To: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> To: Jonathan Cameron Cc: Michael Tsirkin , Ben Widawsky , Ira Weiny , qemu-devel@nongnu.org, linux-cxl@vger.kernel.org, Peter Maydell X-Mailer: b4 0.11.0-dev-141d4 X-Developer-Signature: v=1; a=ed25519-sha256; t=1671683093; l=11685; i=ira.weiny@intel.com; s=20221211; 
h=from:subject:message-id; bh=FoxUVJp1TZY5GeNX0Mx4U+Cj2wm0CMQmx0Oi0AOfnQQ=; b=6unsNVcmilOmHOgNMeRne8h3qpXtQKTUssIAz2mQyKSkr2Bb/l8+7c62aigLlGZlUr/N4W6DDO9Y /kkqNderDmERpgP2lU78aQQj/eC3gQH2d5Fyv9JUqw521/DYtPl8 X-Developer-Key: i=ira.weiny@intel.com; a=ed25519; pk=noldbkG+Wp1qXRrrkfY1QJpDf7QsOEthbOT7vm0PqsE= Received-SPF: pass client-ip=134.134.136.65; envelope-from=ira.weiny@intel.com; helo=mga03.intel.com X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Replace the stubbed out CXL Get/Set Event interrupt policy mailbox commands. Enable those commands to control interrupts for each of the event log types. Skip the standard input mailbox length on the Set command due to DCD being optional. Perform the checks separately. Signed-off-by: Ira Weiny --- NOTE As the spec changes it may be wise to change the standard mailbox processing to allow for various input length checks. But I'm not going try and tackle that in this series. Changes from RFC: Squashed mailbox and irq patches together to support event interrupts as a whole Remove redundant event_vector array --- hw/cxl/cxl-events.c | 33 +++++++++++++- hw/cxl/cxl-mailbox-utils.c | 106 +++++++++++++++++++++++++++++++++++--------- hw/mem/cxl_type3.c | 4 +- include/hw/cxl/cxl_device.h | 5 ++- include/hw/cxl/cxl_events.h | 23 ++++++++++ 5 files changed, 146 insertions(+), 25 deletions(-) diff --git a/hw/cxl/cxl-events.c b/hw/cxl/cxl-events.c index f40c9372704e..53ec8447236e 100644 --- a/hw/cxl/cxl-events.c +++ b/hw/cxl/cxl-events.c @@ -13,6 +13,8 @@ #include "qemu/bswap.h" #include "qemu/typedefs.h" #include "qemu/error-report.h" +#include "hw/pci/msi.h" +#include "hw/pci/msix.h" #include "hw/cxl/cxl.h" #include "hw/cxl/cxl_events.h" @@ -26,7 +28,7 @@ static void reset_overflow(struct cxl_event_log *log) log->last_overflow_timestamp = 0; } -void cxl_event_init(CXLDeviceState *cxlds) +void cxl_event_init(CXLDeviceState *cxlds, int start_msg_num) { struct cxl_event_log *log; int i; @@ -37,9 +39,16 @@ void cxl_event_init(CXLDeviceState *cxlds) log->overflow_err_count = 0; log->first_overflow_timestamp = 0; log->last_overflow_timestamp = 0; + log->irq_enabled = false; + log->irq_vec = start_msg_num++; qemu_mutex_init(&log->lock); QSIMPLEQ_INIT(&log->events); } + + /* Override -- Dynamic Capacity uses the same vector as info */ + cxlds->event_logs[CXL_EVENT_TYPE_DYNAMIC_CAP].irq_vec = + cxlds->event_logs[CXL_EVENT_TYPE_INFO].irq_vec; + } static CXLEvent *cxl_event_get_head(struct cxl_event_log *log) @@ -219,3 +228,25 @@ ret_code cxl_event_clear_records(CXLDeviceState *cxlds, return CXL_MBOX_SUCCESS; } + +void cxl_event_irq_assert(CXLType3Dev *ct3d) +{ + CXLDeviceState *cxlds = &ct3d->cxl_dstate; + PCIDevice *pdev = &ct3d->parent_obj; + int i; + + for (i = 0; i < CXL_EVENT_TYPE_MAX; i++) { + struct cxl_event_log *log = &cxlds->event_logs[i]; + + if (!log->irq_enabled || cxl_event_empty(log)) { + continue; + } + + /* Notifies interrupt, legacy IRQ is not supported */ + if (msix_enabled(pdev)) 
{ + msix_notify(pdev, log->irq_vec); + } else if (msi_enabled(pdev)) { + msi_notify(pdev, log->irq_vec); + } + } +} diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c index 97cf6db8582d..ff94191a956a 100644 --- a/hw/cxl/cxl-mailbox-utils.c +++ b/hw/cxl/cxl-mailbox-utils.c @@ -74,25 +74,6 @@ enum { #define IDENTIFY_SWITCH_DEVICE 0x0 }; -#define DEFINE_MAILBOX_HANDLER_ZEROED(name, size) \ - uint16_t __zero##name = size; \ - static ret_code cmd_##name(struct cxl_cmd *cmd, \ - CXLDeviceState *cxl_dstate, uint16_t *len) \ - { \ - *len = __zero##name; \ - memset(cmd->payload, 0, *len); \ - return CXL_MBOX_SUCCESS; \ - } -#define DEFINE_MAILBOX_HANDLER_NOP(name) \ - static ret_code cmd_##name(struct cxl_cmd *cmd, \ - CXLDeviceState *cxl_dstate, uint16_t *len) \ - { \ - return CXL_MBOX_SUCCESS; \ - } - -DEFINE_MAILBOX_HANDLER_ZEROED(events_get_interrupt_policy, 4); -DEFINE_MAILBOX_HANDLER_NOP(events_set_interrupt_policy); - static void find_cxl_usp(PCIBus *b, PCIDevice *d, void *opaque) { PCIDevice **found_dev = opaque; @@ -288,6 +269,88 @@ static ret_code cmd_events_clear_records(struct cxl_cmd *cmd, return cxl_event_clear_records(cxlds, pl); } +static ret_code cmd_events_get_interrupt_policy(struct cxl_cmd *cmd, + CXLDeviceState *cxlds, + uint16_t *len) +{ + struct cxl_event_interrupt_policy *policy; + struct cxl_event_log *log; + + policy = (struct cxl_event_interrupt_policy *)cmd->payload; + memset(policy, 0, sizeof(*policy)); + + log = &cxlds->event_logs[CXL_EVENT_TYPE_INFO]; + if (log->irq_enabled) { + policy->info_settings = CXL_EVENT_INT_SETTING(log->irq_vec); + } + + log = &cxlds->event_logs[CXL_EVENT_TYPE_WARN]; + if (log->irq_enabled) { + policy->warn_settings = CXL_EVENT_INT_SETTING(log->irq_vec); + } + + log = &cxlds->event_logs[CXL_EVENT_TYPE_FAIL]; + if (log->irq_enabled) { + policy->failure_settings = CXL_EVENT_INT_SETTING(log->irq_vec); + } + + log = &cxlds->event_logs[CXL_EVENT_TYPE_FATAL]; + if (log->irq_enabled) { + policy->fatal_settings = CXL_EVENT_INT_SETTING(log->irq_vec); + } + + log = &cxlds->event_logs[CXL_EVENT_TYPE_DYNAMIC_CAP]; + if (log->irq_enabled) { + /* Dynamic Capacity borrows the same vector as info */ + policy->dyn_cap_settings = CXL_INT_MSI_MSIX; + } + + *len = sizeof(*policy); + return CXL_MBOX_SUCCESS; +} + +static ret_code cmd_events_set_interrupt_policy(struct cxl_cmd *cmd, + CXLDeviceState *cxlds, + uint16_t *len) +{ + struct cxl_event_interrupt_policy *policy; + struct cxl_event_log *log; + + if (*len < CXL_EVENT_INT_SETTING_MIN_LEN) { + return CXL_MBOX_INVALID_PAYLOAD_LENGTH; + } + + policy = (struct cxl_event_interrupt_policy *)cmd->payload; + + log = &cxlds->event_logs[CXL_EVENT_TYPE_INFO]; + log->irq_enabled = (policy->info_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX; + + log = &cxlds->event_logs[CXL_EVENT_TYPE_WARN]; + log->irq_enabled = (policy->warn_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX; + + log = &cxlds->event_logs[CXL_EVENT_TYPE_FAIL]; + log->irq_enabled = (policy->failure_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX; + + log = &cxlds->event_logs[CXL_EVENT_TYPE_FATAL]; + log->irq_enabled = (policy->fatal_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX; + + /* DCD is optional */ + if (*len < sizeof(*policy)) { + return CXL_MBOX_SUCCESS; + } + + log = &cxlds->event_logs[CXL_EVENT_TYPE_DYNAMIC_CAP]; + log->irq_enabled = (policy->dyn_cap_settings & CXL_EVENT_INT_MODE_MASK) == + CXL_INT_MSI_MSIX; + + *len = sizeof(*policy); + return CXL_MBOX_SUCCESS; +} + /* 8.2.9.2.1 */ 
static ret_code cmd_firmware_update_get_info(struct cxl_cmd *cmd, CXLDeviceState *cxl_dstate, @@ -644,9 +707,10 @@ static struct cxl_cmd cxl_cmd_set[256][256] = { [EVENTS][CLEAR_RECORDS] = { "EVENTS_CLEAR_RECORDS", cmd_events_clear_records, ~0, IMMEDIATE_LOG_CHANGE }, [EVENTS][GET_INTERRUPT_POLICY] = { "EVENTS_GET_INTERRUPT_POLICY", - cmd_events_get_interrupt_policy, 0, 0 }, + cmd_events_get_interrupt_policy, 0, 0 }, [EVENTS][SET_INTERRUPT_POLICY] = { "EVENTS_SET_INTERRUPT_POLICY", - cmd_events_set_interrupt_policy, 4, IMMEDIATE_CONFIG_CHANGE }, + cmd_events_set_interrupt_policy, + ~0, IMMEDIATE_CONFIG_CHANGE }, [FIRMWARE_UPDATE][GET_INFO] = { "FIRMWARE_UPDATE_GET_INFO", cmd_firmware_update_get_info, 0, 0 }, [TIMESTAMP][GET] = { "TIMESTAMP_GET", cmd_timestamp_get, 0, 0 }, diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c index e74ef237dfa9..a43949cab120 100644 --- a/hw/mem/cxl_type3.c +++ b/hw/mem/cxl_type3.c @@ -626,7 +626,7 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp) ComponentRegisters *regs = &cxl_cstate->crb; MemoryRegion *mr = ®s->component_registers; uint8_t *pci_conf = pci_dev->config; - unsigned short msix_num = 4; + unsigned short msix_num = 8; int i, rc; if (!cxl_setup_memory(ct3d, errp)) { @@ -697,7 +697,7 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp) /* CXL RAS uses AER correct INTERNAL erorrs - so enable by default */ pci_set_long(pci_dev->config + 0x200 + PCI_ERR_COR_MASK, PCI_ERR_COR_MASK_DEFAULT & ~PCI_ERR_COR_INTERNAL); - cxl_event_init(&ct3d->cxl_dstate); + cxl_event_init(&ct3d->cxl_dstate, 4); return; err_free_spdm_socket: diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h index d7b43e74c05c..586377607c57 100644 --- a/include/hw/cxl/cxl_device.h +++ b/include/hw/cxl/cxl_device.h @@ -153,6 +153,8 @@ struct cxl_event_log { uint16_t overflow_err_count; uint64_t first_overflow_timestamp; uint64_t last_overflow_timestamp; + bool irq_enabled; + int irq_vec; QemuMutex lock; QSIMPLEQ_HEAD(, CXLEvent) events; }; @@ -398,7 +400,7 @@ struct CSWMBCCIDev { CXLDeviceState cxl_dstate; }; -void cxl_event_init(CXLDeviceState *cxlds); +void cxl_event_init(CXLDeviceState *cxlds, int start_msg_num); bool cxl_event_insert(CXLDeviceState *cxlds, enum cxl_event_log_type log_type, struct cxl_event_record_raw *event); @@ -408,5 +410,6 @@ ret_code cxl_event_get_records(CXLDeviceState *cxlds, uint16_t *len); ret_code cxl_event_clear_records(CXLDeviceState *cxlds, struct cxl_clear_event_payload *pl); +void cxl_event_irq_assert(CXLType3Dev *ct3d); #endif diff --git a/include/hw/cxl/cxl_events.h b/include/hw/cxl/cxl_events.h index 1798c4502cb3..2df40720320a 100644 --- a/include/hw/cxl/cxl_events.h +++ b/include/hw/cxl/cxl_events.h @@ -80,4 +80,27 @@ struct cxl_clear_event_payload { uint16_t handle[]; }; +/** + * Event Interrupt Policy + * + * CXL rev 3.0 section 8.2.9.2.4; Table 8-52 + */ +enum cxl_event_int_mode { + CXL_INT_NONE = 0x00, + CXL_INT_MSI_MSIX = 0x01, + CXL_INT_FW = 0x02, + CXL_INT_RES = 0x03, +}; +#define CXL_EVENT_INT_MODE_MASK 0x3 +#define CXL_EVENT_INT_SETTING(vector) ((((uint8_t)vector & 0xf) << 4) | CXL_INT_MSI_MSIX) +struct cxl_event_interrupt_policy { + uint8_t info_settings; + uint8_t warn_settings; + uint8_t failure_settings; + uint8_t fatal_settings; + uint8_t dyn_cap_settings; +} QEMU_PACKED; +/* DCD is optional but other fields are not */ +#define CXL_EVENT_INT_SETTING_MIN_LEN 4 + #endif /* CXL_EVENTS_H */ From patchwork Thu Dec 22 04:24:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13079381 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 20C7FC4167B for ; Thu, 22 Dec 2022 04:25:58 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1p8D8t-0007qn-J3; Wed, 21 Dec 2022 23:25:15 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8l-0007oz-DO for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:07 -0500 Received: from mga03.intel.com ([134.134.136.65]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8j-00015K-L8 for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:07 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1671683105; x=1703219105; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=UQnornUfT/vaDjeF5Z6CteaB75obyZRj7zvuhg79FR0=; b=gKOtYzYRFhGin/9P84W6WPaC463I6TSFwQDxTRLukBFte6sjny4fyJlE t80BQS3PV8kJccioBh5lionLY9TT2K1JntcDWW6wg7KCfiCbukFK5DZdX ffUhOmc5sSSp0aRl6Aiv4BT3NbcK8eLQsIYCDPrYGq8Pf76xCaTMk13Xh d2/SMdlEMBvayU9iGIXNJ24BlKWLL1KCpQgcPCsWuhCoGEYc5wTUrZNHx fdrDKiUc7cgiv6j2oD/dbKwBA/1ta/aRCLtEufOuBR2rYnnO+AYN8UHLO Hj5vXJW0UriOvwyByvy/2s0VE30dD277XAJebo3XDRerwVbykPrF6cbr2 A==; X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="321957633" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="321957633" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:25:01 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="601733212" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="601733212" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.20.211]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:25:00 -0800 From: Ira Weiny Date: Wed, 21 Dec 2022 20:24:37 -0800 Subject: [PATCH v2 7/8] bswap: Add the ability to store to an unaligned 24 bit field MIME-Version: 1.0 Message-Id: <20221221-ira-cxl-events-2022-11-17-v2-7-2ce2ecc06219@intel.com> References: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> In-Reply-To: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> To: Jonathan Cameron Cc: Michael Tsirkin , Ben Widawsky , Ira Weiny , qemu-devel@nongnu.org, linux-cxl@vger.kernel.org, Peter Maydell X-Mailer: b4 0.11.0-dev-141d4 X-Developer-Signature: v=1; a=ed25519-sha256; t=1671683093; l=2387; i=ira.weiny@intel.com; s=20221211; h=from:subject:message-id; bh=UQnornUfT/vaDjeF5Z6CteaB75obyZRj7zvuhg79FR0=; b=bW3809h1Txum7nDYbsKD/2LjWN3Nt7V86syYEfxpIeK26nBUMDZsF5aVinGGfsLwWD7NkRKvmPB4 rfqk7tx1DINr9J5m9o1egvcmFluIEEzXsehen96H+4IcxbeqWlqJ X-Developer-Key: i=ira.weiny@intel.com; a=ed25519; pk=noldbkG+Wp1qXRrrkfY1QJpDf7QsOEthbOT7vm0PqsE= Received-SPF: pass client-ip=134.134.136.65; envelope-from=ira.weiny@intel.com; helo=mga03.intel.com X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, 
DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org CXL has 24 bit unaligned fields which need to be stored to. CXL is specified as little endian. Define st24_le_p() and the supporting functions to store such a field from a 32 bit host native value. The use of b, w, l, q as the size specifier is limiting. So "24" was used for the size part of the function name. Signed-off-by: Ira Weiny --- include/qemu/bswap.h | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/include/qemu/bswap.h b/include/qemu/bswap.h index e1eca22f2548..8af4d4a75eb6 100644 --- a/include/qemu/bswap.h +++ b/include/qemu/bswap.h @@ -25,6 +25,13 @@ static inline uint16_t bswap16(uint16_t x) return bswap_16(x); } +static inline uint32_t bswap24(uint32_t x) +{ + return (((x & 0x000000ffU) << 16) | + ((x & 0x0000ff00U) << 0) | + ((x & 0x00ff0000U) >> 16)); +} + static inline uint32_t bswap32(uint32_t x) { return bswap_32(x); @@ -43,6 +50,13 @@ static inline uint16_t bswap16(uint16_t x) ((x & 0xff00) >> 8)); } +static inline uint32_t bswap24(uint32_t x) +{ + return (((x & 0x000000ffU) << 16) | + ((x & 0x0000ff00U) << 0) | + ((x & 0x00ff0000U) >> 16)); +} + static inline uint32_t bswap32(uint32_t x) { return (((x & 0x000000ffU) << 24) | @@ -72,6 +86,11 @@ static inline void bswap16s(uint16_t *s) *s = bswap16(*s); } +static inline void bswap24s(uint32_t *s) +{ + *s = bswap24(*s); +} + static inline void bswap32s(uint32_t *s) { *s = bswap32(*s); @@ -233,6 +252,7 @@ CPU_CONVERT(le, 64, uint64_t) * size is: * b: 8 bits * w: 16 bits + * 24: 24 bits * l: 32 bits * q: 64 bits * @@ -305,6 +325,11 @@ static inline void stw_he_p(void *ptr, uint16_t v) __builtin_memcpy(ptr, &v, sizeof(v)); } +static inline void st24_he_p(void *ptr, uint32_t v) +{ + __builtin_memcpy(ptr, &v, 3); +} + static inline int ldl_he_p(const void *ptr) { int32_t r; @@ -354,6 +379,11 @@ static inline void stw_le_p(void *ptr, uint16_t v) stw_he_p(ptr, le_bswap(v, 16)); } +static inline void st24_le_p(void *ptr, uint32_t v) +{ + st24_he_p(ptr, le_bswap(v, 24)); +} + static inline void stl_le_p(void *ptr, uint32_t v) { stl_he_p(ptr, le_bswap(v, 32)); From patchwork Thu Dec 22 04:24:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ira Weiny X-Patchwork-Id: 13079386 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CF4B1C4332F for ; Thu, 22 Dec 2022 04:27:23 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1p8D8u-0007r5-0u; Wed, 21 Dec 2022 23:25:16 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8n-0007pp-4R for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:09 -0500 Received: from 
mga03.intel.com ([134.134.136.65]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1p8D8l-00015r-0w for qemu-devel@nongnu.org; Wed, 21 Dec 2022 23:25:08 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1671683107; x=1703219107; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=2kLIfGZNTdvSDrBZBTH0L0KDp8KIlJqQNOVEwB4c/0k=; b=bNzFThql8oXXxqwjZczgjhgzpl7tmTMND3GLfmupUh+kXuEJw6hxSO+M ANZ8vk7l5TxitjF+/hTKYYlFeI1e+3/e6ZEXpIArnemh4+/oqBGkDlod4 0f72Qo1zFzr22vd4l88i5Xn1YqUwjN+4tAJbBH0s0Sj/Oy3nCzvW0W3GY WTSJoV3a9smDHNW1z7xH7vskYN2XkF74GWNuNE6VIrxT4v/ZEOu1lFIkP 3KQaBxZFUH8F+lKHtNgjWbNFlEEXUIBdue8poeyZA1NuRrcxVKV01lukx Y/93w3cPnzyoOWPqwHBSyMVSs0IKKLCxDKjohLAKJKa96w6lfqN+YH1Pa Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="321957641" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="321957641" Received: from orsmga003.jf.intel.com ([10.7.209.27]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:25:01 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10568"; a="601733219" X-IronPort-AV: E=Sophos;i="5.96,264,1665471600"; d="scan'208";a="601733219" Received: from iweiny-mobl.amr.corp.intel.com (HELO localhost) ([10.212.20.211]) by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 21 Dec 2022 20:25:01 -0800 From: Ira Weiny Date: Wed, 21 Dec 2022 20:24:38 -0800 Subject: [PATCH v2 8/8] hw/cxl/events: Add in inject general media event MIME-Version: 1.0 Message-Id: <20221221-ira-cxl-events-2022-11-17-v2-8-2ce2ecc06219@intel.com> References: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> In-Reply-To: <20221221-ira-cxl-events-2022-11-17-v2-0-2ce2ecc06219@intel.com> To: Jonathan Cameron Cc: Michael Tsirkin , Ben Widawsky , Ira Weiny , qemu-devel@nongnu.org, linux-cxl@vger.kernel.org, Peter Maydell X-Mailer: b4 0.11.0-dev-141d4 X-Developer-Signature: v=1; a=ed25519-sha256; t=1671683093; l=7127; i=ira.weiny@intel.com; s=20221211; h=from:subject:message-id; bh=2kLIfGZNTdvSDrBZBTH0L0KDp8KIlJqQNOVEwB4c/0k=; b=eHwpxyXnDoarV4OWdpzvxjJrSNjkV8WUI3YsTwjOzVtp60DZmq82RhXrn8gn+i9PqLRnqbszA6Dj sYxIfbsIDDjRb/glNFUSxWSAeQ4cDBWLDulnxFoZC55hnaKt5u6y X-Developer-Key: i=ira.weiny@intel.com; a=ed25519; pk=noldbkG+Wp1qXRrrkfY1QJpDf7QsOEthbOT7vm0PqsE= Received-SPF: pass client-ip=134.134.136.65; envelope-from=ira.weiny@intel.com; helo=mga03.intel.com X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org To facilitate testing provide a QMP command to inject a general media event. The event can be added to the log specified. 
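For illustration, a possible invocation over the QMP wire is sketched below. The QOM path and every field value are placeholders only; "log" selects the target event log by its numeric event-log-type value, and channel, rank, and device can be passed as -1 to leave the corresponding field unset in the record:

  { "execute": "cxl-inject-gen-media-event",
    "arguments": { "path": "/machine/peripheral/cxl-mem0",
                   "log": 1, "flags": 1, "physaddr": 4096,
                   "descriptor": 0, "type": 0, "transactiontype": 0,
                   "channel": 2, "rank": -1, "device": 3,
                   "componentid": "mem0" } }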
Signed-off-by: Ira Weiny --- Changes from RFC: Add all fields for this event irq happens automatically when log transitions from 0 to 1 --- hw/mem/cxl_type3.c | 93 +++++++++++++++++++++++++++++++++++++++++++++ hw/mem/cxl_type3_stubs.c | 8 ++++ include/hw/cxl/cxl_events.h | 20 ++++++++++ qapi/cxl.json | 25 ++++++++++++ 4 files changed, 146 insertions(+) diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c index a43949cab120..bedd09e500ba 100644 --- a/hw/mem/cxl_type3.c +++ b/hw/mem/cxl_type3.c @@ -916,6 +916,99 @@ static CXLPoisonList *get_poison_list(CXLType3Dev *ct3d) return &ct3d->poison_list; } +static void cxl_assign_event_header(struct cxl_event_record_hdr *hdr, + QemuUUID *uuid, uint8_t flags, + uint8_t length) +{ + hdr->flags[0] = flags; + hdr->length = length; + memcpy(&hdr->id, uuid, sizeof(hdr->id)); + hdr->timestamp = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL); +} + +QemuUUID gen_media_uuid = { + .data = UUID(0xfbcd0a77, 0xc260, 0x417f, + 0x85, 0xa9, 0x08, 0x8b, 0x16, 0x21, 0xeb, 0xa6), +}; + +#define CXL_GMER_VALID_CHANNEL BIT(0) +#define CXL_GMER_VALID_RANK BIT(1) +#define CXL_GMER_VALID_DEVICE BIT(2) +#define CXL_GMER_VALID_COMPONENT BIT(3) + +/* + * For channel, rank, and device; any value inside of the fields valid range + * will flag that field to be valid. IE pass -1 to mark the field invalid. + * + * Component ID is device specific. Define this as a string. + */ +void qmp_cxl_inject_gen_media_event(const char *path, uint8_t log, + uint8_t flags, uint64_t physaddr, + uint8_t descriptor, uint8_t type, + uint8_t transaction_type, + int16_t channel, int16_t rank, + int32_t device, + const char *component_id, + Error **errp) +{ + Object *obj = object_resolve_path(path, NULL); + struct cxl_event_gen_media gem; + struct cxl_event_record_hdr *hdr = &gem.hdr; + CXLDeviceState *cxlds; + CXLType3Dev *ct3d; + uint16_t valid_flags = 0; + + if (log >= CXL_EVENT_TYPE_MAX) { + error_setg(errp, "Invalid log type: %d", log); + return; + } + if (!obj) { + error_setg(errp, "Unable to resolve path"); + return; + } + if (!object_dynamic_cast(obj, TYPE_CXL_TYPE3)) { + error_setg(errp, "Path does not point to a CXL type 3 device"); + } + ct3d = CXL_TYPE3(obj); + cxlds = &ct3d->cxl_dstate; + + memset(&gem, 0, sizeof(gem)); + cxl_assign_event_header(hdr, &gen_media_uuid, flags, + sizeof(struct cxl_event_gen_media)); + + gem.phys_addr = physaddr; + gem.descriptor = descriptor; + gem.type = type; + gem.transaction_type = transaction_type; + + if (0 <= channel && channel <= 0xFF) { + gem.channel = channel; + valid_flags |= CXL_GMER_VALID_CHANNEL; + } + + if (0 <= rank && rank <= 0xFF) { + gem.rank = rank; + valid_flags |= CXL_GMER_VALID_RANK; + } + + if (0 <= device && device <= 0xFFFFFF) { + st24_le_p(gem.device, device); + valid_flags |= CXL_GMER_VALID_DEVICE; + } + + if (component_id && strcmp(component_id, "")) { + strncpy((char *)gem.component_id, component_id, + sizeof(gem.component_id) - 1); + valid_flags |= CXL_GMER_VALID_COMPONENT; + } + + stw_le_p(gem.validity_flags, valid_flags); + + if (cxl_event_insert(cxlds, log, (struct cxl_event_record_raw *)&gem)) { + cxl_event_irq_assert(ct3d); + } +} + void qmp_cxl_inject_poison(const char *path, uint64_t start, uint64_t length, Error **errp) { diff --git a/hw/mem/cxl_type3_stubs.c b/hw/mem/cxl_type3_stubs.c index f2c9f48f4010..62f04d487031 100644 --- a/hw/mem/cxl_type3_stubs.c +++ b/hw/mem/cxl_type3_stubs.c @@ -2,6 +2,14 @@ #include "qemu/osdep.h" #include "qapi/qapi-commands-cxl.h" +void qmp_cxl_inject_gen_media_event(const char *path, uint8_t log, + 
uint8_t flags, uint64_t physaddr, + uint8_t descriptor, uint8_t type, + uint8_t transaction_type, + int16_t channel, int16_t rank, + int32_t device, + const char *component_id, + Error **errp) {} void qmp_cxl_inject_poison(const char *path, uint64_t start, uint64_t length, Error **errp) {} void qmp_cxl_inject_uncorrectable_error(const char *path, diff --git a/include/hw/cxl/cxl_events.h b/include/hw/cxl/cxl_events.h index 2df40720320a..3175e9d9866d 100644 --- a/include/hw/cxl/cxl_events.h +++ b/include/hw/cxl/cxl_events.h @@ -103,4 +103,24 @@ struct cxl_event_interrupt_policy { /* DCD is optional but other fields are not */ #define CXL_EVENT_INT_SETTING_MIN_LEN 4 +/* + * General Media Event Record + * CXL rev 3.0 Section 8.2.9.2.1.1; Table 8-43 + */ +#define CXL_EVENT_GEN_MED_COMP_ID_SIZE 0x10 +#define CXL_EVENT_GEN_MED_RES_SIZE 0x2e +struct cxl_event_gen_media { + struct cxl_event_record_hdr hdr; + uint64_t phys_addr; + uint8_t descriptor; + uint8_t type; + uint8_t transaction_type; + uint8_t validity_flags[2]; + uint8_t channel; + uint8_t rank; + uint8_t device[3]; + uint8_t component_id[CXL_EVENT_GEN_MED_COMP_ID_SIZE]; + uint8_t reserved[CXL_EVENT_GEN_MED_RES_SIZE]; +} QEMU_PACKED; + #endif /* CXL_EVENTS_H */ diff --git a/qapi/cxl.json b/qapi/cxl.json index b4836bb87f53..56e85a28d7e0 100644 --- a/qapi/cxl.json +++ b/qapi/cxl.json @@ -5,6 +5,31 @@ # = CXL devices ## +## +# @cxl-inject-gen-media-event: +# +# @path: CXL type 3 device canonical QOM path +# +# @log: Event Log to add the event to +# @flags: header flags +# @physaddr: Physical Address +# @descriptor: Descriptor +# @type: Type +# @transactiontype: Transaction Type +# @channel: Channel +# @rank: Rank +# @device: Device +# @componentid: Device specific string +# +## +{ 'command': 'cxl-inject-gen-media-event', + 'data': { 'path': 'str', 'log': 'uint8', 'flags': 'uint8', + 'physaddr': 'uint64', 'descriptor': 'uint8', + 'type': 'uint8', 'transactiontype': 'uint8', + 'channel': 'int16', 'rank': 'int16', + 'device': 'int32', 'componentid': 'str' + }} + ## # @cxl-inject-poison: #