From patchwork Fri May 17 13:40:02 2024
X-Patchwork-Id: 13667016
From: Clément Léger <cleger@rivosinc.com>
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger <cleger@rivosinc.com>, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v1 1/4] riscv: move REG_L/REG_S to a dedicated asm.h file
Date: Fri, 17 May 2024 15:40:02 +0200
Message-ID: <20240517134007.928539-2-cleger@rivosinc.com>
In-Reply-To: <20240517134007.928539-1-cleger@rivosinc.com>
References: <20240517134007.928539-1-cleger@rivosinc.com>

These assembly macros will be used as part of the SSE entry assembly code,
so export them in a dedicated asm.h header.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 lib/riscv/asm/asm.h | 19 +++++++++++++++++++
 riscv/cstart.S      | 14 +-------------
 2 files changed, 20 insertions(+), 13 deletions(-)
 create mode 100644 lib/riscv/asm/asm.h

diff --git a/lib/riscv/asm/asm.h b/lib/riscv/asm/asm.h
new file mode 100644
index 00000000..763b28e6
--- /dev/null
+++ b/lib/riscv/asm/asm.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASMRISCV_ASM_H_
+#define _ASMRISCV_ASM_H_
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b) a
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b) b
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L __REG_SEL(ld, lw)
+#define REG_S __REG_SEL(sd, sw)
+#define SZREG __REG_SEL(8, 4)
+
+#define FP_SIZE 16
+
+#endif /* _ASMRISCV_ASM_H_ */

diff --git a/riscv/cstart.S b/riscv/cstart.S
index 10b5da57..d5d8ad25 100644
--- a/riscv/cstart.S
+++ b/riscv/cstart.S
@@ -4,22 +4,10 @@
  *
  * Copyright (C) 2023, Ventana Micro Systems Inc., Andrew Jones
  */
+#include
 #include
 #include
 
-#if __riscv_xlen == 64
-#define __REG_SEL(a, b) a
-#elif __riscv_xlen == 32
-#define __REG_SEL(a, b) b
-#else
-#error "Unexpected __riscv_xlen"
-#endif
-
-#define REG_L __REG_SEL(ld, lw)
-#define REG_S __REG_SEL(sd, sw)
-#define SZREG __REG_SEL(8, 4)
-
-#define FP_SIZE 16
 
 .macro push_fp, ra=ra
     addi sp, sp, -FP_SIZE
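For illustration only (not part of the patch), the xlen dispatch that __REG_SEL() performs can be mirrored from C; this sketch is a standalone program that assumes a hosted RISC-V toolchain and relies only on the compiler-provided __riscv_xlen macro:

/* Illustrative sketch: same selection that asm.h makes at preprocessing time. */
#include <stdio.h>

int main(void)
{
#if __riscv_xlen == 64
    /* REG_L expands to ld, REG_S to sd, SZREG to 8 */
    printf("xlen=64: REG_L=ld REG_S=sd SZREG=8\n");
#elif __riscv_xlen == 32
    /* REG_L expands to lw, REG_S to sw, SZREG to 4 */
    printf("xlen=32: REG_L=lw REG_S=sw SZREG=4\n");
#else
#error "Unexpected __riscv_xlen"
#endif
    return 0;
}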
From patchwork Fri May 17 13:40:03 2024
X-Patchwork-Id: 13667017
From: Clément Léger <cleger@rivosinc.com>
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger <cleger@rivosinc.com>, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v1 2/4] riscv: add SBI SSE extension definitions
Date: Fri, 17 May 2024 15:40:03 +0200
Message-ID: <20240517134007.928539-3-cleger@rivosinc.com>
In-Reply-To: <20240517134007.928539-1-cleger@rivosinc.com>
References: <20240517134007.928539-1-cleger@rivosinc.com>

Add the SBI SSE extension definitions to sbi.h.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 lib/riscv/asm/sbi.h | 76 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index d82a384d..c7443ae4 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -11,6 +11,9 @@
 #define SBI_ERR_ALREADY_AVAILABLE -6
 #define SBI_ERR_ALREADY_STARTED -7
 #define SBI_ERR_ALREADY_STOPPED -8
+#define SBI_ERR_NO_SHMEM -9
+#define SBI_ERR_INVALID_STATE -10
+#define SBI_ERR_BAD_RANGE -11
 
 #ifndef __ASSEMBLY__
 
@@ -18,6 +21,7 @@ enum sbi_ext_id {
 	SBI_EXT_BASE = 0x10,
 	SBI_EXT_HSM = 0x48534d,
 	SBI_EXT_SRST = 0x53525354,
+	SBI_EXT_SSE = 0x535345,
 };
 
 enum sbi_ext_base_fid {
@@ -37,6 +41,78 @@ enum sbi_ext_hsm_fid {
 	SBI_EXT_HSM_HART_SUSPEND,
 };
 
+/* SBI Function IDs for SSE extension */
+#define SBI_EXT_SSE_READ_ATTR	0x00000000
+#define SBI_EXT_SSE_WRITE_ATTR	0x00000001
+#define SBI_EXT_SSE_REGISTER	0x00000002
+#define SBI_EXT_SSE_UNREGISTER	0x00000003
+#define SBI_EXT_SSE_ENABLE	0x00000004
+#define SBI_EXT_SSE_DISABLE	0x00000005
+#define SBI_EXT_SSE_COMPLETE	0x00000006
+#define SBI_EXT_SSE_INJECT	0x00000007
+
+/* SBI SSE Event Attributes. */
+enum sbi_sse_attr_id {
+	SBI_SSE_ATTR_STATUS = 0x00000000,
+	SBI_SSE_ATTR_PRIO = 0x00000001,
+	SBI_SSE_ATTR_CONFIG = 0x00000002,
+	SBI_SSE_ATTR_PREFERRED_HART = 0x00000003,
+	SBI_SSE_ATTR_ENTRY_PC = 0x00000004,
+	SBI_SSE_ATTR_ENTRY_ARG = 0x00000005,
+	SBI_SSE_ATTR_INTERRUPTED_SEPC = 0x00000006,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS = 0x00000007,
+	SBI_SSE_ATTR_INTERRUPTED_A6 = 0x00000008,
+	SBI_SSE_ATTR_INTERRUPTED_A7 = 0x00000009,
+};
+
+#define SBI_SSE_ATTR_STATUS_STATE_OFFSET	0
+#define SBI_SSE_ATTR_STATUS_STATE_MASK		0x3
+#define SBI_SSE_ATTR_STATUS_PENDING_OFFSET	2
+#define SBI_SSE_ATTR_STATUS_INJECT_OFFSET	3
+
+#define SBI_SSE_ATTR_CONFIG_ONESHOT	(1 << 0)
+
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPP	BIT(0)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPIE	BIT(1)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV	BIT(2)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP	BIT(3)
+
+enum sbi_sse_state {
+	SBI_SSE_STATE_UNUSED = 0,
+	SBI_SSE_STATE_REGISTERED = 1,
+	SBI_SSE_STATE_ENABLED = 2,
+	SBI_SSE_STATE_RUNNING = 3,
+};
+
+/* SBI SSE Event IDs. */
+#define SBI_SSE_EVENT_LOCAL_RAS			0x00000000
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_START	0x00004000
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_END		0x00007fff
+#define SBI_SSE_EVENT_GLOBAL_RAS		0x00008000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_START	0x0000c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_END		0x0000ffff
+
+#define SBI_SSE_EVENT_LOCAL_PMU			0x00010000
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_START	0x00014000
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_END		0x00017fff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_START	0x0001c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_END		0x0001ffff
+
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_START	0x00024000
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_END		0x00027fff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_START	0x0002c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_END		0x0002ffff
+
+#define SBI_SSE_EVENT_LOCAL_SOFTWARE		0xffff0000
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_START	0xffff4000
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_END		0xffff7fff
+#define SBI_SSE_EVENT_GLOBAL_SOFTWARE		0xffff8000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_START	0xffffc000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_END		0xffffffff
+
+#define SBI_SSE_EVENT_GLOBAL_BIT	(1 << 15)
+#define SBI_SSE_EVENT_PLATFORM_BIT	(1 << 14)
+
 struct sbiret {
 	long error;
 	long value;
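For illustration only (not part of the patch), the GLOBAL/PLATFORM bits defined above are enough to classify an event ID; this sketch assumes the sbi.h definitions from this patch and the bool type from the kvm-unit-tests library, and the helper names are illustrative (patch 4 defines a similar sse_event_is_global()):

/* Sketch: decode an SSE event ID using the bits defined above. */
static inline bool sse_event_id_is_global(unsigned long event_id)
{
    return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);     /* bit 15 */
}

static inline bool sse_event_id_is_platform(unsigned long event_id)
{
    return !!(event_id & SBI_SSE_EVENT_PLATFORM_BIT);   /* bit 14 */
}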
From patchwork Fri May 17 13:40:04 2024
X-Patchwork-Id: 13667018
From: Clément Léger <cleger@rivosinc.com>
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger <cleger@rivosinc.com>, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v1 3/4] riscv: add SSE assembly entry handling
Date: Fri, 17 May 2024 15:40:04 +0200
Message-ID: <20240517134007.928539-4-cleger@rivosinc.com>
In-Reply-To: <20240517134007.928539-1-cleger@rivosinc.com>
References: <20240517134007.928539-1-cleger@rivosinc.com>

Add SSE entry assembly code to handle SSE events. Events should be
registered with a struct sse_handler_arg that provides a valid stack and
a handler function.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 riscv/Makefile          |   1 +
 lib/riscv/asm/sse.h     |  16 +++++++
 lib/riscv/sse_entry.S   | 100 ++++++++++++++++++++++++++++++++++++++++
 lib/riscv/asm-offsets.c |   9 ++++
 4 files changed, 126 insertions(+)
 create mode 100644 lib/riscv/asm/sse.h
 create mode 100644 lib/riscv/sse_entry.S

diff --git a/riscv/Makefile b/riscv/Makefile
index 919a3ebb..c7810359 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -37,6 +37,7 @@ cflatobjs += lib/riscv/processor.o
 cflatobjs += lib/riscv/sbi.o
 cflatobjs += lib/riscv/setup.o
 cflatobjs += lib/riscv/smp.o
+cflatobjs += lib/riscv/sse_entry.o
 cflatobjs += lib/riscv/stack.o
 ifeq ($(ARCH),riscv32)
 cflatobjs += lib/ldiv32.o

diff --git a/lib/riscv/asm/sse.h b/lib/riscv/asm/sse.h
new file mode 100644
index 00000000..557f6680
--- /dev/null
+++ b/lib/riscv/asm/sse.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASMRISCV_SSE_H_
+#define _ASMRISCV_SSE_H_
+
+typedef void (*sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
+
+struct sse_handler_arg {
+	unsigned long reg_tmp;
+	sse_handler_fn handler;
+	void *handler_data;
+	void *stack;
+};
+
+extern void sse_entry(void);
+
+#endif /* _ASMRISCV_SSE_H_ */

diff --git a/lib/riscv/sse_entry.S b/lib/riscv/sse_entry.S
new file mode 100644
index 00000000..bedc47e9
--- /dev/null
+++ b/lib/riscv/sse_entry.S
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * SBI SSE entry code
+ *
+ * Copyright (C) 2024, Rivos Inc., Clément Léger
+ */
+#include
+#include
+#include
+
+.global sse_entry
+sse_entry:
+	/* Save stack temporarily */
+	REG_S sp, SSE_REG_TMP(a6)
+	/* Set entry stack */
+	REG_L sp, SSE_HANDLER_STACK(a6)
+
+	addi sp, sp, -(PT_SIZE)
+	REG_S ra, PT_RA(sp)
+	REG_S s0, PT_S0(sp)
+	REG_S s1, PT_S1(sp)
+	REG_S s2, PT_S2(sp)
+	REG_S s3, PT_S3(sp)
+	REG_S s4, PT_S4(sp)
+	REG_S s5, PT_S5(sp)
+	REG_S s6, PT_S6(sp)
+	REG_S s7, PT_S7(sp)
+	REG_S s8, PT_S8(sp)
+	REG_S s9, PT_S9(sp)
+	REG_S s10, PT_S10(sp)
+	REG_S s11, PT_S11(sp)
+	REG_S tp, PT_TP(sp)
+	REG_S t0, PT_T0(sp)
+	REG_S t1, PT_T1(sp)
+	REG_S t2, PT_T2(sp)
+	REG_S t3, PT_T3(sp)
+	REG_S t4, PT_T4(sp)
+	REG_S t5, PT_T5(sp)
+	REG_S t6, PT_T6(sp)
+	REG_S gp, PT_GP(sp)
+	REG_S a0, PT_A0(sp)
+	REG_S a1, PT_A1(sp)
+	REG_S a2, PT_A2(sp)
+	REG_S a3, PT_A3(sp)
+	REG_S a4, PT_A4(sp)
+	REG_S a5, PT_A5(sp)
+	csrr a1, CSR_SEPC
+	REG_S a1, PT_EPC(sp)
+	csrr a2, CSR_SSTATUS
+	REG_S a2, PT_STATUS(sp)
+
+	REG_L a0, SSE_REG_TMP(a6)
+	REG_S a0, PT_SP(sp)
+
+	REG_L t0, SSE_HANDLER(a6)
+	REG_L a0, SSE_HANDLER_DATA(a6)
+	move a1, sp
+	move a2, a7
+	jalr t0
+
+	REG_L a1, PT_EPC(sp)
+	REG_L a2, PT_STATUS(sp)
+	csrw CSR_SEPC, a1
+	csrw CSR_SSTATUS, a2
+
+	REG_L ra, PT_RA(sp)
+	REG_L s0, PT_S0(sp)
+	REG_L s1, PT_S1(sp)
+	REG_L s2, PT_S2(sp)
+	REG_L s3, PT_S3(sp)
+	REG_L s4, PT_S4(sp)
+	REG_L s5, PT_S5(sp)
+	REG_L s6, PT_S6(sp)
+	REG_L s7, PT_S7(sp)
+	REG_L s8, PT_S8(sp)
+	REG_L s9, PT_S9(sp)
+	REG_L s10, PT_S10(sp)
+	REG_L s11, PT_S11(sp)
+	REG_L tp, PT_TP(sp)
+	REG_L t0, PT_T0(sp)
+	REG_L t1, PT_T1(sp)
+	REG_L t2, PT_T2(sp)
+	REG_L t3, PT_T3(sp)
+	REG_L t4, PT_T4(sp)
+	REG_L t5, PT_T5(sp)
+	REG_L t6, PT_T6(sp)
+	REG_L gp, PT_GP(sp)
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+
+	REG_L sp, PT_SP(sp)
+
+	li a7, ASM_SBI_EXT_SSE
+	li a6, ASM_SBI_EXT_SSE_COMPLETE
+	ecall

diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c
index a2a32438..a5e25332 100644
--- a/lib/riscv/asm-offsets.c
+++ b/lib/riscv/asm-offsets.c
@@ -2,7 +2,9 @@
 #include
 #include
 #include
+#include
 #include
+#include
 
 int main(void)
 {
@@ -58,5 +60,12 @@ int main(void)
 	OFFSET(SECONDARY_FUNC, secondary_data, func);
 	DEFINE(SECONDARY_DATA_SIZE, sizeof(struct secondary_data));
 
+	OFFSET(SSE_REG_TMP, sse_handler_arg, reg_tmp);
+	OFFSET(SSE_HANDLER, sse_handler_arg, handler);
+	OFFSET(SSE_HANDLER_DATA, sse_handler_arg, handler_data);
+	OFFSET(SSE_HANDLER_STACK, sse_handler_arg, stack);
+	DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE);
+	DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE);
+
 	return 0;
 }
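For illustration only (not part of the patch), this is roughly how the entry code above is wired up from C: the SBI implementation is told to jump to sse_entry with a6 pointing at a struct sse_handler_arg. The sketch uses sbi_ecall() and alloc_page()/PAGE_SIZE from the kvm-unit-tests library plus the SBI_EXT_SSE_* IDs from patch 2; the names my_handler/my_arg/register_my_event are made up here, and patch 4 wraps the same pattern in sse_event_register():

/* Sketch: register an SSE event so sse_entry runs our handler on a private stack. */
static void my_handler(void *data, struct pt_regs *regs, unsigned int hartid)
{
    /* Runs on the stack provided below; sse_entry issues COMPLETE on return. */
}

static struct sse_handler_arg my_arg;

static long register_my_event(unsigned long event_id)
{
    struct sbiret ret;

    my_arg.handler = my_handler;
    my_arg.handler_data = NULL;
    my_arg.stack = alloc_page() + PAGE_SIZE;    /* stack grows down */

    ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id,
                    (unsigned long)sse_entry, (unsigned long)&my_arg,
                    0, 0, 0);
    return ret.error;
}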
From patchwork Fri May 17 13:40:05 2024
X-Patchwork-Id: 13667019
From: Clément Léger <cleger@rivosinc.com>
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger <cleger@rivosinc.com>, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v1 4/4] riscv: add SBI SSE extension tests
Date: Fri, 17 May 2024 15:40:05 +0200
Message-ID: <20240517134007.928539-5-cleger@rivosinc.com>
In-Reply-To: <20240517134007.928539-1-cleger@rivosinc.com>
References: <20240517134007.928539-1-cleger@rivosinc.com>

Test the SBI SSE extension:
- attribute errors (invalid values, read-only attributes, etc.)
- registration errors
- simple events (register, enable, inject)
- events with different priorities
- global event dispatch on different harts
- local events on all harts

Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 riscv/Makefile      |   1 +
 lib/riscv/asm/csr.h |   2 +
 riscv/sbi_sse.c     | 983 ++++++++++++++++++++++++++++++++++++++++++++
 riscv/unittests.cfg |   4 +
 4 files changed, 990 insertions(+)
 create mode 100644 riscv/sbi_sse.c

diff --git a/riscv/Makefile b/riscv/Makefile
index c7810359..9f9529ad 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -12,6 +12,7 @@ endif
 
 tests =
 tests += $(TEST_DIR)/sbi.$(exe)
+tests += $(TEST_DIR)/sbi_sse.$(exe)
 tests += $(TEST_DIR)/selftest.$(exe)
 tests += $(TEST_DIR)/sieve.$(exe)

diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
index 52608512..d4d7fe34 100644
--- a/lib/riscv/asm/csr.h
+++ b/lib/riscv/asm/csr.h
@@ -14,6 +14,8 @@
 /* Exception cause high bit - is an interrupt if set */
 #define CAUSE_IRQ_FLAG	(_AC(1, UL) << (__riscv_xlen - 1))
 
+#define SSTATUS_SPP	_AC(0x00000100, UL)	/* Previously Supervisor */
+
 /* Exception causes */
 #define EXC_INST_MISALIGNED	0
 #define EXC_INST_ACCESS	1

diff --git a/riscv/sbi_sse.c b/riscv/sbi_sse.c
new file mode 100644
index 00000000..446bf747
--- /dev/null
+++ b/riscv/sbi_sse.c
@@ -0,0 +1,983 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SBI SSE testsuite
+ *
+ * Copyright (C) 2024, Rivos Inc., Clément Léger
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define SSE_STACK_SIZE	PAGE_SIZE
+
+struct sse_event_info {
+	unsigned long event_id;
+	const char *name;
+	bool can_inject;
+};
+
+static struct
sse_event_info sse_event_infos[] = { + { + .event_id = SBI_SSE_EVENT_LOCAL_RAS, + .name = "local_ras", + }, + { + .event_id = SBI_SSE_EVENT_GLOBAL_RAS, + .name = "global_ras", + }, + { + .event_id = SBI_SSE_EVENT_LOCAL_PMU, + .name = "local_pmu", + }, + { + .event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, + .name = "local_software", + }, + { + .event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, + .name = "global_software", + }, +}; + +static const char *const attr_names[] = { + [SBI_SSE_ATTR_STATUS] = "status", + [SBI_SSE_ATTR_PRIO] = "prio", + [SBI_SSE_ATTR_CONFIG] = "config", + [SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart", + [SBI_SSE_ATTR_ENTRY_PC] = "entry_pc", + [SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg", + [SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_pc", + [SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags", + [SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6", + [SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7", +}; + +static const unsigned long ro_attrs[] = { + SBI_SSE_ATTR_STATUS, + SBI_SSE_ATTR_ENTRY_PC, + SBI_SSE_ATTR_ENTRY_ARG, +}; + +static const unsigned long interrupted_attrs[] = { + SBI_SSE_ATTR_INTERRUPTED_FLAGS, + SBI_SSE_ATTR_INTERRUPTED_SEPC, + SBI_SSE_ATTR_INTERRUPTED_A6, + SBI_SSE_ATTR_INTERRUPTED_A7, +}; + +static const unsigned long interrupted_flags[] = { + SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPP, + SBI_SSE_ATTR_INTERRUPTED_FLAGS_STATUS_SPIE, + SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV, + SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP, +}; + +static void help(void) +{ + puts("Test SBI SSE extension\n"); +} + +static struct sse_event_info *sse_evt_get_infos(unsigned long event_id) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) { + if (sse_event_infos[i].event_id == event_id) + return &sse_event_infos[i]; + } + + assert_msg(false, "Invalid event id: %ld", event_id); +} + +static const char *sse_evt_name(unsigned long event_id) +{ + struct sse_event_info *infos = sse_evt_get_infos(event_id); + + return infos->name; +} + +static bool sse_evt_can_inject(unsigned long event_id) +{ + struct sse_event_info *infos = sse_evt_get_infos(event_id); + + return infos->can_inject; +} + +static bool sse_event_is_global(unsigned long event_id) +{ + return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT); +} + +static struct sbiret sse_event_get_attr_raw(unsigned long event_id, + unsigned long base_attr_id, + unsigned long attr_count, + unsigned long phys_lo, + unsigned long phys_hi) +{ + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTR, event_id, + base_attr_id, attr_count, phys_lo, phys_hi, 0); +} + +static unsigned long sse_event_get_attrs(unsigned long event_id, unsigned long attr_id, + unsigned long *values, unsigned int attr_count) +{ + struct sbiret ret; + + ret = sse_event_get_attr_raw(event_id, attr_id, attr_count, (unsigned long)values, 0); + + return ret.error; +} + +static unsigned long sse_event_get_attr(unsigned long event_id, unsigned long attr_id, + unsigned long *value) +{ + return sse_event_get_attrs(event_id, attr_id, value, 1); +} + +static struct sbiret sse_event_set_attr_raw(unsigned long event_id, unsigned long base_attr_id, + unsigned long attr_count, unsigned long phys_lo, + unsigned long phys_hi) +{ + return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTR, event_id, base_attr_id, attr_count, + phys_lo, phys_hi, 0); +} + +static unsigned long sse_event_set_attr(unsigned long event_id, unsigned long attr_id, + unsigned long value) +{ + struct sbiret ret; + + ret = sse_event_set_attr_raw(event_id, attr_id, 1, (unsigned long)&value, 0); + + return ret.error; +} + 
+static unsigned long sse_event_register_raw(unsigned long event_id, void *entry_pc, void *entry_arg) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, (unsigned long)entry_pc, + (unsigned long)entry_arg, 0, 0, 0); + + return ret.error; +} + +static unsigned long sse_event_register(unsigned long event_id, struct sse_handler_arg *arg) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, (unsigned long)sse_entry, + (unsigned long)arg, 0, 0, 0); + + return ret.error; +} + +static unsigned long sse_event_unregister(unsigned long event_id) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0); + + return ret.error; +} + +static unsigned long sse_event_enable(unsigned long event_id) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0); + + return ret.error; +} + +static unsigned long sse_event_inject(unsigned long event_id, unsigned long hart_id) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0); + + return ret.error; +} + +static unsigned long sse_event_disable(unsigned long event_id) +{ + struct sbiret ret; + + ret = sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0); + + return ret.error; +} + + +static int sse_get_state(unsigned long event_id, enum sbi_sse_state *state) +{ + int ret; + unsigned long status; + + ret = sse_event_get_attr(event_id, SBI_SSE_ATTR_STATUS, &status); + if (ret) { + report_fail("Failed to get SSE event status"); + return -1; + } + + *state = status & SBI_SSE_ATTR_STATUS_STATE_MASK; + + return 0; +} + +static void sse_global_event_set_current_hart(unsigned long event_id) +{ + int ret; + + if (!sse_event_is_global(event_id)) + return; + + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, + current_thread_info()->hartid); + if (ret) + report_abort("set preferred hart failure"); +} + +static int sse_check_state(unsigned long event_id, unsigned long expected_state) +{ + int ret; + enum sbi_sse_state state; + + ret = sse_get_state(event_id, &state); + if (ret) + return 1; + report(state == expected_state, "SSE event status == %ld", expected_state); + + return state != expected_state; +} + +static bool sse_event_pending(unsigned long event_id) +{ + int ret; + unsigned long status; + + ret = sse_event_get_attr(event_id, SBI_SSE_ATTR_STATUS, &status); + if (ret) { + report_fail("Failed to get SSE event status"); + return false; + } + + return !!(status & BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET)); +} + +static void *sse_alloc_stack(void) +{ + return (alloc_page() + PAGE_SIZE); +} + +static void sse_free_stack(void *stack) +{ + free_page(stack - PAGE_SIZE); +} + +static void sse_test_attr(unsigned long event_id) +{ + unsigned long ret, value = 0; + unsigned long values[ARRAY_SIZE(ro_attrs)]; + struct sbiret sret; + unsigned int i; + + report_prefix_push("attrs"); + + for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) { + ret = sse_event_set_attr(event_id, ro_attrs[i], value); + report(ret == SBI_ERR_BAD_RANGE, "RO attribute %s not writable", + attr_names[ro_attrs[i]]); + } + + for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) { + ret = sse_event_get_attr(event_id, i, &value); + report(ret == SBI_SUCCESS, "Read single attribute %s", attr_names[i]); + /* Preferred Hart reset value is defined by SBI vendor and status injectable bit + * also depends on the SBI implementation + */ + if (i != SBI_SSE_ATTR_STATUS 
&& i != SBI_SSE_ATTR_PREFERRED_HART) + report(value == 0, "Attribute %s reset value is 0", attr_names[i]); + } + + ret = sse_event_get_attrs(event_id, SBI_SSE_ATTR_STATUS, values, + SBI_SSE_ATTR_INTERRUPTED_A7 - SBI_SSE_ATTR_STATUS); + report(ret == SBI_SUCCESS, "Read multiple attributes"); + +#if __riscv_xlen > 32 + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PRIO, 0xFFFFFFFFUL + 1UL); + report(ret == SBI_ERR_INVALID_PARAM, "Write prio > 0xFFFFFFFF error"); +#endif + + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_CONFIG, ~SBI_SSE_ATTR_CONFIG_ONESHOT); + report(ret == SBI_ERR_INVALID_PARAM, "Write invalid config error"); + + if (sse_event_is_global(event_id)) { + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, 0xFFFFFFFFUL); + report(ret == SBI_ERR_INVALID_PARAM, "Set invalid hart id error"); + } else { + /* Set Hart on local event -> RO */ + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, + current_thread_info()->hartid); + report(ret == SBI_ERR_BAD_RANGE, "Set hart id on local event error"); + } + + /* Set/get flags, sepc, a6, a7 */ + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) { + ret = sse_event_get_attr(event_id, interrupted_attrs[i], &value); + report(ret == 0, "Get interrupted %s no error", attr_names[interrupted_attrs[i]]); + + /* 0x1 is a valid value for all the interrupted attributes */ + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_INTERRUPTED_FLAGS, 0x1); + report(ret == SBI_ERR_INVALID_STATE, "Set interrupted flags invalid state error"); + } + + /* Attr_count == 0 */ + sret = sse_event_get_attr_raw(event_id, SBI_SSE_ATTR_STATUS, 0, (unsigned long) &value, 0); + report(sret.error == SBI_ERR_INVALID_PARAM, "Read attribute attr_count == 0 error"); + + sret = sse_event_set_attr_raw(event_id, SBI_SSE_ATTR_STATUS, 0, (unsigned long) &value, 0); + report(sret.error == SBI_ERR_INVALID_PARAM, "Write attribute attr_count == 0 error"); + + /* Invalid attribute id */ + ret = sse_event_get_attr(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, &value); + report(ret == SBI_ERR_BAD_RANGE, "Read invalid attribute error"); + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, value); + report(ret == SBI_ERR_BAD_RANGE, "Write invalid attribute error"); + + /* Misaligned phys address */ + sret = sse_event_get_attr_raw(event_id, SBI_SSE_ATTR_STATUS, 1, + ((unsigned long) &value | 0x1), 0); + report(sret.error == SBI_ERR_INVALID_ADDRESS, "Read attribute with invalid address error"); + sret = sse_event_set_attr_raw(event_id, SBI_SSE_ATTR_STATUS, 1, + ((unsigned long) &value | 0x1), 0); + report(sret.error == SBI_ERR_INVALID_ADDRESS, "Write attribute with invalid address error"); + + report_prefix_pop(); +} + +static void sse_test_register_error(unsigned long event_id) +{ + unsigned long ret; + + report_prefix_push("register"); + + ret = sse_event_unregister(event_id); + report(ret == SBI_ERR_INVALID_STATE, "SSE unregister non registered event"); + + ret = sse_event_register_raw(event_id, (void *) 0x1, NULL); + report(ret == SBI_ERR_INVALID_PARAM, "SSE register misaligned entry"); + + ret = sse_event_register_raw(event_id, (void *) sse_entry, NULL); + report(ret == SBI_SUCCESS, "SSE register ok"); + if (ret) + goto done; + + ret = sse_event_register_raw(event_id, (void *) sse_entry, NULL); + report(ret == SBI_ERR_INVALID_STATE, "SSE register twice failure"); + if (!ret) + goto done; + + ret = sse_event_unregister(event_id); + report(ret == SBI_SUCCESS, "SSE unregister ok"); + +done: + report_prefix_pop(); +} + +struct sse_simple_test_arg { + 
bool done; + unsigned long event_id; +}; + +static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid) +{ + struct sse_simple_test_arg *arg = data; + int ret, i; + const char *attr_name; + unsigned long event_id = arg->event_id, value, prev_value, flags, attr; + const unsigned long regs_len = (SBI_SSE_ATTR_INTERRUPTED_A7 - SBI_SSE_ATTR_INTERRUPTED_A6) + + 1; + unsigned long interrupted_state[regs_len]; + + if ((regs->status & SSTATUS_SPP) == 0) + report_fail("Interrupted S-mode"); + + if (hartid != current_thread_info()->hartid) + report_fail("Hartid correctly passed"); + + sse_check_state(event_id, SBI_SSE_STATE_RUNNING); + if (sse_event_pending(event_id)) + report_fail("Event is not pending"); + + /* Set a6, a7, sepc, flags while running */ + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) { + attr = interrupted_attrs[i]; + attr_name = attr_names[attr]; + + ret = sse_event_get_attr(event_id, attr, &prev_value); + report(ret == 0, "Get attr %s no error", attr_name); + + /* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */ + if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS) + continue; + + ret = sse_event_set_attr(event_id, attr, 0xDEADBEEF + i); + report(ret == 0, "Set attr %s invalid state no error", attr_name); + + ret = sse_event_get_attr(event_id, attr, &value); + report(ret == 0, "Get attr %s modified value no error", attr_name); + report(value == 0xDEADBEEF + i, "Get attr %s modified value ok", attr_name); + + ret = sse_event_set_attr(event_id, attr, prev_value); + report(ret == 0, "Restore attr %s value no error", attr_name); + } + + /* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS*/ + attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS; + attr_name = attr_names[attr]; + ret = sse_event_get_attr(event_id, attr, &prev_value); + report(ret == 0, "Get attr %s no error", attr_name); + + for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) { + flags = interrupted_flags[i]; + ret = sse_event_set_attr(event_id, attr, flags); + report(ret == 0, "Set interrupted %s value no error", attr_name); + ret = sse_event_get_attr(event_id, attr, &value); + report(value == flags, "Get attr %s modified value ok", attr_name); + } + + ret = sse_event_set_attr(event_id, attr, prev_value); + report(ret == 0, "Restore attr %s value no error", attr_name); + + /* Try to change HARTID/Priority while running */ + if (sse_event_is_global(event_id)) { + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, + current_thread_info()->hartid); + report(ret == SBI_ERR_INVALID_STATE, "Set hart id while running error"); + } + + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PRIO, 0); + report(ret == SBI_ERR_INVALID_STATE, "Set priority while running error"); + + ret = sse_event_get_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_A6, interrupted_state, + regs_len); + report(ret == SBI_SUCCESS, "Read interrupted context from SSE handler ok"); + if (interrupted_state[0] != SBI_EXT_SSE_INJECT) + report_fail("Interrupted state a6 check ok"); + if (interrupted_state[1] != SBI_EXT_SSE) + report_fail("Interrupted state a7 check ok"); + + arg->done = true; +} + +static void sse_test_inject_simple(unsigned long event_id) +{ + unsigned long ret; + struct sse_handler_arg args; + struct sse_simple_test_arg test_arg = {.event_id = event_id}; + + args.handler = sse_simple_handler; + args.handler_data = &test_arg; + args.stack = sse_alloc_stack(); + + report_prefix_push("simple"); + + ret = sse_check_state(event_id, SBI_SSE_STATE_UNUSED); + if (ret) + goto done; + + ret = 
sse_event_register(event_id, &args); + report(ret == SBI_SUCCESS, "SSE register no error"); + if (ret) + goto done; + + ret = sse_check_state(event_id, SBI_SSE_STATE_REGISTERED); + if (ret) + goto done; + + /* Be sure global events are targeting the current hart */ + sse_global_event_set_current_hart(event_id); + + ret = sse_event_enable(event_id); + report(ret == SBI_SUCCESS, "SSE enable no error"); + if (ret) + goto done; + + ret = sse_check_state(event_id, SBI_SSE_STATE_ENABLED); + if (ret) + goto done; + + ret = sse_event_inject(event_id, current_thread_info()->hartid); + report(ret == SBI_SUCCESS, "SSE injection no error"); + if (ret) + goto done; + + report(test_arg.done == 1, "SSE event handled ok"); + test_arg.done = 0; + + /* Set as oneshot and verify it is disabled */ + ret = sse_event_disable(event_id); + report(ret == 0, "Disable event ok"); + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_CONFIG, SBI_SSE_ATTR_CONFIG_ONESHOT); + report(ret == 0, "Set event attribute as ONESHOT"); + ret = sse_event_enable(event_id); + report(ret == 0, "Enable event ok"); + + ret = sse_event_inject(event_id, current_thread_info()->hartid); + report(ret == SBI_SUCCESS, "SSE injection 2 no error"); + if (ret) + goto done; + + report(test_arg.done == 1, "SSE event handled ok"); + test_arg.done = 0; + + ret = sse_check_state(event_id, SBI_SSE_STATE_REGISTERED); + if (ret) + goto done; + + /* Clear ONESHOT FLAG */ + sse_event_set_attr(event_id, SBI_SSE_ATTR_CONFIG, 0); + + ret = sse_event_unregister(event_id); + report(ret == SBI_SUCCESS, "SSE unregister no error"); + if (ret) + goto done; + + ret = sse_check_state(event_id, SBI_SSE_STATE_UNUSED); + if (ret) + goto done; + +done: + sse_free_stack(args.stack); + report_prefix_pop(); +} + +struct sse_foreign_cpu_test_arg { + bool done; + unsigned int expected_cpu; + unsigned long event_id; +}; + +static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid) +{ + volatile struct sse_foreign_cpu_test_arg *arg = data; + + /* For arg content to be visible */ + smp_rmb(); + if (arg->expected_cpu != current_thread_info()->cpu) + report_fail("Received event on CPU (%d), expected CPU (%d)", + current_thread_info()->cpu, arg->expected_cpu); + + arg->done = true; + /* For arg update to be visible for other CPUs */ + smp_wmb(); +} + +struct sse_local_per_cpu { + struct sse_handler_arg args; + unsigned long ret; +}; + +struct sse_local_data { + unsigned long event_id; + struct sse_local_per_cpu *cpu_args[NR_CPUS]; +}; + +static void sse_register_enable_local(void *data) +{ + struct sse_local_data *local_data = data; + struct sse_local_per_cpu *cpu_arg = local_data->cpu_args[current_thread_info()->cpu]; + + cpu_arg->ret = sse_event_register(local_data->event_id, &cpu_arg->args); + if (cpu_arg->ret) + return; + + cpu_arg->ret = sse_event_enable(local_data->event_id); +} + +static void sse_disable_unregister_local(void *data) +{ + struct sse_local_data *local_data = data; + struct sse_local_per_cpu *cpu_arg = local_data->cpu_args[current_thread_info()->cpu]; + + cpu_arg->ret = sse_event_disable(local_data->event_id); + if (cpu_arg->ret) + return; + + cpu_arg->ret = sse_event_unregister(local_data->event_id); +} + +static void sse_test_inject_local(unsigned long event_id) +{ + int cpu; + unsigned long ret; + struct sse_local_data local_data; + struct sse_local_per_cpu *cpu_arg; + volatile struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id}; + + report_prefix_push("local_dispatch"); + local_data.event_id = event_id; + + 
for_each_online_cpu(cpu) { + cpu_arg = calloc(1, sizeof(struct sse_handler_arg)); + + cpu_arg->args.stack = sse_alloc_stack(); + cpu_arg->args.handler = sse_foreign_cpu_handler; + cpu_arg->args.handler_data = (void *)&test_arg; + local_data.cpu_args[cpu] = cpu_arg; + } + + on_cpus(sse_register_enable_local, &local_data); + for_each_online_cpu(cpu) { + if (local_data.cpu_args[cpu]->ret) + report_abort("CPU failed to register/enable SSE event"); + + test_arg.expected_cpu = cpu; + /* For test_arg content to be visible for other CPUs */ + smp_wmb(); + ret = sse_event_inject(event_id, cpus[cpu].hartid); + if (ret) + report_abort("CPU failed to register/enable SSE event"); + + while (!test_arg.done) { + /* For test_arg update to be visible */ + smp_rmb(); + } + + test_arg.done = false; + } + + on_cpus(sse_disable_unregister_local, &local_data); + for_each_online_cpu(cpu) { + if (local_data.cpu_args[cpu]->ret) + report_abort("CPU failed to disable/unregister SSE event"); + } + + for_each_online_cpu(cpu) { + cpu_arg = local_data.cpu_args[cpu]; + + sse_free_stack(cpu_arg->args.stack); + } + + report_pass("local event dispartch on all CPUs"); + report_prefix_pop(); + +} + +static void sse_test_inject_global(unsigned long event_id) +{ + unsigned long ret; + unsigned int cpu; + struct sse_handler_arg args; + volatile struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id}; + enum sbi_sse_state state; + + args.handler = sse_foreign_cpu_handler; + args.handler_data = (void *)&test_arg; + args.stack = sse_alloc_stack(); + + report_prefix_push("global_dispatch"); + + ret = sse_event_register(event_id, &args); + if (ret) + goto done; + + for_each_online_cpu(cpu) { + test_arg.expected_cpu = cpu; + /* For test_arg content to be visible for other CPUs */ + smp_wmb(); + ret = sse_event_set_attr(event_id, SBI_SSE_ATTR_PREFERRED_HART, cpu); + if (ret) { + report_fail("Failed to set preferred hart"); + goto done; + } + + ret = sse_event_enable(event_id); + if (ret) { + report_fail("Failed to enable SSE event"); + goto done; + } + + ret = sse_event_inject(event_id, cpu); + if (ret) { + report_fail("Failed to inject event"); + goto done; + } + + while (!test_arg.done) { + /* For shared test_arg structure */ + smp_rmb(); + } + + test_arg.done = false; + + /* Wait for event to be in ENABLED state */ + do { + ret = sse_get_state(event_id, &state); + if (ret) { + report_fail("Failed to get event state"); + goto done; + } + } while (state != SBI_SSE_STATE_ENABLED); + + ret = sse_event_disable(event_id); + if (ret) { + report_fail("Failed to disable SSE event"); + goto done; + } + + report_pass("Global event on CPU %d", cpu); + } + +done: + ret = sse_event_unregister(event_id); + if (ret) + report_fail("Failed to unregister event"); + + sse_free_stack(args.stack); + report_prefix_pop(); +} + +struct priority_test_arg { + unsigned long evt; + bool called; + u32 prio; + struct priority_test_arg *next_evt_arg; + void (*check_func)(struct priority_test_arg *arg); +}; + +static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs, + unsigned int hartid) +{ + struct priority_test_arg *targ = arg; + struct priority_test_arg *next = targ->next_evt_arg; + + targ->called = 1; + if (next) { + sse_event_inject(next->evt, current_thread_info()->hartid); + if (sse_event_pending(next->evt)) + report_fail("Higher priority event is pending"); + if (!next->called) + report_fail("Higher priority event was not handled"); + } +} + +static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs, + unsigned 
int hartid) +{ + struct priority_test_arg *targ = arg; + struct priority_test_arg *next = targ->next_evt_arg; + + targ->called = 1; + + if (next) { + sse_event_inject(next->evt, current_thread_info()->hartid); + + if (!sse_event_pending(next->evt)) + report_fail("Lower priority event is pending"); + + if (next->called) + report_fail("Lower priority event %s was handle before %s", + sse_evt_name(next->evt), sse_evt_name(targ->evt)); + } +} + +static void sse_test_injection_priority_arg(struct priority_test_arg *args, + unsigned int args_size, + sse_handler_fn handler, + const char *test_name) +{ + unsigned int i; + int ret; + unsigned long event_id; + struct priority_test_arg *arg; + void *stack; + struct sse_handler_arg event_args[args_size]; + struct sse_handler_arg *event_arg; + + report_prefix_push(test_name); + + for (i = 0; i < args_size; i++) { + arg = &args[i]; + event_id = arg->evt; + if (!sse_evt_can_inject(event_id)) + continue; + + stack = sse_alloc_stack(); + + event_arg = &event_args[i]; + event_arg->handler = handler; + event_arg->handler_data = (void *)arg; + event_arg->stack = stack; + + if (i < (args_size - 1)) + arg->next_evt_arg = &args[i + 1]; + else + arg->next_evt_arg = NULL; + + /* Be sure global events are targeting the current hart */ + sse_global_event_set_current_hart(event_id); + + sse_event_register(event_id, event_arg); + sse_event_set_attr(event_id, SBI_SSE_ATTR_PRIO, arg->prio); + sse_event_enable(event_id); + } + + /* Inject first event */ + arg = &args[0]; + + ret = sse_event_inject(arg->evt, current_thread_info()->hartid); + report(ret == SBI_SUCCESS, "SSE injection no error"); + + for (i = 0; i < args_size; i++) { + arg = &args[i]; + event_id = arg->evt; + + if (!sse_evt_can_inject(event_id)) + continue; + + if (!arg->called) + report_fail("Event %s handler called", sse_evt_name(arg->evt)); + + sse_event_disable(event_id); + sse_event_unregister(event_id); + + event_arg = &event_args[i]; + sse_free_stack(event_arg->stack); + } + + report_prefix_pop(); +} + +static struct priority_test_arg hi_prio_args[] = { + {.evt = SBI_SSE_EVENT_GLOBAL_SOFTWARE}, + {.evt = SBI_SSE_EVENT_LOCAL_SOFTWARE}, + {.evt = SBI_SSE_EVENT_LOCAL_PMU}, + {.evt = SBI_SSE_EVENT_GLOBAL_RAS}, + {.evt = SBI_SSE_EVENT_LOCAL_RAS}, +}; + +static struct priority_test_arg low_prio_args[] = { + {.evt = SBI_SSE_EVENT_LOCAL_RAS}, + {.evt = SBI_SSE_EVENT_GLOBAL_RAS}, + {.evt = SBI_SSE_EVENT_LOCAL_PMU}, + {.evt = SBI_SSE_EVENT_LOCAL_SOFTWARE}, + {.evt = SBI_SSE_EVENT_GLOBAL_SOFTWARE}, +}; + +static struct priority_test_arg prio_args[] = { + {.evt = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 5}, + {.evt = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10}, + {.evt = SBI_SSE_EVENT_LOCAL_PMU, .prio = 15}, + {.evt = SBI_SSE_EVENT_GLOBAL_RAS, .prio = 20}, + {.evt = SBI_SSE_EVENT_LOCAL_RAS, .prio = 25}, +}; + +static struct priority_test_arg same_prio_args[] = { + {.evt = SBI_SSE_EVENT_LOCAL_PMU, .prio = 0}, + {.evt = SBI_SSE_EVENT_LOCAL_RAS, .prio = 10}, + {.evt = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10}, + {.evt = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 10}, + {.evt = SBI_SSE_EVENT_GLOBAL_RAS, .prio = 20}, +}; + +static void sse_test_injection_priority(void) +{ + report_prefix_push("prio"); + + sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args), + sse_hi_priority_test_handler, "high"); + + sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args), + sse_low_priority_test_handler, "low"); + + sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args), + 
sse_low_priority_test_handler, "changed"); + + sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args), + sse_low_priority_test_handler, "same_prio_args"); + + report_prefix_pop(); +} + +static bool sse_can_inject(unsigned long event_id) +{ + int ret; + unsigned long status; + + ret = sse_event_get_attr(event_id, SBI_SSE_ATTR_STATUS, &status); + report(ret == 0, "SSE get attr status no error"); + if (ret) + return 0; + + return !!(status & BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET)); +} + +static void boot_secondary(void *data) +{ +} + +int main(int argc, char **argv) +{ + struct sbiret ret; + unsigned long i, event; + + if (argc > 1 && !strcmp(argv[1], "-h")) { + help(); + exit(0); + } + + /* + * Dummy wakeup of all processors since some of them will be targeted + * by global events + */ + on_cpus(boot_secondary, NULL); + report_prefix_push("sse"); + + ret = sbi_ecall(SBI_EXT_BASE, SBI_EXT_BASE_PROBE_EXT, SBI_EXT_SSE, 0, 0, 0, 0, 0); + report(!ret.error, "SSE extension probing no error"); + if (ret.error) + goto done; + + report(ret.value, "SSE extension is present"); + if (ret.value == 0) + goto done; + + for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) { + event = sse_event_infos[i].event_id; + report_prefix_push(sse_event_infos[i].name); + if (!sse_can_inject(event)) { + report_skip("Event does not support injection"); + report_prefix_pop(); + continue; + } else { + sse_event_infos[i].can_inject = true; + } + sse_test_attr(event); + sse_test_register_error(event); + sse_test_inject_simple(event); + if (sse_event_is_global(event)) + sse_test_inject_global(event); + else + sse_test_inject_local(event); + + report_prefix_pop(); + } + + sse_test_injection_priority(); + +done: + return report_summary(); +} diff --git a/riscv/unittests.cfg b/riscv/unittests.cfg index 50c67e37..4081b35b 100644 --- a/riscv/unittests.cfg +++ b/riscv/unittests.cfg @@ -17,3 +17,7 @@ groups = selftest [sbi] file = sbi.flat groups = sbi + +[sbi_sse] +file = sbi_sse.flat +groups = sbi
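As a closing illustration (not part of the series), the interrupted-state attributes defined in patch 2 can also be read from inside a handler with the accessors the test defines; dump_interrupted_pc() is a hypothetical helper built on sse_event_get_attr(), sse_evt_name() and report_info() from the kvm-unit-tests library:

/* Sketch: report where the hart was interrupted when the SSE event fired. */
static void dump_interrupted_pc(unsigned long event_id)
{
    unsigned long sepc;

    if (!sse_event_get_attr(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, &sepc))
        report_info("event %s interrupted at pc 0x%lx",
                    sse_evt_name(event_id), sepc);
}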