From patchwork Fri Mar 14 11:10:24 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14016645
From: Clément Léger <cleger@rivosinc.com>
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v9 1/6] kbuild: Allow multiple asm-offsets files to be generated
Date: Fri, 14 Mar 2025 12:10:24 +0100
Message-ID: <20250314111030.3728671-2-cleger@rivosinc.com>
In-Reply-To: <20250314111030.3728671-1-cleger@rivosinc.com>

In order to allow multiple asm-offsets files to be generated, the include
guard needs to differ between these files. Add an asm_offset_name makefile
macro to obtain an uppercase guard name matching the original asm-offsets
file name.
Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 scripts/asm-offsets.mak | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/scripts/asm-offsets.mak b/scripts/asm-offsets.mak
index 7b64162d..a5fdbf5d 100644
--- a/scripts/asm-offsets.mak
+++ b/scripts/asm-offsets.mak
@@ -15,10 +15,14 @@ define sed-y
 	s:->::; p;}'
 endef
 
+define asm_offset_name
+	$(shell echo $(notdir $(1)) | tr [:lower:]- [:upper:]_)
+endef
+
 define make_asm_offsets
 	(set -e; \
-	 echo "#ifndef __ASM_OFFSETS_H__"; \
-	 echo "#define __ASM_OFFSETS_H__"; \
+	 echo "#ifndef __$(strip $(asm_offset_name))_H__"; \
+	 echo "#define __$(strip $(asm_offset_name))_H__"; \
	 echo "/*"; \
	 echo " * Generated file. DO NOT MODIFY."; \
	 echo " *"; \
@@ -29,12 +33,16 @@ define make_asm_offsets
	 echo "#endif" ) > $@
 endef
 
-$(asm-offsets:.h=.s): $(asm-offsets:.h=.c)
-	$(CC) $(CFLAGS) -fverbose-asm -S -o $@ $<
+define gen_asm_offsets_rules
+$(1).s: $(1).c
+	$(CC) $(CFLAGS) -fverbose-asm -S -o $$@ $$<
+
+$(1).h: $(1).s
+	$$(call make_asm_offsets,$(1))
+	cp -f $$@ lib/generated/
+endef
 
-$(asm-offsets): $(asm-offsets:.h=.s)
-	$(call make_asm_offsets)
-	cp -f $(asm-offsets) lib/generated/
+$(foreach o,$(asm-offsets),$(eval $(call gen_asm_offsets_rules, $(o:.h=))))
 
 OBJDIRS += lib/generated

From patchwork Fri Mar 14 11:10:25 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14016646
From: Clément Léger <cleger@rivosinc.com>
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v9 2/6] riscv: Set .aux.o files as .PRECIOUS
Date: Fri, 14 Mar 2025 12:10:25 +0100
Message-ID: <20250314111030.3728671-3-cleger@rivosinc.com>
In-Reply-To: <20250314111030.3728671-1-cleger@rivosinc.com>

When compiling, we need to keep the .aux.o files, or they will be removed
after the build, which causes dependent files to be recompiled. Mark these
files as .PRECIOUS to keep them.
Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 riscv/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/riscv/Makefile b/riscv/Makefile
index 52718f3f..ae9cf02a 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -90,6 +90,7 @@ CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
 asm-offsets = lib/riscv/asm-offsets.h
 include $(SRCDIR)/scripts/asm-offsets.mak
 
+.PRECIOUS: %.aux.o
 %.aux.o: $(SRCDIR)/lib/auxinfo.c
 	$(CC) $(CFLAGS) -c -o $@ $< \
 		-DPROGNAME=\"$(notdir $(@:.aux.o=.$(exe)))\" -DAUXFLAGS=$(AUXFLAGS)

From patchwork Fri Mar 14 11:10:26 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14016647
From: Clément Léger <cleger@rivosinc.com>
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v9 3/6] riscv: Use asm-offsets to generate SBI_EXT_HSM values
Date: Fri, 14 Mar 2025 12:10:26 +0100
Message-ID: <20250314111030.3728671-4-cleger@rivosinc.com>
In-Reply-To: <20250314111030.3728671-1-cleger@rivosinc.com>

Replace hardcoded values with generated ones using sbi-asm-offsets. This
allows ASM_SBI_EXT_HSM and ASM_SBI_EXT_HSM_HART_STOP to be used directly
in assembly.
Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 riscv/Makefile          |  2 +-
 riscv/sbi-asm.S         |  6 ++++--
 riscv/sbi-asm-offsets.c | 11 +++++++++++
 riscv/.gitignore        |  1 +
 4 files changed, 17 insertions(+), 3 deletions(-)
 create mode 100644 riscv/sbi-asm-offsets.c
 create mode 100644 riscv/.gitignore

diff --git a/riscv/Makefile b/riscv/Makefile
index ae9cf02a..02d2ac39 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -87,7 +87,7 @@ CFLAGS += -ffreestanding
 CFLAGS += -O2
 CFLAGS += -I $(SRCDIR)/lib -I $(SRCDIR)/lib/libfdt -I lib -I $(SRCDIR)/riscv
 
-asm-offsets = lib/riscv/asm-offsets.h
+asm-offsets = lib/riscv/asm-offsets.h riscv/sbi-asm-offsets.h
 include $(SRCDIR)/scripts/asm-offsets.mak
 
 .PRECIOUS: %.aux.o
diff --git a/riscv/sbi-asm.S b/riscv/sbi-asm.S
index f4185496..51f46efd 100644
--- a/riscv/sbi-asm.S
+++ b/riscv/sbi-asm.S
@@ -6,6 +6,8 @@
  */
 #include
 #include
+#include
+#include
 
 #include "sbi-tests.h"
 
@@ -57,8 +59,8 @@ sbi_hsm_check:
 7:	lb	t0, 0(t1)
	pause
	beqz	t0, 7b
-	li	a7, 0x48534d	/* SBI_EXT_HSM */
-	li	a6, 1		/* SBI_EXT_HSM_HART_STOP */
+	li	a7, ASM_SBI_EXT_HSM
+	li	a6, ASM_SBI_EXT_HSM_HART_STOP
	ecall
 8:	pause
	j	8b
diff --git a/riscv/sbi-asm-offsets.c b/riscv/sbi-asm-offsets.c
new file mode 100644
index 00000000..bd37b6a2
--- /dev/null
+++ b/riscv/sbi-asm-offsets.c
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include
+#include
+
+int main(void)
+{
+	DEFINE(ASM_SBI_EXT_HSM, SBI_EXT_HSM);
+	DEFINE(ASM_SBI_EXT_HSM_HART_STOP, SBI_EXT_HSM_HART_STOP);
+
+	return 0;
+}
diff --git a/riscv/.gitignore b/riscv/.gitignore
new file mode 100644
index 00000000..0a8c5a36
--- /dev/null
+++ b/riscv/.gitignore
@@ -0,0 +1 @@
+/*-asm-offsets.[hs]

From patchwork Fri Mar 14 11:10:27 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14016648
From: Clément Léger <cleger@rivosinc.com>
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v9 4/6] riscv: lib: Add SBI SSE extension definitions
Date: Fri, 14 Mar 2025 12:10:27 +0100
Message-ID: <20250314111030.3728671-5-cleger@rivosinc.com>
In-Reply-To: <20250314111030.3728671-1-cleger@rivosinc.com>

Add the SBI SSE extension definitions in sbi.h.

Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 lib/riscv/asm/sbi.h | 106 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 105 insertions(+), 1 deletion(-)

diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index 2f4d91ef..780c9edd 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -30,6 +30,7 @@ enum sbi_ext_id {
 	SBI_EXT_DBCN = 0x4442434E,
 	SBI_EXT_SUSP = 0x53555350,
 	SBI_EXT_FWFT = 0x46574654,
+	SBI_EXT_SSE = 0x535345,
 };
 
 enum sbi_ext_base_fid {
@@ -78,7 +79,6 @@ enum sbi_ext_dbcn_fid {
 	SBI_EXT_DBCN_CONSOLE_WRITE_BYTE,
 };
 
-
 enum sbi_ext_fwft_fid {
 	SBI_EXT_FWFT_SET = 0,
 	SBI_EXT_FWFT_GET,
@@ -105,6 +105,110 @@ enum sbi_ext_fwft_fid {
 
 #define SBI_FWFT_SET_FLAG_LOCK	BIT(0)
 
+enum sbi_ext_sse_fid {
+	SBI_EXT_SSE_READ_ATTRS = 0,
+	SBI_EXT_SSE_WRITE_ATTRS,
+	SBI_EXT_SSE_REGISTER,
+	SBI_EXT_SSE_UNREGISTER,
+	SBI_EXT_SSE_ENABLE,
+	SBI_EXT_SSE_DISABLE,
+	SBI_EXT_SSE_COMPLETE,
+	SBI_EXT_SSE_INJECT,
+	SBI_EXT_SSE_HART_UNMASK,
+	SBI_EXT_SSE_HART_MASK,
+};
+
+/* SBI SSE Event Attributes. */
+enum sbi_sse_attr_id {
+	SBI_SSE_ATTR_STATUS = 0x00000000,
+	SBI_SSE_ATTR_PRIORITY = 0x00000001,
+	SBI_SSE_ATTR_CONFIG = 0x00000002,
+	SBI_SSE_ATTR_PREFERRED_HART = 0x00000003,
+	SBI_SSE_ATTR_ENTRY_PC = 0x00000004,
+	SBI_SSE_ATTR_ENTRY_ARG = 0x00000005,
+	SBI_SSE_ATTR_INTERRUPTED_SEPC = 0x00000006,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS = 0x00000007,
+	SBI_SSE_ATTR_INTERRUPTED_A6 = 0x00000008,
+	SBI_SSE_ATTR_INTERRUPTED_A7 = 0x00000009,
+};
+
+#define SBI_SSE_ATTR_STATUS_STATE_OFFSET	0
+#define SBI_SSE_ATTR_STATUS_STATE_MASK		0x3
+#define SBI_SSE_ATTR_STATUS_PENDING_OFFSET	2
+#define SBI_SSE_ATTR_STATUS_INJECT_OFFSET	3
+
+#define SBI_SSE_ATTR_CONFIG_ONESHOT	BIT(0)
+
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP	BIT(0)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE	BIT(1)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV	BIT(2)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP	BIT(3)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP	BIT(4)
+#define SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT	BIT(5)
+
+enum sbi_sse_state {
+	SBI_SSE_STATE_UNUSED = 0,
+	SBI_SSE_STATE_REGISTERED = 1,
+	SBI_SSE_STATE_ENABLED = 2,
+	SBI_SSE_STATE_RUNNING = 3,
+};
+
+/* SBI SSE Event IDs. */
+/* Range 0x00000000 - 0x0000ffff */
+#define SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS	0x00000000
+#define SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP		0x00000001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_START	0x00000002
+#define SBI_SSE_EVENT_LOCAL_RESERVED_0_END	0x00003fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_START	0x00004000
+#define SBI_SSE_EVENT_LOCAL_PLAT_0_END		0x00007fff
+
+#define SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS	0x00008000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_START	0x00008001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_0_END	0x0000bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_START	0x0000c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_0_END		0x0000ffff
+
+/* Range 0x00010000 - 0x0001ffff */
+#define SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW	0x00010000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_START	0x00010001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_1_END	0x00013fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_START	0x00014000
+#define SBI_SSE_EVENT_LOCAL_PLAT_1_END		0x00017fff
+
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_START	0x00018000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_1_END	0x0001bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_START	0x0001c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_1_END		0x0001ffff
+
+/* Range 0x00100000 - 0x0010ffff */
+#define SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS	0x00100000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_START	0x00100001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_2_END	0x00103fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_START	0x00104000
+#define SBI_SSE_EVENT_LOCAL_PLAT_2_END		0x00107fff
+
+#define SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS	0x00108000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_START	0x00108001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_2_END	0x0010bfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_START	0x0010c000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_2_END		0x0010ffff
+
+/* Range 0xffff0000 - 0xffffffff */
+#define SBI_SSE_EVENT_LOCAL_SOFTWARE		0xffff0000
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_START	0xffff0001
+#define SBI_SSE_EVENT_LOCAL_RESERVED_3_END	0xffff3fff
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_START	0xffff4000
+#define SBI_SSE_EVENT_LOCAL_PLAT_3_END		0xffff7fff
+
+#define SBI_SSE_EVENT_GLOBAL_SOFTWARE		0xffff8000
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_START	0xffff8001
+#define SBI_SSE_EVENT_GLOBAL_RESERVED_3_END	0xffffbfff
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_START	0xffffc000
+#define SBI_SSE_EVENT_GLOBAL_PLAT_3_END		0xffffffff
+
+#define SBI_SSE_EVENT_PLATFORM_BIT	BIT(14)
+#define SBI_SSE_EVENT_GLOBAL_BIT	BIT(15)
+
 struct sbiret {
 	long error;
 	long value;

From patchwork Fri Mar 14 11:10:28 2025
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14016649
X-Gm-Message-State: AOJu0YzNmEeqDhm1Q1oow4syzXbGzdFYaP7tBzxh07p9JZR2JYrNO9lN 8p9UF7109ekNXb7PbsgQGyrnVkSHkGvloeftE0/EtOO6ja3EXbVE3Z30BvAQyyWCwl7mbS23nMY wrHY= X-Gm-Gg: ASbGnct3iHMJwlykbxE6Rvul9d/JODnzrdO/rLDcTkYlmRF+xzClRVYPjHfdGTAWtt5 Smzx9vc91YI2KbgK9tcjaij7SA6DJmKpT+7s+pCU6kPr3w2YxRJvZaNlp/saE7Ovh7VEYNp9GlA HeaIlZuJL26FTgvdQ1cLYx5atQAQN8jhba7lHaaWk901xX9jXQaZEOzagSzwF8/hzX7cGAz+W5W xeCvuR7PuWeMPCIrz3x6sBcUDSGuMLLDcAUYGfRfmUsHSJeMpoOTszuv5rUjZnmCglcL1uYSo/i aWyhZDl2+v0N+iGMHscn1ZtMtQDRtkx+hxzGvSkLLFZLyw== X-Google-Smtp-Source: AGHT+IGo6SAlZxa4z6sGi7M07KUuvbkbDanrr7AqJO8Xlsy7WNQbskeSPCd+Pb/WMhTg5I8GeXx1vQ== X-Received: by 2002:a05:6000:1889:b0:391:3988:1c97 with SMTP id ffacd0b85a97d-3971d522193mr2132766f8f.17.1741950641800; Fri, 14 Mar 2025 04:10:41 -0700 (PDT) Received: from carbon-x1.. ([2a01:e0a:e17:9700:16d2:7456:6634:9626]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-395cb3188e8sm5299203f8f.65.2025.03.14.04.10.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 14 Mar 2025 04:10:41 -0700 (PDT) From: =?utf-8?b?Q2zDqW1lbnQgTMOpZ2Vy?= To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org Cc: =?utf-8?b?Q2zDqW1lbnQgTMOpZ2Vy?= , Andrew Jones , Anup Patel , Atish Patra , Andrew Jones Subject: [kvm-unit-tests PATCH v9 5/6] lib: riscv: Add SBI SSE support Date: Fri, 14 Mar 2025 12:10:28 +0100 Message-ID: <20250314111030.3728671-6-cleger@rivosinc.com> X-Mailer: git-send-email 2.47.2 In-Reply-To: <20250314111030.3728671-1-cleger@rivosinc.com> References: <20250314111030.3728671-1-cleger@rivosinc.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add support for registering and handling SSE events. This will be used by sbi test as well as upcoming double trap tests. 
Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 riscv/Makefile          |   1 +
 lib/riscv/asm/csr.h     |   1 +
 lib/riscv/asm/sbi.h     |  36 ++++++++++++++
 lib/riscv/sbi-sse-asm.S | 102 ++++++++++++++++++++++++++++++++++++++++
 lib/riscv/asm-offsets.c |   9 ++++
 lib/riscv/sbi.c         |  76 ++++++++++++++++++++++++++++++
 6 files changed, 225 insertions(+)
 create mode 100644 lib/riscv/sbi-sse-asm.S

diff --git a/riscv/Makefile b/riscv/Makefile
index 02d2ac39..16fc125b 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -43,6 +43,7 @@ cflatobjs += lib/riscv/setup.o
 cflatobjs += lib/riscv/smp.o
 cflatobjs += lib/riscv/stack.o
 cflatobjs += lib/riscv/timer.o
+cflatobjs += lib/riscv/sbi-sse-asm.o
 ifeq ($(ARCH),riscv32)
 cflatobjs += lib/ldiv32.o
 endif

diff --git a/lib/riscv/asm/csr.h b/lib/riscv/asm/csr.h
index c7fc87a9..3e4b5fca 100644
--- a/lib/riscv/asm/csr.h
+++ b/lib/riscv/asm/csr.h
@@ -17,6 +17,7 @@
 #define CSR_TIME	0xc01
 
 #define SR_SIE		_AC(0x00000002, UL)
+#define SR_SPP		_AC(0x00000100, UL)
 
 /* Exception cause high bit - is an interrupt if set */
 #define CAUSE_IRQ_FLAG	(_AC(1, UL) << (__riscv_xlen - 1))

diff --git a/lib/riscv/asm/sbi.h b/lib/riscv/asm/sbi.h
index 780c9edd..e0efcee6 100644
--- a/lib/riscv/asm/sbi.h
+++ b/lib/riscv/asm/sbi.h
@@ -230,5 +230,41 @@ struct sbiret sbi_send_ipi_broadcast(void);
 struct sbiret sbi_set_timer(unsigned long stime_value);
 long sbi_probe(int ext);
 
+typedef void (*sbi_sse_handler_fn)(void *data, struct pt_regs *regs, unsigned int hartid);
+
+struct sbi_sse_handler_arg {
+	unsigned long reg_tmp;
+	sbi_sse_handler_fn handler;
+	void *handler_data;
+	void *stack;
+};
+
+extern void sbi_sse_entry(void);
+
+static inline bool sbi_sse_event_is_global(uint32_t event_id)
+{
+	return !!(event_id & SBI_SSE_EVENT_GLOBAL_BIT);
+}
+
+struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				     unsigned long attr_count, unsigned long phys_lo,
+				     unsigned long phys_hi);
+struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
+				 unsigned long attr_count, unsigned long *values);
+struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				      unsigned long attr_count, unsigned long phys_lo,
+				      unsigned long phys_hi);
+struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
+				  unsigned long attr_count, unsigned long *values);
+struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
+				   unsigned long entry_arg);
+struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg);
+struct sbiret sbi_sse_unregister(unsigned long event_id);
+struct sbiret sbi_sse_enable(unsigned long event_id);
+struct sbiret sbi_sse_disable(unsigned long event_id);
+struct sbiret sbi_sse_hart_mask(void);
+struct sbiret sbi_sse_hart_unmask(void);
+struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id);
+
 #endif /* !__ASSEMBLER__ */
 #endif /* _ASMRISCV_SBI_H_ */

diff --git a/lib/riscv/sbi-sse-asm.S b/lib/riscv/sbi-sse-asm.S
new file mode 100644
index 00000000..b9e951f5
--- /dev/null
+++ b/lib/riscv/sbi-sse-asm.S
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * RISC-V SSE events entry point.
+ *
+ * Copyright (C) 2025, Rivos Inc., Clément Léger
+ */
+#include
+#include
+#include
+#include
+
+.section .text
+.global sbi_sse_entry
+sbi_sse_entry:
+	/* Save stack temporarily */
+	REG_S sp, SBI_SSE_REG_TMP(a7)
+	/* Set entry stack */
+	REG_L sp, SBI_SSE_HANDLER_STACK(a7)
+
+	addi sp, sp, -(PT_SIZE)
+	REG_S ra, PT_RA(sp)
+	REG_S s0, PT_S0(sp)
+	REG_S s1, PT_S1(sp)
+	REG_S s2, PT_S2(sp)
+	REG_S s3, PT_S3(sp)
+	REG_S s4, PT_S4(sp)
+	REG_S s5, PT_S5(sp)
+	REG_S s6, PT_S6(sp)
+	REG_S s7, PT_S7(sp)
+	REG_S s8, PT_S8(sp)
+	REG_S s9, PT_S9(sp)
+	REG_S s10, PT_S10(sp)
+	REG_S s11, PT_S11(sp)
+	REG_S tp, PT_TP(sp)
+	REG_S t0, PT_T0(sp)
+	REG_S t1, PT_T1(sp)
+	REG_S t2, PT_T2(sp)
+	REG_S t3, PT_T3(sp)
+	REG_S t4, PT_T4(sp)
+	REG_S t5, PT_T5(sp)
+	REG_S t6, PT_T6(sp)
+	REG_S gp, PT_GP(sp)
+	REG_S a0, PT_A0(sp)
+	REG_S a1, PT_A1(sp)
+	REG_S a2, PT_A2(sp)
+	REG_S a3, PT_A3(sp)
+	REG_S a4, PT_A4(sp)
+	REG_S a5, PT_A5(sp)
+	csrr a1, CSR_SEPC
+	REG_S a1, PT_EPC(sp)
+	csrr a2, CSR_SSTATUS
+	REG_S a2, PT_STATUS(sp)
+
+	REG_L a0, SBI_SSE_REG_TMP(a7)
+	REG_S a0, PT_SP(sp)
+
+	REG_L t0, SBI_SSE_HANDLER(a7)
+	REG_L a0, SBI_SSE_HANDLER_DATA(a7)
+	mv a1, sp
+	mv a2, a6
+	jalr t0
+
+	REG_L a1, PT_EPC(sp)
+	REG_L a2, PT_STATUS(sp)
+	csrw CSR_SEPC, a1
+	csrw CSR_SSTATUS, a2
+
+	REG_L ra, PT_RA(sp)
+	REG_L s0, PT_S0(sp)
+	REG_L s1, PT_S1(sp)
+	REG_L s2, PT_S2(sp)
+	REG_L s3, PT_S3(sp)
+	REG_L s4, PT_S4(sp)
+	REG_L s5, PT_S5(sp)
+	REG_L s6, PT_S6(sp)
+	REG_L s7, PT_S7(sp)
+	REG_L s8, PT_S8(sp)
+	REG_L s9, PT_S9(sp)
+	REG_L s10, PT_S10(sp)
+	REG_L s11, PT_S11(sp)
+	REG_L tp, PT_TP(sp)
+	REG_L t0, PT_T0(sp)
+	REG_L t1, PT_T1(sp)
+	REG_L t2, PT_T2(sp)
+	REG_L t3, PT_T3(sp)
+	REG_L t4, PT_T4(sp)
+	REG_L t5, PT_T5(sp)
+	REG_L t6, PT_T6(sp)
+	REG_L gp, PT_GP(sp)
+	REG_L a0, PT_A0(sp)
+	REG_L a1, PT_A1(sp)
+	REG_L a2, PT_A2(sp)
+	REG_L a3, PT_A3(sp)
+	REG_L a4, PT_A4(sp)
+	REG_L a5, PT_A5(sp)
+
+	REG_L sp, PT_SP(sp)
+
+	li a7, ASM_SBI_EXT_SSE
+	li a6, ASM_SBI_EXT_SSE_COMPLETE
+	ecall
+
diff --git a/lib/riscv/asm-offsets.c b/lib/riscv/asm-offsets.c
index 6c511c14..a96c6e97 100644
--- a/lib/riscv/asm-offsets.c
+++ b/lib/riscv/asm-offsets.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 #include
 
 int main(void)
@@ -63,5 +64,13 @@ int main(void)
 	OFFSET(THREAD_INFO_HARTID, thread_info, hartid);
 	DEFINE(THREAD_INFO_SIZE, sizeof(struct thread_info));
 
+	DEFINE(ASM_SBI_EXT_SSE, SBI_EXT_SSE);
+	DEFINE(ASM_SBI_EXT_SSE_COMPLETE, SBI_EXT_SSE_COMPLETE);
+
+	OFFSET(SBI_SSE_REG_TMP, sbi_sse_handler_arg, reg_tmp);
+	OFFSET(SBI_SSE_HANDLER, sbi_sse_handler_arg, handler);
+	OFFSET(SBI_SSE_HANDLER_DATA, sbi_sse_handler_arg, handler_data);
+	OFFSET(SBI_SSE_HANDLER_STACK, sbi_sse_handler_arg, stack);
+
 	return 0;
 }

diff --git a/lib/riscv/sbi.c b/lib/riscv/sbi.c
index 02dd338c..ff33fe7b 100644
--- a/lib/riscv/sbi.c
+++ b/lib/riscv/sbi.c
@@ -2,6 +2,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -31,6 +32,81 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
 	return ret;
 }
 
+struct sbiret sbi_sse_read_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				     unsigned long attr_count, unsigned long phys_lo,
+				     unsigned long phys_hi)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_READ_ATTRS, event_id, base_attr_id, attr_count,
+			 phys_lo, phys_hi, 0);
+}
+
+struct sbiret sbi_sse_read_attrs(unsigned long event_id, unsigned long base_attr_id,
+				 unsigned long attr_count, unsigned long *values)
+{
+	phys_addr_t p = virt_to_phys(values);
+
+	return sbi_sse_read_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
+				      upper_32_bits(p));
+}
+
+struct sbiret sbi_sse_write_attrs_raw(unsigned long event_id, unsigned long base_attr_id,
+				      unsigned long attr_count, unsigned long phys_lo,
+				      unsigned long phys_hi)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_WRITE_ATTRS, event_id, base_attr_id, attr_count,
+			 phys_lo, phys_hi, 0);
+}
+
+struct sbiret sbi_sse_write_attrs(unsigned long event_id, unsigned long base_attr_id,
+				  unsigned long attr_count, unsigned long *values)
+{
+	phys_addr_t p = virt_to_phys(values);
+
+	return sbi_sse_write_attrs_raw(event_id, base_attr_id, attr_count, lower_32_bits(p),
+				       upper_32_bits(p));
+}
+
+struct sbiret sbi_sse_register_raw(unsigned long event_id, unsigned long entry_pc,
+				   unsigned long entry_arg)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_REGISTER, event_id, entry_pc, entry_arg, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_register(unsigned long event_id, struct sbi_sse_handler_arg *arg)
+{
+	return sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, (unsigned long)arg);
+}
+
+struct sbiret sbi_sse_unregister(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_UNREGISTER, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_enable(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_ENABLE, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_disable(unsigned long event_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_DISABLE, event_id, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_hart_mask(void)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_MASK, 0, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_hart_unmask(void)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_HART_UNMASK, 0, 0, 0, 0, 0, 0);
+}
+
+struct sbiret sbi_sse_inject(unsigned long event_id, unsigned long hart_id)
+{
+	return sbi_ecall(SBI_EXT_SSE, SBI_EXT_SSE_INJECT, event_id, hart_id, 0, 0, 0, 0);
+}
+
 void sbi_shutdown(void)
 {
 	sbi_ecall(SBI_EXT_SRST, 0, 0, 0, 0, 0, 0, 0);

From patchwork Fri Mar 14 11:10:29 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 14016650
From: Clément Léger
To: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org
Cc: Clément Léger, Andrew Jones, Anup Patel, Atish Patra
Subject: [kvm-unit-tests PATCH v9 6/6] riscv: sbi: Add SSE extension tests
Date: Fri, 14 Mar 2025 12:10:29 +0100
Message-ID: <20250314111030.3728671-7-cleger@rivosinc.com>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250314111030.3728671-1-cleger@rivosinc.com>
References: <20250314111030.3728671-1-cleger@rivosinc.com>
X-Mailing-List: kvm@vger.kernel.org

Add SBI SSE extension tests for the following features:
- Attribute errors (invalid values, read-only attributes, etc.)
- Registration errors
- Simple events (register, enable, inject)
- Events with different priorities
- Global event dispatch on different harts
- Local events on all harts
- Hart mask/unmask events

Signed-off-by: Clément Léger
Reviewed-by: Andrew Jones
---
 riscv/Makefile    |    1 +
 riscv/sbi-tests.h |    1 +
 riscv/sbi-sse.c   | 1263 +++++++++++++++++++++++++++++++++++++++++++++
 riscv/sbi.c       |    2 +
 4 files changed, 1267 insertions(+)
 create mode 100644 riscv/sbi-sse.c

diff --git a/riscv/Makefile b/riscv/Makefile
index 16fc125b..4fe2f1bb 100644
--- a/riscv/Makefile
+++ b/riscv/Makefile
@@ -18,6 +18,7 @@ tests += $(TEST_DIR)/sieve.$(exe)
 all: $(tests)
 
 $(TEST_DIR)/sbi-deps = $(TEST_DIR)/sbi-asm.o $(TEST_DIR)/sbi-fwft.o
+$(TEST_DIR)/sbi-deps += $(TEST_DIR)/sbi-sse.o
 
 # When built for EFI sieve needs extra memory, run with e.g.
 '-m 256' on QEMU
 $(TEST_DIR)/sieve.$(exe): AUXFLAGS = 0x1

diff --git a/riscv/sbi-tests.h b/riscv/sbi-tests.h
index b081464d..a71da809 100644
--- a/riscv/sbi-tests.h
+++ b/riscv/sbi-tests.h
@@ -71,6 +71,7 @@
 	sbiret_report(ret, expected_error, expected_value, "check sbi.error and sbi.value")
 
 void sbi_bad_fid(int ext);
+void check_sse(void);
 
 #endif /* __ASSEMBLER__ */
 #endif /* _RISCV_SBI_TESTS_H_ */

diff --git a/riscv/sbi-sse.c b/riscv/sbi-sse.c
new file mode 100644
index 00000000..2e4fb83d
--- /dev/null
+++ b/riscv/sbi-sse.c
@@ -0,0 +1,1263 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SBI SSE testsuite
+ *
+ * Copyright (C) 2025, Rivos Inc., Clément Léger
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "sbi-tests.h"
+
+#define SSE_STACK_SIZE PAGE_SIZE
+
+struct sse_event_info {
+	uint32_t event_id;
+	const char *name;
+	bool can_inject;
+};
+
+static struct sse_event_info sse_event_infos[] = {
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS,
+		.name = "local_high_prio_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP,
+		.name = "double_trap",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS,
+		.name = "global_high_prio_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW,
+		.name = "local_pmu_overflow",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS,
+		.name = "local_low_prio_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS,
+		.name = "global_low_prio_ras",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE,
+		.name = "local_software",
+	},
+	{
+		.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE,
+		.name = "global_software",
+	},
+};
+
+static const char *const attr_names[] = {
+	[SBI_SSE_ATTR_STATUS] = "status",
+	[SBI_SSE_ATTR_PRIORITY] = "priority",
+	[SBI_SSE_ATTR_CONFIG] = "config",
+	[SBI_SSE_ATTR_PREFERRED_HART] = "preferred_hart",
+	[SBI_SSE_ATTR_ENTRY_PC] = "entry_pc",
+	[SBI_SSE_ATTR_ENTRY_ARG] = "entry_arg",
+	[SBI_SSE_ATTR_INTERRUPTED_SEPC] = "interrupted_sepc",
+	[SBI_SSE_ATTR_INTERRUPTED_FLAGS] = "interrupted_flags",
+	[SBI_SSE_ATTR_INTERRUPTED_A6] = "interrupted_a6",
+	[SBI_SSE_ATTR_INTERRUPTED_A7] = "interrupted_a7",
+};
+
+static const unsigned long ro_attrs[] = {
+	SBI_SSE_ATTR_STATUS,
+	SBI_SSE_ATTR_ENTRY_PC,
+	SBI_SSE_ATTR_ENTRY_ARG,
+};
+
+static const unsigned long interrupted_attrs[] = {
+	SBI_SSE_ATTR_INTERRUPTED_SEPC,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS,
+	SBI_SSE_ATTR_INTERRUPTED_A6,
+	SBI_SSE_ATTR_INTERRUPTED_A7,
+};
+
+static const unsigned long interrupted_flags[] = {
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPP,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPIE,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SPELP,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPV,
+	SBI_SSE_ATTR_INTERRUPTED_FLAGS_HSTATUS_SPVP,
+};
+
+static struct sse_event_info *sse_event_get_info(uint32_t event_id)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
+		if (sse_event_infos[i].event_id == event_id)
+			return &sse_event_infos[i];
+	}
+
+	assert_msg(false, "Invalid event id: %d", event_id);
+}
+
+static const char *sse_event_name(uint32_t event_id)
+{
+	return sse_event_get_info(event_id)->name;
+}
+
+static bool sse_event_can_inject(uint32_t event_id)
+{
+	return sse_event_get_info(event_id)->can_inject;
+}
+
+static struct sbiret sse_get_event_status_field(uint32_t event_id, unsigned long mask,
+						unsigned long shift, unsigned long *value)
+{
+	struct sbiret ret;
+	unsigned long status;
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
+	if (ret.error) {
+		sbiret_report_error(&ret, SBI_SUCCESS, "Get event status");
+		return ret;
+	}
+
+	*value = (status & mask) >> shift;
+
+	return ret;
+}
+
+static struct sbiret sse_event_get_state(uint32_t event_id, enum sbi_sse_state *state)
+{
+	unsigned long status = 0;
+	struct sbiret ret;
+
+	ret = sse_get_event_status_field(event_id, SBI_SSE_ATTR_STATUS_STATE_MASK,
+					 SBI_SSE_ATTR_STATUS_STATE_OFFSET, &status);
+	*state = status;
+
+	return ret;
+}
+
+static unsigned long sse_global_event_set_current_hart(uint32_t event_id)
+{
+	struct sbiret ret;
+	unsigned long current_hart = current_thread_info()->hartid;
+
+	assert(sbi_sse_event_is_global(event_id));
+
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &current_hart);
+	if (sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart"))
+		return ret.error;
+
+	return 0;
+}
+
+static bool sse_check_state(uint32_t event_id, unsigned long expected_state)
+{
+	struct sbiret ret;
+	enum sbi_sse_state state;
+
+	ret = sse_event_get_state(event_id, &state);
+	if (ret.error)
+		return false;
+
+	return report(state == expected_state, "event status == %ld", expected_state);
+}
+
+static bool sse_event_pending(uint32_t event_id)
+{
+	bool pending = 0;
+
+	sse_get_event_status_field(event_id, BIT(SBI_SSE_ATTR_STATUS_PENDING_OFFSET),
+				   SBI_SSE_ATTR_STATUS_PENDING_OFFSET, (unsigned long *)&pending);
+
+	return pending;
+}
+
+static void *sse_alloc_stack(void)
+{
+	/*
+	 * We assume that SSE_STACK_SIZE always fits in one page. This page will
+	 * always be decremented before storing anything on it in sse-entry.S.
+	 */
+	assert(SSE_STACK_SIZE <= PAGE_SIZE);
+
+	return (alloc_page() + SSE_STACK_SIZE);
+}
+
+static void sse_free_stack(void *stack)
+{
+	free_page(stack - SSE_STACK_SIZE);
+}
+
+static void sse_read_write_test(uint32_t event_id, unsigned long attr, unsigned long attr_count,
+				unsigned long *value, long expected_error, const char *str)
+{
+	struct sbiret ret;
+
+	ret = sbi_sse_read_attrs(event_id, attr, attr_count, value);
+	sbiret_report_error(&ret, expected_error, "Read %s error", str);
+
+	ret = sbi_sse_write_attrs(event_id, attr, attr_count, value);
+	sbiret_report_error(&ret, expected_error, "Write %s error", str);
+}
+
+#define ALL_ATTRS_COUNT (SBI_SSE_ATTR_INTERRUPTED_A7 + 1)
+
+static void sse_test_attrs(uint32_t event_id)
+{
+	unsigned long value = 0;
+	struct sbiret ret;
+	void *ptr;
+	unsigned long values[ALL_ATTRS_COUNT];
+	unsigned int i;
+	const char *invalid_hart_str;
+	const char *attr_name;
+
+	report_prefix_push("attrs");
+
+	for (i = 0; i < ARRAY_SIZE(ro_attrs); i++) {
+		ret = sbi_sse_write_attrs(event_id, ro_attrs[i], 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_DENIED, "RO attribute %s not writable",
+				    attr_names[ro_attrs[i]]);
+	}
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, ALL_ATTRS_COUNT, values);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Read multiple attributes");
+
+	for (i = SBI_SSE_ATTR_STATUS; i <= SBI_SSE_ATTR_INTERRUPTED_A7; i++) {
+		ret = sbi_sse_read_attrs(event_id, i, 1, &value);
+		attr_name = attr_names[i];
+
+		sbiret_report_error(&ret, SBI_SUCCESS, "Read single attribute %s", attr_name);
+		if (values[i] != value)
+			report_fail("Attribute 0x%x single value read (0x%lx) differs from the one read with multiple attributes (0x%lx)",
+				    i, value, values[i]);
+		/*
+		 * Preferred hart reset value is defined by SBI vendor
+		 */
+		if (i != SBI_SSE_ATTR_PREFERRED_HART) {
+			/*
+			 * Specification states that injectable bit is implementation dependent
+			 * but other bits are zero-initialized.
+			 */
+			if (i == SBI_SSE_ATTR_STATUS)
+				value &= ~BIT(SBI_SSE_ATTR_STATUS_INJECT_OFFSET);
+			report(value == 0, "Attribute %s reset value is 0, found %lx", attr_name, value);
+		}
+	}
+
+#if __riscv_xlen > 32
+	value = BIT(32);
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid prio > 0xFFFFFFFF error");
+#endif
+
+	value = ~SBI_SSE_ATTR_CONFIG_ONESHOT;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Write invalid config value error");
+
+	if (sbi_sse_event_is_global(event_id)) {
+		invalid_hart_str = getenv("INVALID_HART_ID");
+		if (!invalid_hart_str)
+			value = 0xFFFFFFFFUL;
+		else
+			value = strtoul(invalid_hart_str, NULL, 0);
+
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid hart id error");
+	} else {
+		/* Set Hart on local event -> RO */
+		value = current_thread_info()->hartid;
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_DENIED,
+				    "Set hart id on local event error");
+	}
+
+	/* Set/get flags, sepc, a6, a7 */
+	for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) {
+		attr_name = attr_names[interrupted_attrs[i]];
+		ret = sbi_sse_read_attrs(event_id, interrupted_attrs[i], 1, &value);
+		sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted %s", attr_name);
+
+		value = ARRAY_SIZE(interrupted_attrs) - i;
+		ret = sbi_sse_write_attrs(event_id, interrupted_attrs[i], 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_INVALID_STATE,
+				    "Set attribute %s invalid state error", attr_name);
+	}
+
+	sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 0, &value, SBI_ERR_INVALID_PARAM,
+			    "attribute attr_count == 0");
+	sse_read_write_test(event_id, SBI_SSE_ATTR_INTERRUPTED_A7 + 1, 1, &value, SBI_ERR_BAD_RANGE,
+			    "invalid attribute");
+
+	/* Misaligned pointer
address */ + ptr = (void *)&value; + ptr += 1; + sse_read_write_test(event_id, SBI_SSE_ATTR_STATUS, 1, ptr, SBI_ERR_INVALID_ADDRESS, + "attribute with invalid address"); + + report_prefix_pop(); +} + +static void sse_test_register_error(uint32_t event_id) +{ + struct sbiret ret; + + report_prefix_push("register"); + + ret = sbi_sse_unregister(event_id); + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "unregister non-registered event"); + + ret = sbi_sse_register_raw(event_id, 0x1, 0); + sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "register misaligned entry"); + + ret = sbi_sse_register(event_id, NULL); + sbiret_report_error(&ret, SBI_SUCCESS, "register"); + if (ret.error) + goto done; + + ret = sbi_sse_register(event_id, NULL); + sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "register used event failure"); + + ret = sbi_sse_unregister(event_id); + sbiret_report_error(&ret, SBI_SUCCESS, "unregister"); + +done: + report_prefix_pop(); +} + +struct sse_simple_test_arg { + bool done; + unsigned long expected_a6; + uint32_t event_id; +}; + +#if __riscv_xlen > 32 + +struct alias_test_params { + unsigned long event_id; + unsigned long attr_id; + unsigned long attr_count; + const char *str; +}; + +static void test_alias(uint32_t event_id) +{ + struct alias_test_params *write, *read; + unsigned long write_value, read_value; + struct sbiret ret; + bool err = false; + int r, w; + struct alias_test_params params[] = { + {event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "non aliased"}, + {BIT(32) + event_id, SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased event_id"}, + {event_id, BIT(32) + SBI_SSE_ATTR_INTERRUPTED_A6, 1, "aliased attr_id"}, + {event_id, SBI_SSE_ATTR_INTERRUPTED_A6, BIT(32) + 1, "aliased attr_count"}, + }; + + report_prefix_push("alias"); + for (w = 0; w < ARRAY_SIZE(params); w++) { + write = ¶ms[w]; + + write_value = 0xDEADBEEF + w; + ret = sbi_sse_write_attrs(write->event_id, write->attr_id, write->attr_count, &write_value); + if (ret.error) + 
sbiret_report_error(&ret, SBI_SUCCESS, "Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx", + write->str, write->event_id, write->attr_id, write->attr_count); + + for (r = 0; r < ARRAY_SIZE(params); r++) { + read = ¶ms[r]; + read_value = 0; + ret = sbi_sse_read_attrs(read->event_id, read->attr_id, read->attr_count, &read_value); + if (ret.error) + sbiret_report_error(&ret, SBI_SUCCESS, + "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx", + read->str, read->event_id, read->attr_id, read->attr_count); + + /* Do not spam output with a lot of reports */ + if (write_value != read_value) { + err = true; + report_fail("Write %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx ==" + "Read %s, event 0x%lx attr 0x%lx, attr count 0x%lx value %lx", + write->str, write->event_id, write->attr_id, + write->attr_count, write_value, read->str, + read->event_id, read->attr_id, read->attr_count, + read_value); + } + } + } + + report(!err, "BIT(32) aliasing tests"); + report_prefix_pop(); +} +#endif + +static void sse_simple_handler(void *data, struct pt_regs *regs, unsigned int hartid) +{ + struct sse_simple_test_arg *arg = data; + int i; + struct sbiret ret; + const char *attr_name; + uint32_t event_id = READ_ONCE(arg->event_id), attr; + unsigned long value, prev_value, flags; + unsigned long interrupted_state[ARRAY_SIZE(interrupted_attrs)]; + unsigned long modified_state[ARRAY_SIZE(interrupted_attrs)] = {4, 3, 2, 1}; + unsigned long tmp_state[ARRAY_SIZE(interrupted_attrs)]; + + report((regs->status & SR_SPP) == SR_SPP, "Interrupted S-mode"); + report(hartid == current_thread_info()->hartid, "Hartid correctly passed"); + sse_check_state(event_id, SBI_SSE_STATE_RUNNING); + report(!sse_event_pending(event_id), "Event not pending"); + + /* Read full interrupted state */ + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, + ARRAY_SIZE(interrupted_attrs), interrupted_state); + sbiret_report_error(&ret, SBI_SUCCESS, "Save full interrupted state from handler"); + + /* 
Write full modified state and read it */ + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, + ARRAY_SIZE(modified_state), modified_state); + sbiret_report_error(&ret, SBI_SUCCESS, + "Write full interrupted state from handler"); + + ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, + ARRAY_SIZE(tmp_state), tmp_state); + sbiret_report_error(&ret, SBI_SUCCESS, "Read full modified state from handler"); + + report(memcmp(tmp_state, modified_state, sizeof(modified_state)) == 0, + "Full interrupted state successfully written"); + +#if __riscv_xlen > 32 + test_alias(event_id); +#endif + + /* Restore full saved state */ + ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_INTERRUPTED_SEPC, + ARRAY_SIZE(interrupted_attrs), interrupted_state); + sbiret_report_error(&ret, SBI_SUCCESS, "Full interrupted state restore from handler"); + + /* We test SBI_SSE_ATTR_INTERRUPTED_FLAGS below with specific flag values */ + for (i = 0; i < ARRAY_SIZE(interrupted_attrs); i++) { + attr = interrupted_attrs[i]; + if (attr == SBI_SSE_ATTR_INTERRUPTED_FLAGS) + continue; + + attr_name = attr_names[attr]; + + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value); + sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name); + + value = 0xDEADBEEF + i; + ret = sbi_sse_write_attrs(event_id, attr, 1, &value); + sbiret_report_error(&ret, SBI_SUCCESS, "Set attr %s", attr_name); + + ret = sbi_sse_read_attrs(event_id, attr, 1, &value); + sbiret_report_error(&ret, SBI_SUCCESS, "Get attr %s", attr_name); + report(value == 0xDEADBEEF + i, "Get attr %s, value: 0x%lx", attr_name, value); + + ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value); + sbiret_report_error(&ret, SBI_SUCCESS, "Restore attr %s value", attr_name); + } + + /* Test all flags allowed for SBI_SSE_ATTR_INTERRUPTED_FLAGS */ + attr = SBI_SSE_ATTR_INTERRUPTED_FLAGS; + ret = sbi_sse_read_attrs(event_id, attr, 1, &prev_value); + sbiret_report_error(&ret, SBI_SUCCESS, "Save interrupted flags"); + + 
+	for (i = 0; i < ARRAY_SIZE(interrupted_flags); i++) {
+		flags = interrupted_flags[i];
+		ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+		sbiret_report_error(&ret, SBI_SUCCESS,
+				    "Set interrupted flags bit 0x%lx value", flags);
+		ret = sbi_sse_read_attrs(event_id, attr, 1, &value);
+		sbiret_report_error(&ret, SBI_SUCCESS, "Get interrupted flags after set");
+		report(value == flags, "interrupted flags modified value: 0x%lx", value);
+	}
+
+	/* Write invalid bit in flag register */
+	flags = SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT << 1;
+	ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
+			    flags);
+
+	flags = BIT(SBI_SSE_ATTR_INTERRUPTED_FLAGS_SSTATUS_SDT + 1);
+	ret = sbi_sse_write_attrs(event_id, attr, 1, &flags);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM, "Set invalid flags bit 0x%lx value error",
+			    flags);
+
+	ret = sbi_sse_write_attrs(event_id, attr, 1, &prev_value);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Restore interrupted flags");
+
+	/* Try to change HARTID/Priority while running */
+	if (sbi_sse_event_is_global(event_id)) {
+		value = current_thread_info()->hartid;
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+		sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set hart id while running error");
+	}
+
+	value = 0;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_STATE, "Set priority while running error");
+
+	value = READ_ONCE(arg->expected_a6);
+	report(interrupted_state[2] == value, "Interrupted state a6, expected 0x%lx, got 0x%lx",
+	       value, interrupted_state[2]);
+
+	report(interrupted_state[3] == SBI_EXT_SSE,
+	       "Interrupted state a7, expected 0x%x, got 0x%lx", SBI_EXT_SSE,
+	       interrupted_state[3]);
+
+	WRITE_ONCE(arg->done, true);
+}
+
+static void sse_test_inject_simple(uint32_t event_id)
+{
+	unsigned long value, error;
+	struct sbiret ret;
+	enum sbi_sse_state state = SBI_SSE_STATE_UNUSED;
+	struct sse_simple_test_arg test_arg = {.event_id = event_id};
+	struct sbi_sse_handler_arg args = {
+		.handler = sse_simple_handler,
+		.handler_data = (void *)&test_arg,
+		.stack = sse_alloc_stack(),
+	};
+
+	report_prefix_push("simple");
+
+	if (!sse_check_state(event_id, SBI_SSE_STATE_UNUSED))
+		goto cleanup;
+
+	ret = sbi_sse_register(event_id, &args);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "register"))
+		goto cleanup;
+
+	state = SBI_SSE_STATE_REGISTERED;
+
+	if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
+		goto cleanup;
+
+	if (sbi_sse_event_is_global(event_id)) {
+		/* Be sure global events are targeting the current hart */
+		error = sse_global_event_set_current_hart(event_id);
+		if (error)
+			goto cleanup;
+	}
+
+	ret = sbi_sse_enable(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "enable"))
+		goto cleanup;
+
+	state = SBI_SSE_STATE_ENABLED;
+	if (!sse_check_state(event_id, SBI_SSE_STATE_ENABLED))
+		goto cleanup;
+
+	ret = sbi_sse_hart_mask();
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart mask"))
+		goto cleanup;
+
+	ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "injection masked")) {
+		sbi_sse_hart_unmask();
+		goto cleanup;
+	}
+
+	report(READ_ONCE(test_arg.done) == 0, "event masked not handled");
+
+	/*
+	 * When unmasking the SSE events, we expect the event to be injected
+	 * immediately, so a6 should be SBI_EXT_SSE_HART_UNMASK
+	 */
+	WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_HART_UNMASK);
+	ret = sbi_sse_hart_unmask();
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask"))
+		goto cleanup;
+
+	report(READ_ONCE(test_arg.done) == 1, "event unmasked handled");
+	WRITE_ONCE(test_arg.done, 0);
+	WRITE_ONCE(test_arg.expected_a6, SBI_EXT_SSE_INJECT);
+
+	/* Set as oneshot and verify it is disabled */
+	ret = sbi_sse_disable(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Disable event")) {
+		/* Nothing we can really do here, the event cannot be disabled */
+		goto cleanup;
+	}
+	state = SBI_SSE_STATE_REGISTERED;
+
+	value = SBI_SSE_ATTR_CONFIG_ONESHOT;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set event attribute as ONESHOT"))
+		goto cleanup;
+
+	ret = sbi_sse_enable(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event"))
+		goto cleanup;
+	state = SBI_SSE_STATE_ENABLED;
+
+	ret = sbi_sse_inject(event_id, current_thread_info()->hartid);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "second injection"))
+		goto cleanup;
+
+	report(READ_ONCE(test_arg.done) == 1, "event handled");
+	WRITE_ONCE(test_arg.done, 0);
+
+	if (!sse_check_state(event_id, SBI_SSE_STATE_REGISTERED))
+		goto cleanup;
+	state = SBI_SSE_STATE_REGISTERED;
+
+	/* Clear the ONESHOT flag */
+	value = 0;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_CONFIG, 1, &value);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Clear CONFIG.ONESHOT flag"))
+		goto cleanup;
+
+	ret = sbi_sse_unregister(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "unregister"))
+		goto cleanup;
+	state = SBI_SSE_STATE_UNUSED;
+
+	sse_check_state(event_id, SBI_SSE_STATE_UNUSED);
+
+cleanup:
+	switch (state) {
+	case SBI_SSE_STATE_ENABLED:
+		ret = sbi_sse_disable(event_id);
+		if (ret.error) {
+			sbiret_report_error(&ret, SBI_SUCCESS, "disable event 0x%x", event_id);
+			break;
+		}
+		/* fallthrough */
+	case SBI_SSE_STATE_REGISTERED:
+		ret = sbi_sse_unregister(event_id);
+		if (ret.error)
+			sbiret_report_error(&ret, SBI_SUCCESS, "unregister event 0x%x", event_id);
+		break;
+	default:
+		break;
+	}
+
+	sse_free_stack(args.stack);
+	report_prefix_pop();
+}
+
+struct sse_foreign_cpu_test_arg {
+	bool done;
+	unsigned int expected_cpu;
+	uint32_t event_id;
+};
+
+static void sse_foreign_cpu_handler(void *data, struct pt_regs *regs, unsigned int hartid)
+{
+	struct sse_foreign_cpu_test_arg *arg = data;
+	unsigned int expected_cpu;
+
+	/* For arg content to be visible */
+	smp_rmb();
+	expected_cpu = READ_ONCE(arg->expected_cpu);
+	report(expected_cpu == current_thread_info()->cpu,
+	       "Received event on CPU (%d), expected CPU (%d)", current_thread_info()->cpu,
+	       expected_cpu);
+
+	WRITE_ONCE(arg->done, true);
+	/* For arg update to be visible for other CPUs */
+	smp_wmb();
+}
+
+struct sse_local_per_cpu {
+	struct sbi_sse_handler_arg args;
+	struct sbiret ret;
+	struct sse_foreign_cpu_test_arg handler_arg;
+	enum sbi_sse_state state;
+};
+
+static void sse_register_enable_local(void *data)
+{
+	struct sbiret ret;
+	struct sse_local_per_cpu *cpu_args = data;
+	struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
+	uint32_t event_id = cpu_arg->handler_arg.event_id;
+
+	ret = sbi_sse_register(event_id, &cpu_arg->args);
+	WRITE_ONCE(cpu_arg->ret, ret);
+	if (ret.error)
+		return;
+	cpu_arg->state = SBI_SSE_STATE_REGISTERED;
+
+	ret = sbi_sse_enable(event_id);
+	WRITE_ONCE(cpu_arg->ret, ret);
+	if (ret.error)
+		return;
+	cpu_arg->state = SBI_SSE_STATE_ENABLED;
+}
+
+static void sbi_sse_disable_unregister_local(void *data)
+{
+	struct sbiret ret;
+	struct sse_local_per_cpu *cpu_args = data;
+	struct sse_local_per_cpu *cpu_arg = &cpu_args[current_thread_info()->cpu];
+	uint32_t event_id = cpu_arg->handler_arg.event_id;
+
+	switch (cpu_arg->state) {
+	case SBI_SSE_STATE_ENABLED:
+		ret = sbi_sse_disable(event_id);
+		WRITE_ONCE(cpu_arg->ret, ret);
+		if (ret.error)
+			return;
+		/* fallthrough */
+	case SBI_SSE_STATE_REGISTERED:
+		ret = sbi_sse_unregister(event_id);
+		WRITE_ONCE(cpu_arg->ret, ret);
+		break;
+	default:
+		break;
+	}
+}
+
+static uint64_t sse_event_get_complete_timeout(void)
+{
+	char *event_complete_timeout_str;
+	uint64_t timeout;
+
+	event_complete_timeout_str = getenv("SSE_EVENT_COMPLETE_TIMEOUT");
+	if (!event_complete_timeout_str)
+		timeout = 1000;
+	else
+		timeout = strtoul(event_complete_timeout_str, NULL, 0);
+
+	return timer_get_cycles() + usec_to_cycles(timeout);
+}
+
+static void sse_test_inject_local(uint32_t event_id)
+{
+	int cpu;
+	uint64_t timeout;
+	struct sbiret ret;
+	struct sse_local_per_cpu *cpu_args, *cpu_arg;
+	struct sse_foreign_cpu_test_arg *handler_arg;
+
+	cpu_args = calloc(NR_CPUS, sizeof(struct sse_local_per_cpu));
+
+	report_prefix_push("local_dispatch");
+	for_each_online_cpu(cpu) {
+		cpu_arg = &cpu_args[cpu];
+		cpu_arg->handler_arg.event_id = event_id;
+		cpu_arg->args.stack = sse_alloc_stack();
+		cpu_arg->args.handler = sse_foreign_cpu_handler;
+		cpu_arg->args.handler_data = (void *)&cpu_arg->handler_arg;
+		cpu_arg->state = SBI_SSE_STATE_UNUSED;
+	}
+
+	on_cpus(sse_register_enable_local, cpu_args);
+	for_each_online_cpu(cpu) {
+		cpu_arg = &cpu_args[cpu];
+		ret = cpu_arg->ret;
+		if (ret.error) {
+			report_fail("CPU failed to register/enable event: %ld", ret.error);
+			goto cleanup;
+		}
+
+		handler_arg = &cpu_arg->handler_arg;
+		WRITE_ONCE(handler_arg->expected_cpu, cpu);
+		/* For handler_arg content to be visible for other CPUs */
+		smp_wmb();
+		ret = sbi_sse_inject(event_id, cpus[cpu].hartid);
+		if (ret.error) {
+			report_fail("CPU failed to inject event: %ld", ret.error);
+			goto cleanup;
+		}
+	}
+
+	for_each_online_cpu(cpu) {
+		handler_arg = &cpu_args[cpu].handler_arg;
+		smp_rmb();
+
+		timeout = sse_event_get_complete_timeout();
+		while (!READ_ONCE(handler_arg->done) && timer_get_cycles() < timeout) {
+			/* For handler_arg update to be visible */
+			smp_rmb();
+			cpu_relax();
+		}
+		report(READ_ONCE(handler_arg->done), "Event handled");
+		WRITE_ONCE(handler_arg->done, false);
+	}
+
+cleanup:
+	on_cpus(sbi_sse_disable_unregister_local, cpu_args);
+	for_each_online_cpu(cpu) {
+		cpu_arg = &cpu_args[cpu];
+		ret = READ_ONCE(cpu_arg->ret);
+		if (ret.error)
+			report_fail("CPU failed to disable/unregister event: %ld", ret.error);
+	}
+
+	for_each_online_cpu(cpu) {
+		cpu_arg = &cpu_args[cpu];
+		sse_free_stack(cpu_arg->args.stack);
+	}
+
+	report_prefix_pop();
+}
+
+static void sse_test_inject_global_cpu(uint32_t event_id, unsigned int cpu,
+				       struct sse_foreign_cpu_test_arg *test_arg)
+{
+	unsigned long value;
+	struct sbiret ret;
+	uint64_t timeout;
+	enum sbi_sse_state state;
+
+	WRITE_ONCE(test_arg->expected_cpu, cpu);
+	/* For test_arg content to be visible for other CPUs */
+	smp_wmb();
+	value = cpu;
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PREFERRED_HART, 1, &value);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Set preferred hart"))
+		return;
+
+	ret = sbi_sse_enable(event_id);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Enable event"))
+		return;
+
+	ret = sbi_sse_inject(event_id, cpu);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Inject event"))
+		goto disable;
+
+	smp_rmb();
+	timeout = sse_event_get_complete_timeout();
+	while (!READ_ONCE(test_arg->done) && timer_get_cycles() < timeout) {
+		/* For shared test_arg structure */
+		smp_rmb();
+		cpu_relax();
+	}
+
+	report(READ_ONCE(test_arg->done), "event handler called");
+	WRITE_ONCE(test_arg->done, false);
+
+	/* Wait for the event to be back in ENABLED state */
+	timeout = sse_event_get_complete_timeout();
+	do {
+		ret = sse_event_get_state(event_id, &state);
+		if (ret.error)
+			goto disable;
+		cpu_relax();
+	} while (state != SBI_SSE_STATE_ENABLED && timer_get_cycles() < timeout);
+
+	report(state == SBI_SSE_STATE_ENABLED, "Event in enabled state");
+
+disable:
+	ret = sbi_sse_disable(event_id);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Disable event");
+}
+
+static void sse_test_inject_global(uint32_t event_id)
+{
+	struct sbiret ret;
+	unsigned int cpu;
+	struct sse_foreign_cpu_test_arg test_arg = {.event_id = event_id};
+	struct sbi_sse_handler_arg args = {
+		.handler = sse_foreign_cpu_handler,
+		.handler_data = (void *)&test_arg,
+		.stack = sse_alloc_stack(),
+	};
+
+	report_prefix_push("global_dispatch");
+
+	ret = sbi_sse_register(event_id, &args);
+	if (!sbiret_report_error(&ret, SBI_SUCCESS, "Register event"))
+		goto err;
+
+	for_each_online_cpu(cpu)
+		sse_test_inject_global_cpu(event_id, cpu, &test_arg);
+
+	ret = sbi_sse_unregister(event_id);
+	sbiret_report_error(&ret, SBI_SUCCESS, "Unregister event");
+
+err:
+	sse_free_stack(args.stack);
+	report_prefix_pop();
+}
+
+struct priority_test_arg {
+	uint32_t event_id;
+	bool called;
+	u32 prio;
+	enum sbi_sse_state state; /* Used for error handling */
+	struct priority_test_arg *next_event_arg;
+	void (*check_func)(struct priority_test_arg *arg);
+};
+
+static void sse_hi_priority_test_handler(void *arg, struct pt_regs *regs,
+					 unsigned int hartid)
+{
+	struct priority_test_arg *targ = arg;
+	struct priority_test_arg *next = targ->next_event_arg;
+
+	targ->called = true;
+	if (next) {
+		sbi_sse_inject(next->event_id, current_thread_info()->hartid);
+
+		report(!sse_event_pending(next->event_id), "Higher priority event is not pending");
+		report(next->called, "Higher priority event was handled");
+	}
+}
+
+static void sse_low_priority_test_handler(void *arg, struct pt_regs *regs,
+					  unsigned int hartid)
+{
+	struct priority_test_arg *targ = arg;
+	struct priority_test_arg *next = targ->next_event_arg;
+
+	targ->called = true;
+
+	if (next) {
+		sbi_sse_inject(next->event_id, current_thread_info()->hartid);
+
+		report(sse_event_pending(next->event_id), "Lower priority event is pending");
+		report(!next->called, "Lower priority event %s was not handled before %s",
+		       sse_event_name(next->event_id), sse_event_name(targ->event_id));
+	}
+}
+
+static void sse_test_injection_priority_arg(struct priority_test_arg *in_args,
+					    unsigned int in_args_size,
+					    sbi_sse_handler_fn handler,
+					    const char *test_name)
+{
+	unsigned int i;
+	unsigned long value, uret;
+	struct sbiret ret;
+	uint32_t event_id;
+	struct priority_test_arg *arg;
+	unsigned int args_size = 0;
+	struct sbi_sse_handler_arg event_args[in_args_size];
+	struct priority_test_arg *args[in_args_size];
+	void *stack;
+	struct sbi_sse_handler_arg *event_arg;
+
+	report_prefix_push(test_name);
+
+	for (i = 0; i < in_args_size; i++) {
+		arg = &in_args[i];
+		arg->state = SBI_SSE_STATE_UNUSED;
+		event_id = arg->event_id;
+		if (!sse_event_can_inject(event_id))
+			continue;
+
+		args[args_size] = arg;
+		event_args[args_size].stack = 0;
+		args_size++;
+	}
+
+	if (!args_size) {
+		report_skip("No injectable events");
+		goto skip;
+	}
+
+	for (i = 0; i < args_size; i++) {
+		arg = args[i];
+		event_id = arg->event_id;
+		stack = sse_alloc_stack();
+
+		event_arg = &event_args[i];
+		event_arg->handler = handler;
+		event_arg->handler_data = (void *)arg;
+		event_arg->stack = stack;
+
+		if (i < (args_size - 1))
+			arg->next_event_arg = args[i + 1];
+		else
+			arg->next_event_arg = NULL;
+
+		/* Be sure global events are targeting the current hart */
+		if (sbi_sse_event_is_global(event_id)) {
+			uret = sse_global_event_set_current_hart(event_id);
+			if (uret)
+				goto err;
+		}
+
+		ret = sbi_sse_register(event_id, event_arg);
+		if (ret.error) {
+			sbiret_report_error(&ret, SBI_SUCCESS, "register event %s",
+					    sse_event_name(event_id));
+			goto err;
+		}
+		arg->state = SBI_SSE_STATE_REGISTERED;
+
+		value = arg->prio;
+		ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+		if (ret.error) {
+			sbiret_report_error(&ret, SBI_SUCCESS, "set event %s priority",
+					    sse_event_name(event_id));
+			goto err;
+		}
+		ret = sbi_sse_enable(event_id);
+		if (ret.error) {
+			sbiret_report_error(&ret, SBI_SUCCESS, "enable event %s",
+					    sse_event_name(event_id));
+			goto err;
+		}
+		arg->state = SBI_SSE_STATE_ENABLED;
+	}
+
+	/* Inject first event */
+	ret = sbi_sse_inject(args[0]->event_id, current_thread_info()->hartid);
+	sbiret_report_error(&ret, SBI_SUCCESS, "injection");
+
+	/* Check that all handlers have been called */
+	for (i = 0; i < args_size; i++)
+		report(args[i]->called, "Event %s handler called", sse_event_name(args[i]->event_id));
+
+err:
+	for (i = 0; i < args_size; i++) {
+		arg = args[i];
+		event_id = arg->event_id;
+
+		switch (arg->state) {
+		case SBI_SSE_STATE_ENABLED:
+			ret = sbi_sse_disable(event_id);
+			if (ret.error) {
+				sbiret_report_error(&ret, SBI_SUCCESS, "disable event 0x%x",
+						    event_id);
+				break;
+			}
+			/* fallthrough */
+		case SBI_SSE_STATE_REGISTERED:
+			ret = sbi_sse_unregister(event_id);
+			if (ret.error)
+				sbiret_report_error(&ret, SBI_SUCCESS, "unregister event 0x%x",
+						    event_id);
+			break;
+		default:
+			break;
+		}
+
+		event_arg = &event_args[i];
+		if (event_arg->stack)
+			sse_free_stack(event_arg->stack);
+	}
+
+skip:
+	report_prefix_pop();
+}
+
+static struct priority_test_arg hi_prio_args[] = {
+	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
+	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
+	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
+};
+
+static struct priority_test_arg low_prio_args[] = {
+	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_DOUBLE_TRAP},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW},
+	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS},
+	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE},
+};
+
+static struct priority_test_arg prio_args[] = {
+	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 5},
+	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10},
+	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 12},
+	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 15},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 22},
+	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 25},
+};
+
+static struct priority_test_arg same_prio_args[] = {
+	{.event_id = SBI_SSE_EVENT_LOCAL_PMU_OVERFLOW, .prio = 0},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_LOW_PRIO_RAS, .prio = 0},
+	{.event_id = SBI_SSE_EVENT_LOCAL_HIGH_PRIO_RAS, .prio = 10},
+	{.event_id = SBI_SSE_EVENT_LOCAL_SOFTWARE, .prio = 10},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_SOFTWARE, .prio = 10},
+	{.event_id = SBI_SSE_EVENT_GLOBAL_HIGH_PRIO_RAS, .prio = 20},
+	{.event_id = SBI_SSE_EVENT_LOCAL_LOW_PRIO_RAS, .prio = 20},
+};
+
+static void sse_test_injection_priority(void)
+{
+	report_prefix_push("prio");
+
+	sse_test_injection_priority_arg(hi_prio_args, ARRAY_SIZE(hi_prio_args),
+					sse_hi_priority_test_handler, "high");
+
+	sse_test_injection_priority_arg(low_prio_args, ARRAY_SIZE(low_prio_args),
+					sse_low_priority_test_handler, "low");
+
+	sse_test_injection_priority_arg(prio_args, ARRAY_SIZE(prio_args),
+					sse_low_priority_test_handler, "changed");
+
+	sse_test_injection_priority_arg(same_prio_args, ARRAY_SIZE(same_prio_args),
+					sse_low_priority_test_handler, "same_prio");
+
+	report_prefix_pop();
+}
+
+static void test_invalid_event_id(unsigned long event_id)
+{
+	struct sbiret ret;
+	unsigned long value = 0;
+
+	ret = sbi_sse_register_raw(event_id, (unsigned long)sbi_sse_entry, 0);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "register event_id 0x%lx", event_id);
+
+	ret = sbi_sse_unregister(event_id);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "unregister event_id 0x%lx", event_id);
+
+	ret = sbi_sse_enable(event_id);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "enable event_id 0x%lx", event_id);
+
+	ret = sbi_sse_disable(event_id);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "disable event_id 0x%lx", event_id);
+
+	ret = sbi_sse_inject(event_id, 0);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "inject event_id 0x%lx", event_id);
+
+	ret = sbi_sse_write_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "write attr event_id 0x%lx", event_id);
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_PRIORITY, 1, &value);
+	sbiret_report_error(&ret, SBI_ERR_INVALID_PARAM,
+			    "read attr event_id 0x%lx", event_id);
+}
+
+static void sse_test_invalid_event_id(void)
+{
+	report_prefix_push("event_id");
+
+	test_invalid_event_id(SBI_SSE_EVENT_LOCAL_RESERVED_0_START);
+
+	report_prefix_pop();
+}
+
+static void sse_check_event_availability(uint32_t event_id, bool *can_inject, bool *supported)
+{
+	unsigned long status;
+	struct sbiret ret;
+
+	*can_inject = false;
+	*supported = false;
+
+	ret = sbi_sse_read_attrs(event_id, SBI_SSE_ATTR_STATUS, 1, &status);
+	if (ret.error != SBI_SUCCESS && ret.error != SBI_ERR_NOT_SUPPORTED) {
+		report_fail("Get event status != SBI_SUCCESS && != SBI_ERR_NOT_SUPPORTED: %ld",
+			    ret.error);
+		return;
+	}
+	if (ret.error == SBI_ERR_NOT_SUPPORTED)
+		return;
+
+	*supported = true;
+	*can_inject = (status >> SBI_SSE_ATTR_STATUS_INJECT_OFFSET) & 1;
+}
+
+static void sse_secondary_boot_and_unmask(void *data)
+{
+	sbi_sse_hart_unmask();
+}
+
+static void sse_check_mask(void)
+{
+	struct sbiret ret;
+
+	/* Upon boot, events are masked; check that */
+	ret = sbi_sse_hart_mask();
+	sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask at boot time");
+
+	ret = sbi_sse_hart_unmask();
+	sbiret_report_error(&ret, SBI_SUCCESS, "hart unmask");
+	ret = sbi_sse_hart_unmask();
+	sbiret_report_error(&ret, SBI_ERR_ALREADY_STARTED, "hart unmask twice error");
+
+	ret = sbi_sse_hart_mask();
+	sbiret_report_error(&ret, SBI_SUCCESS, "hart mask");
+	ret = sbi_sse_hart_mask();
+	sbiret_report_error(&ret, SBI_ERR_ALREADY_STOPPED, "hart mask twice");
+}
+
+static void run_inject_test(struct sse_event_info *info)
+{
+	unsigned long event_id = info->event_id;
+
+	if (!info->can_inject) {
+		report_skip("Event does not support injection, skipping injection tests");
+		return;
+	}
+
+	sse_test_inject_simple(event_id);
+
+	if (sbi_sse_event_is_global(event_id))
+		sse_test_inject_global(event_id);
+	else
+		sse_test_inject_local(event_id);
+}
+
+void check_sse(void)
+{
+	struct sse_event_info *info;
+	unsigned long i, event_id;
+	bool supported;
+
+	report_prefix_push("sse");
+
+	if (!sbi_probe(SBI_EXT_SSE)) {
+		report_skip("extension not available");
+		report_prefix_pop();
+		return;
+	}
+
+	sse_check_mask();
+
+	/*
+	 * Dummy wakeup of all processors since some of them will be targeted
+	 * by global events without going through the wakeup call, and unmask
+	 * SSE events on all harts.
+	 */
+	on_cpus(sse_secondary_boot_and_unmask, NULL);
+
+	sse_test_invalid_event_id();
+
+	for (i = 0; i < ARRAY_SIZE(sse_event_infos); i++) {
+		info = &sse_event_infos[i];
+		event_id = info->event_id;
+		report_prefix_push(info->name);
+		sse_check_event_availability(event_id, &info->can_inject, &supported);
+		if (!supported) {
+			report_skip("Event is not supported, skipping tests");
+			report_prefix_pop();
+			continue;
+		}
+
+		sse_test_attrs(event_id);
+		sse_test_register_error(event_id);
+
+		run_inject_test(info);
+
+		report_prefix_pop();
+	}
+
+	sse_test_injection_priority();
+
+	report_prefix_pop();
+}
diff --git a/riscv/sbi.c b/riscv/sbi.c
index 0404bb81..478cb35d 100644
--- a/riscv/sbi.c
+++ b/riscv/sbi.c
@@ -32,6 +32,7 @@
 
 #define HIGH_ADDR_BOUNDARY ((phys_addr_t)1 << 32)
 
+void check_sse(void);
 void check_fwft(void);
 
 static long __labs(long a)
@@ -1567,6 +1568,7 @@ int main(int argc, char **argv)
 	check_hsm();
 	check_dbcn();
 	check_susp();
+	check_sse();
 	check_fwft();
 
 	return report_summary();