From patchwork Sun Dec 31 15:27:33 2023
X-Patchwork-Submitter: Jerry Shih
X-Patchwork-Id: 13507237
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com,
    ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v4 01/11] RISC-V: add helper function to read the vector VLEN
Date: Sun, 31 Dec 2023 23:27:33 +0800
Message-Id: <20231231152743.6304-2-jerry.shih@sifive.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>
References: <20231231152743.6304-1-jerry.shih@sifive.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Heiko Stuebner

VLEN describes the length of each vector register and some instructions
need specific minimal VLENs to work correctly. The vector code already
includes a variable riscv_v_vsize that contains the value of "32 vector
registers with vlenb length" that gets filled during boot. vlenb is the
value contained in the CSR_VLENB register and the value represents
"VLEN / 8".

So add riscv_vector_vlen() to return the actual VLEN value for in-kernel
users when they need to check the available VLEN.

Signed-off-by: Heiko Stuebner
Reviewed-by: Eric Biggers
Signed-off-by: Jerry Shih
---
 arch/riscv/include/asm/vector.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/riscv/include/asm/vector.h b/arch/riscv/include/asm/vector.h
index d69844906d51..b04ee0a50315 100644
--- a/arch/riscv/include/asm/vector.h
+++ b/arch/riscv/include/asm/vector.h
@@ -294,4 +294,15 @@ static inline bool riscv_v_vstate_ctrl_user_allowed(void) { return false; }
 
 #endif /* CONFIG_RISCV_ISA_V */
 
+/*
+ * Return the implementation's vlen value.
+ *
+ * riscv_v_vsize contains the value of "32 vector registers with vlenb length"
+ * so rebuild the vlen value in bits from it.
+ */
+static inline int riscv_vector_vlen(void)
+{
+	return riscv_v_vsize / 32 * 8;
+}
+
 #endif /* ! __ASM_RISCV_VECTOR_H */

From patchwork Sun Dec 31 15:27:34 2023
X-Patchwork-Submitter: Jerry Shih
X-Patchwork-Id: 13507238
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com,
    ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v4 02/11] RISC-V: hook new crypto subdir into build-system
Date: Sun, 31 Dec 2023 23:27:34 +0800
Message-Id: <20231231152743.6304-3-jerry.shih@sifive.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>
References: <20231231152743.6304-1-jerry.shih@sifive.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Heiko Stuebner

Create a crypto subdirectory for added accelerated cryptography routines
and hook it into the riscv Kbuild and the main crypto Kconfig.
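[Editorial illustration, not part of the patch series: the modules that the
later patches place under the new arch/riscv/crypto/ directory follow roughly
the pattern sketched below, gating algorithm registration on the required
vector-crypto extension and on the VLEN reported by the riscv_vector_vlen()
helper from patch 01/11. The "example" names and the exact header choices are
assumptions; patch 04/11 contains the real aes-riscv64-glue.c.]

/*
 * Editorial sketch only -- the example_* names, the empty algorithm
 * definition and the header locations are illustrative assumptions.
 */
#include <asm/hwcap.h>
#include <asm/vector.h>
#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/module.h>

static struct crypto_alg example_alg = {
	/* cipher definition elided; see patch 04/11 for a complete one */
};

static int __init example_mod_init(void)
{
	/* Register only when Zvkned is present and VLEN is at least 128 bits. */
	if (riscv_isa_extension_available(NULL, ZVKNED) &&
	    riscv_vector_vlen() >= 128)
		return crypto_register_alg(&example_alg);

	return -ENODEV;
}

static void __exit example_mod_fini(void)
{
	crypto_unregister_alg(&example_alg);
}

module_init(example_mod_init);
module_exit(example_mod_fini);
MODULE_LICENSE("GPL");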
Signed-off-by: Heiko Stuebner Reviewed-by: Eric Biggers Signed-off-by: Jerry Shih --- arch/riscv/Kbuild | 1 + arch/riscv/crypto/Kconfig | 5 +++++ arch/riscv/crypto/Makefile | 4 ++++ crypto/Kconfig | 3 +++ 4 files changed, 13 insertions(+) create mode 100644 arch/riscv/crypto/Kconfig create mode 100644 arch/riscv/crypto/Makefile diff --git a/arch/riscv/Kbuild b/arch/riscv/Kbuild index d25ad1c19f88..2c585f7a0b6e 100644 --- a/arch/riscv/Kbuild +++ b/arch/riscv/Kbuild @@ -2,6 +2,7 @@ obj-y += kernel/ mm/ net/ obj-$(CONFIG_BUILTIN_DTB) += boot/dts/ +obj-$(CONFIG_CRYPTO) += crypto/ obj-y += errata/ obj-$(CONFIG_KVM) += kvm/ diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig new file mode 100644 index 000000000000..10d60edc0110 --- /dev/null +++ b/arch/riscv/crypto/Kconfig @@ -0,0 +1,5 @@ +# SPDX-License-Identifier: GPL-2.0 + +menu "Accelerated Cryptographic Algorithms for CPU (riscv)" + +endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile new file mode 100644 index 000000000000..b3b6332c9f6d --- /dev/null +++ b/arch/riscv/crypto/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# linux/arch/riscv/crypto/Makefile +# diff --git a/crypto/Kconfig b/crypto/Kconfig index 70661f58ee41..c8fd2b83e589 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1519,6 +1519,9 @@ endif if PPC source "arch/powerpc/crypto/Kconfig" endif +if RISCV +source "arch/riscv/crypto/Kconfig" +endif if S390 source "arch/s390/crypto/Kconfig" endif From patchwork Sun Dec 31 15:27:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507239 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mail-pl1-f171.google.com (mail-pl1-f171.google.com [209.85.214.171]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BA3AFBA2B for ; Sun, 31 Dec 2023 15:28:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="TGiNmJG7" Received: by mail-pl1-f171.google.com with SMTP id d9443c01a7336-1d3ea5cc137so63357165ad.0 for ; Sun, 31 Dec 2023 07:28:03 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036483; x=1704641283; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=XuCoCK+SW4zawUSFFv0erKPf4OwGQ3cihPvdb62nAAQ=; b=TGiNmJG7kgiO6iwMkVvaOG55cZTPy7o/ffaBqUuSV7X3owlmL2WcNqb2+mRm9eUGok CqeySdUs1AaREkBpiYR80ANy8w1bmGixtosjtWS32TzZzGHm1A0UNdGMU9x3MxTl8xSB yRTurWDUnV9Kf3n3tT6Sl5xvBgJdjCt/0HZ5FJYLZm/Rc8X/v9Eq5xmU9er4t4K4UcqJ 6dJJEMaZOqsxSk0AK14HPj82am5teiZNlt5v4RD+WP2l4myQavcS3DtGMqKi2544/F1i EEXLBU7v9dNk5R2PwmsGEswSq7B1pz6WPAsopj20rwUWSPgSeM05L9SOTK/wAMPl6WxQ QjAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036483; x=1704641283; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=XuCoCK+SW4zawUSFFv0erKPf4OwGQ3cihPvdb62nAAQ=; 
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com,
    ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v4 03/11] RISC-V: add TOOLCHAIN_HAS_VECTOR_CRYPTO in kconfig
Date: Sun, 31 Dec 2023 23:27:35 +0800
Message-Id: <20231231152743.6304-4-jerry.shih@sifive.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>
References: <20231231152743.6304-1-jerry.shih@sifive.com>
X-Mailing-List: linux-crypto@vger.kernel.org

LLVM main and binutils master now both fully support v1.0 of the RISC-V
vector crypto extensions. Check the assembler capability for using the
vector crypto asm mnemonics in the kernel.

Co-developed-by: Eric Biggers
Signed-off-by: Eric Biggers
Signed-off-by: Jerry Shih
---
 arch/riscv/Kconfig | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0a03d72706b5..8647392ece0b 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -636,6 +636,14 @@ config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
 	  versions of clang and GCC to be passed to GAS, which has the same result
 	  as passing zicsr and zifencei to -march.
 
+# This option indicates that the toolchain supports all v1.0 vector crypto
+# extensions, including Zvk*, Zvbb, and Zvbc. The LLVM added all of these at
+# once. The binutils added all except Zvkb, then added Zvkb. So we just check
+# for Zvkb.
+config TOOLCHAIN_HAS_VECTOR_CRYPTO
+	def_bool $(as-instr, .option arch$(comma) +zvkb)
+	depends on AS_HAS_OPTION_ARCH
+
 config FPU
 	bool "FPU support"
 	default y

From patchwork Sun Dec 31 15:27:36 2023
X-Patchwork-Submitter: Jerry Shih
X-Patchwork-Id: 13507240
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com,
    ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v4
04/11] RISC-V: crypto: add Zvkned accelerated AES implementation Date: Sun, 31 Dec 2023 23:27:36 +0800 Message-Id: <20231231152743.6304-5-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The AES implementation using the Zvkned vector crypto extension from OpenSSL(openssl/openssl#21923). Co-developed-by: Christoph Müllner Signed-off-by: Christoph Müllner Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Co-developed-by: Phoebe Chen Signed-off-by: Phoebe Chen Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Rename aes_setkey() to aes_setkey_zvkned(). - Rename riscv64_aes_setkey() to riscv64_aes_setkey_zvkned(). - Use aes generic software key expanding everywhere. - Remove rv64i_zvkned_set_encrypt_key(). We still need to provide the decryption expanding key for the SW fallback path which is not supported directly using zvkned extension. So, we turn to use the pure generic software key expanding everywhere to simplify the set_key flow. - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `AES_RISCV64` option by default. - Turn to use `crypto_aes_ctx` structure for aes key. - Use `Zvkned` extension for AES-128/256 key expanding. - Export riscv64_aes_* symbols for other modules. - Add `asmlinkage` qualifier for crypto asm function. - Reorder structure riscv64_aes_alg_zvkned members initialization in the order declared. --- arch/riscv/crypto/Kconfig | 11 + arch/riscv/crypto/Makefile | 11 + arch/riscv/crypto/aes-riscv64-glue.c | 137 +++++++ arch/riscv/crypto/aes-riscv64-glue.h | 18 + arch/riscv/crypto/aes-riscv64-zvkned.pl | 453 ++++++++++++++++++++++++ 5 files changed, 630 insertions(+) create mode 100644 arch/riscv/crypto/aes-riscv64-glue.c create mode 100644 arch/riscv/crypto/aes-riscv64-glue.h create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 10d60edc0110..2a7c365f2a86 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -2,4 +2,15 @@ menu "Accelerated Cryptographic Algorithms for CPU (riscv)" +config CRYPTO_AES_RISCV64 + tristate "Ciphers: AES" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_ALGAPI + select CRYPTO_LIB_AES + help + Block ciphers: AES cipher algorithms (FIPS-197) + + Architecture: riscv64 using: + - Zvkned vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index b3b6332c9f6d..90ca91d8df26 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -2,3 +2,14 @@ # # linux/arch/riscv/crypto/Makefile # + +obj-$(CONFIG_CRYPTO_AES_RISCV64) += aes-riscv64.o +aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o + +quiet_cmd_perlasm = PERLASM $@ + cmd_perlasm = $(PERL) $(<) void $(@) + +$(obj)/aes-riscv64-zvkned.S: $(src)/aes-riscv64-zvkned.pl + $(call cmd,perlasm) + +clean-files += aes-riscv64-zvkned.S diff --git a/arch/riscv/crypto/aes-riscv64-glue.c b/arch/riscv/crypto/aes-riscv64-glue.c new file mode 100644 index 000000000000..f29898c25652 --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-glue.c @@ -0,0 +1,137 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Port of the OpenSSL AES 
implementation for RISC-V + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "aes-riscv64-glue.h" + +/* aes cipher using zvkned vector crypto extension */ +asmlinkage void rv64i_zvkned_encrypt(const u8 *in, u8 *out, + const struct crypto_aes_ctx *key); +asmlinkage void rv64i_zvkned_decrypt(const u8 *in, u8 *out, + const struct crypto_aes_ctx *key); + +int riscv64_aes_setkey_zvkned(struct crypto_aes_ctx *ctx, const u8 *key, + unsigned int keylen) +{ + int ret; + + ret = aes_check_keylen(keylen); + if (ret < 0) + return -EINVAL; + + /* + * The RISC-V AES vector crypto key expanding doesn't support AES-192. + * So, we use the generic software key expanding here for all cases. + */ + return aes_expandkey(ctx, key, keylen); +} +EXPORT_SYMBOL(riscv64_aes_setkey_zvkned); + +void riscv64_aes_encrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst, + const u8 *src) +{ + if (crypto_simd_usable()) { + kernel_vector_begin(); + rv64i_zvkned_encrypt(src, dst, ctx); + kernel_vector_end(); + } else { + aes_encrypt(ctx, dst, src); + } +} +EXPORT_SYMBOL(riscv64_aes_encrypt_zvkned); + +void riscv64_aes_decrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst, + const u8 *src) +{ + if (crypto_simd_usable()) { + kernel_vector_begin(); + rv64i_zvkned_decrypt(src, dst, ctx); + kernel_vector_end(); + } else { + aes_decrypt(ctx, dst, src); + } +} +EXPORT_SYMBOL(riscv64_aes_decrypt_zvkned); + +static int aes_setkey_zvkned(struct crypto_tfm *tfm, const u8 *key, + unsigned int keylen) +{ + struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + + return riscv64_aes_setkey_zvkned(ctx, key, keylen); +} + +static void aes_encrypt_zvkned(struct crypto_tfm *tfm, u8 *dst, const u8 *src) +{ + const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + + riscv64_aes_encrypt_zvkned(ctx, dst, src); +} + +static void aes_decrypt_zvkned(struct crypto_tfm *tfm, u8 *dst, const u8 *src) +{ + const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm); + + riscv64_aes_decrypt_zvkned(ctx, dst, src); +} + +static struct crypto_alg riscv64_aes_alg_zvkned = { + .cra_flags = CRYPTO_ALG_TYPE_CIPHER, + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_priority = 300, + .cra_name = "aes", + .cra_driver_name = "aes-riscv64-zvkned", + .cra_cipher = { + .cia_min_keysize = AES_MIN_KEY_SIZE, + .cia_max_keysize = AES_MAX_KEY_SIZE, + .cia_setkey = aes_setkey_zvkned, + .cia_encrypt = aes_encrypt_zvkned, + .cia_decrypt = aes_decrypt_zvkned, + }, + .cra_module = THIS_MODULE, +}; + +static inline bool check_aes_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKNED) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_aes_mod_init(void) +{ + if (check_aes_ext()) + return crypto_register_alg(&riscv64_aes_alg_zvkned); + + return -ENODEV; +} + +static void __exit riscv64_aes_mod_fini(void) +{ + crypto_unregister_alg(&riscv64_aes_alg_zvkned); +} + +module_init(riscv64_aes_mod_init); +module_exit(riscv64_aes_mod_fini); + +MODULE_DESCRIPTION("AES (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("aes"); diff --git a/arch/riscv/crypto/aes-riscv64-glue.h b/arch/riscv/crypto/aes-riscv64-glue.h new file mode 100644 index 000000000000..2b544125091e --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-glue.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef 
AES_RISCV64_GLUE_H +#define AES_RISCV64_GLUE_H + +#include +#include + +int riscv64_aes_setkey_zvkned(struct crypto_aes_ctx *ctx, const u8 *key, + unsigned int keylen); + +void riscv64_aes_encrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst, + const u8 *src); + +void riscv64_aes_decrypt_zvkned(const struct crypto_aes_ctx *ctx, u8 *dst, + const u8 *src); + +#endif /* AES_RISCV64_GLUE_H */ diff --git a/arch/riscv/crypto/aes-riscv64-zvkned.pl b/arch/riscv/crypto/aes-riscv64-zvkned.pl new file mode 100644 index 000000000000..583e87912e5d --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-zvkned.pl @@ -0,0 +1,453 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Phoebe Chen +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector AES block cipher extension ('Zvkned') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? 
shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvkned +___ + +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +{ +################################################################################ +# void rv64i_zvkned_encrypt(const unsigned char *in, unsigned char *out, +# const AES_KEY *key); +my ($INP, $OUTP, $KEYP) = ("a0", "a1", "a2"); +my ($T0) = ("t0"); +my ($KEY_LEN) = ("a3"); + +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_encrypt +.type rv64i_zvkned_encrypt,\@function +rv64i_zvkned_encrypt: + # Load key length. + lwu $KEY_LEN, 480($KEYP) + + # Get proper routine for key length. + li $T0, 32 + beq $KEY_LEN, $T0, L_enc_256 + li $T0, 24 + beq $KEY_LEN, $T0, L_enc_192 + li $T0, 16 + beq $KEY_LEN, $T0, L_enc_128 + + j L_fail_m2 +.size rv64i_zvkned_encrypt,.-rv64i_zvkned_encrypt +___ + +$code .= <<___; +.p2align 3 +L_enc_128: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + vle32.v $V10, ($KEYP) + vaesz.vs $V1, $V10 # with round key w[ 0, 3] + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + vaesem.vs $V1, $V11 # with round key w[ 4, 7] + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + vaesem.vs $V1, $V12 # with round key w[ 8,11] + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + vaesem.vs $V1, $V13 # with round key w[12,15] + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + vaesem.vs $V1, $V14 # with round key w[16,19] + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) + vaesem.vs $V1, $V15 # with round key w[20,23] + addi $KEYP, $KEYP, 16 + vle32.v $V16, ($KEYP) + vaesem.vs $V1, $V16 # with round key w[24,27] + addi $KEYP, $KEYP, 16 + vle32.v $V17, ($KEYP) + vaesem.vs $V1, $V17 # with round key w[28,31] + addi $KEYP, $KEYP, 16 + vle32.v $V18, ($KEYP) + vaesem.vs $V1, $V18 # with round key w[32,35] + addi $KEYP, $KEYP, 16 + vle32.v $V19, ($KEYP) + vaesem.vs $V1, $V19 # with round key w[36,39] + addi $KEYP, $KEYP, 16 + vle32.v $V20, ($KEYP) + vaesef.vs $V1, $V20 # with round key w[40,43] + + vse32.v $V1, ($OUTP) + + ret +.size L_enc_128,.-L_enc_128 +___ + +$code .= <<___; +.p2align 3 +L_enc_192: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + vle32.v $V10, ($KEYP) + vaesz.vs $V1, $V10 + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + vaesem.vs $V1, $V11 + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + vaesem.vs $V1, $V12 + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + vaesem.vs $V1, $V13 + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + vaesem.vs $V1, $V14 + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) + vaesem.vs $V1, $V15 + addi $KEYP, $KEYP, 16 + vle32.v $V16, ($KEYP) + vaesem.vs $V1, $V16 + addi $KEYP, $KEYP, 16 + vle32.v $V17, ($KEYP) + vaesem.vs $V1, $V17 + addi $KEYP, $KEYP, 16 + vle32.v $V18, ($KEYP) + vaesem.vs $V1, $V18 + addi $KEYP, $KEYP, 16 + vle32.v $V19, ($KEYP) + vaesem.vs $V1, $V19 + addi $KEYP, $KEYP, 16 + vle32.v $V20, ($KEYP) + vaesem.vs $V1, $V20 + addi $KEYP, $KEYP, 16 + vle32.v $V21, ($KEYP) + vaesem.vs $V1, $V21 + addi $KEYP, $KEYP, 16 + vle32.v $V22, ($KEYP) + vaesef.vs $V1, $V22 + + vse32.v $V1, ($OUTP) + ret +.size L_enc_192,.-L_enc_192 +___ + +$code .= <<___; +.p2align 3 +L_enc_256: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + vle32.v $V10, ($KEYP) + vaesz.vs $V1, $V10 + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + vaesem.vs $V1, $V11 + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + 
vaesem.vs $V1, $V12 + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + vaesem.vs $V1, $V13 + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + vaesem.vs $V1, $V14 + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) + vaesem.vs $V1, $V15 + addi $KEYP, $KEYP, 16 + vle32.v $V16, ($KEYP) + vaesem.vs $V1, $V16 + addi $KEYP, $KEYP, 16 + vle32.v $V17, ($KEYP) + vaesem.vs $V1, $V17 + addi $KEYP, $KEYP, 16 + vle32.v $V18, ($KEYP) + vaesem.vs $V1, $V18 + addi $KEYP, $KEYP, 16 + vle32.v $V19, ($KEYP) + vaesem.vs $V1, $V19 + addi $KEYP, $KEYP, 16 + vle32.v $V20, ($KEYP) + vaesem.vs $V1, $V20 + addi $KEYP, $KEYP, 16 + vle32.v $V21, ($KEYP) + vaesem.vs $V1, $V21 + addi $KEYP, $KEYP, 16 + vle32.v $V22, ($KEYP) + vaesem.vs $V1, $V22 + addi $KEYP, $KEYP, 16 + vle32.v $V23, ($KEYP) + vaesem.vs $V1, $V23 + addi $KEYP, $KEYP, 16 + vle32.v $V24, ($KEYP) + vaesef.vs $V1, $V24 + + vse32.v $V1, ($OUTP) + ret +.size L_enc_256,.-L_enc_256 +___ + +################################################################################ +# void rv64i_zvkned_decrypt(const unsigned char *in, unsigned char *out, +# const AES_KEY *key); +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_decrypt +.type rv64i_zvkned_decrypt,\@function +rv64i_zvkned_decrypt: + # Load key length. + lwu $KEY_LEN, 480($KEYP) + + # Get proper routine for key length. + li $T0, 32 + beq $KEY_LEN, $T0, L_dec_256 + li $T0, 24 + beq $KEY_LEN, $T0, L_dec_192 + li $T0, 16 + beq $KEY_LEN, $T0, L_dec_128 + + j L_fail_m2 +.size rv64i_zvkned_decrypt,.-rv64i_zvkned_decrypt +___ + +$code .= <<___; +.p2align 3 +L_dec_128: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + addi $KEYP, $KEYP, 160 + vle32.v $V20, ($KEYP) + vaesz.vs $V1, $V20 # with round key w[40,43] + addi $KEYP, $KEYP, -16 + vle32.v $V19, ($KEYP) + vaesdm.vs $V1, $V19 # with round key w[36,39] + addi $KEYP, $KEYP, -16 + vle32.v $V18, ($KEYP) + vaesdm.vs $V1, $V18 # with round key w[32,35] + addi $KEYP, $KEYP, -16 + vle32.v $V17, ($KEYP) + vaesdm.vs $V1, $V17 # with round key w[28,31] + addi $KEYP, $KEYP, -16 + vle32.v $V16, ($KEYP) + vaesdm.vs $V1, $V16 # with round key w[24,27] + addi $KEYP, $KEYP, -16 + vle32.v $V15, ($KEYP) + vaesdm.vs $V1, $V15 # with round key w[20,23] + addi $KEYP, $KEYP, -16 + vle32.v $V14, ($KEYP) + vaesdm.vs $V1, $V14 # with round key w[16,19] + addi $KEYP, $KEYP, -16 + vle32.v $V13, ($KEYP) + vaesdm.vs $V1, $V13 # with round key w[12,15] + addi $KEYP, $KEYP, -16 + vle32.v $V12, ($KEYP) + vaesdm.vs $V1, $V12 # with round key w[ 8,11] + addi $KEYP, $KEYP, -16 + vle32.v $V11, ($KEYP) + vaesdm.vs $V1, $V11 # with round key w[ 4, 7] + addi $KEYP, $KEYP, -16 + vle32.v $V10, ($KEYP) + vaesdf.vs $V1, $V10 # with round key w[ 0, 3] + + vse32.v $V1, ($OUTP) + + ret +.size L_dec_128,.-L_dec_128 +___ + +$code .= <<___; +.p2align 3 +L_dec_192: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + addi $KEYP, $KEYP, 192 + vle32.v $V22, ($KEYP) + vaesz.vs $V1, $V22 # with round key w[48,51] + addi $KEYP, $KEYP, -16 + vle32.v $V21, ($KEYP) + vaesdm.vs $V1, $V21 # with round key w[44,47] + addi $KEYP, $KEYP, -16 + vle32.v $V20, ($KEYP) + vaesdm.vs $V1, $V20 # with round key w[40,43] + addi $KEYP, $KEYP, -16 + vle32.v $V19, ($KEYP) + vaesdm.vs $V1, $V19 # with round key w[36,39] + addi $KEYP, $KEYP, -16 + vle32.v $V18, ($KEYP) + vaesdm.vs $V1, $V18 # with round key w[32,35] + addi $KEYP, $KEYP, -16 + vle32.v $V17, ($KEYP) + vaesdm.vs $V1, $V17 # with round key w[28,31] + addi $KEYP, $KEYP, -16 + vle32.v $V16, ($KEYP) + vaesdm.vs $V1, $V16 # with round key w[24,27] + addi $KEYP, 
$KEYP, -16 + vle32.v $V15, ($KEYP) + vaesdm.vs $V1, $V15 # with round key w[20,23] + addi $KEYP, $KEYP, -16 + vle32.v $V14, ($KEYP) + vaesdm.vs $V1, $V14 # with round key w[16,19] + addi $KEYP, $KEYP, -16 + vle32.v $V13, ($KEYP) + vaesdm.vs $V1, $V13 # with round key w[12,15] + addi $KEYP, $KEYP, -16 + vle32.v $V12, ($KEYP) + vaesdm.vs $V1, $V12 # with round key w[ 8,11] + addi $KEYP, $KEYP, -16 + vle32.v $V11, ($KEYP) + vaesdm.vs $V1, $V11 # with round key w[ 4, 7] + addi $KEYP, $KEYP, -16 + vle32.v $V10, ($KEYP) + vaesdf.vs $V1, $V10 # with round key w[ 0, 3] + + vse32.v $V1, ($OUTP) + + ret +.size L_dec_192,.-L_dec_192 +___ + +$code .= <<___; +.p2align 3 +L_dec_256: + vsetivli zero, 4, e32, m1, ta, ma + + vle32.v $V1, ($INP) + + addi $KEYP, $KEYP, 224 + vle32.v $V24, ($KEYP) + vaesz.vs $V1, $V24 # with round key w[56,59] + addi $KEYP, $KEYP, -16 + vle32.v $V23, ($KEYP) + vaesdm.vs $V1, $V23 # with round key w[52,55] + addi $KEYP, $KEYP, -16 + vle32.v $V22, ($KEYP) + vaesdm.vs $V1, $V22 # with round key w[48,51] + addi $KEYP, $KEYP, -16 + vle32.v $V21, ($KEYP) + vaesdm.vs $V1, $V21 # with round key w[44,47] + addi $KEYP, $KEYP, -16 + vle32.v $V20, ($KEYP) + vaesdm.vs $V1, $V20 # with round key w[40,43] + addi $KEYP, $KEYP, -16 + vle32.v $V19, ($KEYP) + vaesdm.vs $V1, $V19 # with round key w[36,39] + addi $KEYP, $KEYP, -16 + vle32.v $V18, ($KEYP) + vaesdm.vs $V1, $V18 # with round key w[32,35] + addi $KEYP, $KEYP, -16 + vle32.v $V17, ($KEYP) + vaesdm.vs $V1, $V17 # with round key w[28,31] + addi $KEYP, $KEYP, -16 + vle32.v $V16, ($KEYP) + vaesdm.vs $V1, $V16 # with round key w[24,27] + addi $KEYP, $KEYP, -16 + vle32.v $V15, ($KEYP) + vaesdm.vs $V1, $V15 # with round key w[20,23] + addi $KEYP, $KEYP, -16 + vle32.v $V14, ($KEYP) + vaesdm.vs $V1, $V14 # with round key w[16,19] + addi $KEYP, $KEYP, -16 + vle32.v $V13, ($KEYP) + vaesdm.vs $V1, $V13 # with round key w[12,15] + addi $KEYP, $KEYP, -16 + vle32.v $V12, ($KEYP) + vaesdm.vs $V1, $V12 # with round key w[ 8,11] + addi $KEYP, $KEYP, -16 + vle32.v $V11, ($KEYP) + vaesdm.vs $V1, $V11 # with round key w[ 4, 7] + addi $KEYP, $KEYP, -16 + vle32.v $V10, ($KEYP) + vaesdf.vs $V1, $V10 # with round key w[ 0, 3] + + vse32.v $V1, ($OUTP) + + ret +.size L_dec_256,.-L_dec_256 +___ +} + +$code .= <<___; +L_fail_m1: + li a0, -1 + ret +.size L_fail_m1,.-L_fail_m1 + +L_fail_m2: + li a0, -2 + ret +.size L_fail_m2,.-L_fail_m2 + +L_end: + ret +.size L_end,.-L_end +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507241 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mail-pl1-f175.google.com (mail-pl1-f175.google.com [209.85.214.175]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 24BF363A7 for ; Sun, 31 Dec 2023 15:28:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="HwG9Kk17" Received: by mail-pl1-f175.google.com with SMTP id d9443c01a7336-1d427518d52so41799535ad.0 for ; Sun, 31 Dec 2023 07:28:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
From: Jerry Shih
To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com,
    ebiggers@kernel.org, ardb@kernel.org
Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v4 05/11] RISC-V: crypto: add accelerated AES-CBC/CTR/ECB/XTS implementations
Date: Sun, 31 Dec 2023 23:27:37 +0800
Message-Id: <20231231152743.6304-6-jerry.shih@sifive.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com>
References: <20231231152743.6304-1-jerry.shih@sifive.com>
X-Mailing-List: linux-crypto@vger.kernel.org

Port the vector-crypto accelerated CBC, CTR, ECB and XTS block modes for
the AES cipher from OpenSSL (openssl/openssl#21923). In addition, support
the XTS-AES-192 mode, which does not exist in OpenSSL.

Co-developed-by: Phoebe Chen
Signed-off-by: Phoebe Chen
Signed-off-by: Jerry Shih
---
Changelog v4:
- Use asm mnemonics for the instructions in vector crypto 1.0 extension.
- Revert the usage of simd skcipher.
- Get `walksize` from `crypto_skcipher_alg()`.
Changelog v3:
- Update extension checking conditions in riscv64_aes_block_mod_init().
- Add `riscv64` prefix for all setkey, encrypt and decrypt functions.
- Update xts_crypt() implementation. Use a similar approach to x86's
  aes-xts implementation.
- Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `AES_BLOCK_RISCV64` option by default. - Update asm function for using aes key in `crypto_aes_ctx` structure. - Turn to use simd skcipher interface for AES-CBC/CTR/ECB/XTS modes. We still have lots of discussions for kernel-vector implementation. Before the final version of kernel-vector, use simd skcipher interface to skip the fallback path for all aes modes in all kinds of contexts. If we could always enable kernel-vector in softirq in the future, we could make the original sync skcipher algorithm back. - Refine aes-xts comments for head and tail blocks handling. - Update VLEN constraint for aex-xts mode. - Add `asmlinkage` qualifier for crypto asm function. - Rename aes-riscv64-zvbb-zvkg-zvkned to aes-riscv64-zvkned-zvbb-zvkg. - Rename aes-riscv64-zvkb-zvkned to aes-riscv64-zvkned-zvkb. - Reorder structure riscv64_aes_algs_zvkned, riscv64_aes_alg_zvkned_zvkb and riscv64_aes_alg_zvkned_zvbb_zvkg members initialization in the order declared. --- arch/riscv/crypto/Kconfig | 21 + arch/riscv/crypto/Makefile | 11 + .../crypto/aes-riscv64-block-mode-glue.c | 459 +++++++++ .../crypto/aes-riscv64-zvkned-zvbb-zvkg.pl | 949 ++++++++++++++++++ arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl | 415 ++++++++ arch/riscv/crypto/aes-riscv64-zvkned.pl | 746 ++++++++++++++ 6 files changed, 2601 insertions(+) create mode 100644 arch/riscv/crypto/aes-riscv64-block-mode-glue.c create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl create mode 100644 arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 2a7c365f2a86..2cee0f68f0c7 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -13,4 +13,25 @@ config CRYPTO_AES_RISCV64 Architecture: riscv64 using: - Zvkned vector crypto extension +config CRYPTO_AES_BLOCK_RISCV64 + tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_AES_RISCV64 + select CRYPTO_SIMD + select CRYPTO_SKCIPHER + help + Length-preserving ciphers: AES cipher algorithms (FIPS-197) + with block cipher modes: + - ECB (Electronic Codebook) mode (NIST SP 800-38A) + - CBC (Cipher Block Chaining) mode (NIST SP 800-38A) + - CTR (Counter) mode (NIST SP 800-38A) + - XTS (XOR Encrypt XOR Tweakable Block Cipher with Ciphertext + Stealing) mode (NIST SP 800-38E and IEEE 1619) + + Architecture: riscv64 using: + - Zvkned vector crypto extension + - Zvbb vector extension (XTS) + - Zvkb vector crypto extension (CTR/XTS) + - Zvkg vector crypto extension (XTS) + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 90ca91d8df26..9574b009762f 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -6,10 +6,21 @@ obj-$(CONFIG_CRYPTO_AES_RISCV64) += aes-riscv64.o aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o +obj-$(CONFIG_CRYPTO_AES_BLOCK_RISCV64) += aes-block-riscv64.o +aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) $(obj)/aes-riscv64-zvkned.S: $(src)/aes-riscv64-zvkned.pl $(call cmd,perlasm) +$(obj)/aes-riscv64-zvkned-zvbb-zvkg.S: $(src)/aes-riscv64-zvkned-zvbb-zvkg.pl + $(call cmd,perlasm) + +$(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S +clean-files += 
aes-riscv64-zvkned-zvbb-zvkg.S +clean-files += aes-riscv64-zvkned-zvkb.S diff --git a/arch/riscv/crypto/aes-riscv64-block-mode-glue.c b/arch/riscv/crypto/aes-riscv64-block-mode-glue.c new file mode 100644 index 000000000000..929c9948468a --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-block-mode-glue.c @@ -0,0 +1,459 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Port of the OpenSSL AES block mode implementations for RISC-V + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "aes-riscv64-glue.h" + +struct riscv64_aes_xts_ctx { + struct crypto_aes_ctx ctx1; + struct crypto_aes_ctx ctx2; +}; + +/* aes cbc block mode using zvkned vector crypto extension */ +asmlinkage void rv64i_zvkned_cbc_encrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, + u8 *ivec); +asmlinkage void rv64i_zvkned_cbc_decrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, + u8 *ivec); +/* aes ecb block mode using zvkned vector crypto extension */ +asmlinkage void rv64i_zvkned_ecb_encrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key); +asmlinkage void rv64i_zvkned_ecb_decrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key); + +/* aes ctr block mode using zvkb and zvkned vector crypto extension */ +/* This func operates on 32-bit counter. Caller has to handle the overflow. */ +asmlinkage void +rv64i_zvkb_zvkned_ctr32_encrypt_blocks(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, + u8 *ivec); + +/* aes xts block mode using zvbb, zvkg and zvkned vector crypto extension */ +asmlinkage void +rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, u8 *iv, + int update_iv); +asmlinkage void +rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(const u8 *in, u8 *out, size_t length, + const struct crypto_aes_ctx *key, u8 *iv, + int update_iv); + +/* ecb */ +static int riscv64_aes_setkey(struct crypto_skcipher *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + + return riscv64_aes_setkey_zvkned(ctx, in_key, key_len); +} + +static int riscv64_ecb_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + + /* If we have error here, the `nbytes` will be zero. 
*/ + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + kernel_vector_begin(); + rv64i_zvkned_ecb_encrypt(walk.src.virt.addr, walk.dst.virt.addr, + nbytes & ~(AES_BLOCK_SIZE - 1), ctx); + kernel_vector_end(); + err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1)); + } + + return err; +} + +static int riscv64_ecb_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + kernel_vector_begin(); + rv64i_zvkned_ecb_decrypt(walk.src.virt.addr, walk.dst.virt.addr, + nbytes & ~(AES_BLOCK_SIZE - 1), ctx); + kernel_vector_end(); + err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1)); + } + + return err; +} + +/* cbc */ +static int riscv64_cbc_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + kernel_vector_begin(); + rv64i_zvkned_cbc_encrypt(walk.src.virt.addr, walk.dst.virt.addr, + nbytes & ~(AES_BLOCK_SIZE - 1), ctx, + walk.iv); + kernel_vector_end(); + err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1)); + } + + return err; +} + +static int riscv64_cbc_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + int err; + + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + kernel_vector_begin(); + rv64i_zvkned_cbc_decrypt(walk.src.virt.addr, walk.dst.virt.addr, + nbytes & ~(AES_BLOCK_SIZE - 1), ctx, + walk.iv); + kernel_vector_end(); + err = skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1)); + } + + return err; +} + +/* ctr */ +static int riscv64_ctr_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int ctr32; + unsigned int nbytes; + unsigned int blocks; + unsigned int current_blocks; + unsigned int current_length; + int err; + + /* the ctr iv uses big endian */ + ctr32 = get_unaligned_be32(req->iv + 12); + err = skcipher_walk_virt(&walk, req, false); + while ((nbytes = walk.nbytes)) { + if (nbytes != walk.total) { + nbytes &= ~(AES_BLOCK_SIZE - 1); + blocks = nbytes / AES_BLOCK_SIZE; + } else { + /* This is the last walk. We should handle the tail data. */ + blocks = DIV_ROUND_UP(nbytes, AES_BLOCK_SIZE); + } + ctr32 += blocks; + + kernel_vector_begin(); + /* + * The `if` block below detects the overflow, which is then handled by + * limiting the amount of blocks to the exact overflow point. 
+ */ + if (ctr32 >= blocks) { + rv64i_zvkb_zvkned_ctr32_encrypt_blocks( + walk.src.virt.addr, walk.dst.virt.addr, nbytes, + ctx, req->iv); + } else { + /* use 2 ctr32 function calls for overflow case */ + current_blocks = blocks - ctr32; + current_length = + min(nbytes, current_blocks * AES_BLOCK_SIZE); + rv64i_zvkb_zvkned_ctr32_encrypt_blocks( + walk.src.virt.addr, walk.dst.virt.addr, + current_length, ctx, req->iv); + crypto_inc(req->iv, 12); + + if (ctr32) { + rv64i_zvkb_zvkned_ctr32_encrypt_blocks( + walk.src.virt.addr + + current_blocks * AES_BLOCK_SIZE, + walk.dst.virt.addr + + current_blocks * AES_BLOCK_SIZE, + nbytes - current_length, ctx, req->iv); + } + } + kernel_vector_end(); + + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + + return err; +} + +/* xts */ +static int riscv64_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key, + unsigned int key_len) +{ + struct riscv64_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm); + unsigned int xts_single_key_len = key_len / 2; + int ret; + + ret = xts_verify_key(tfm, in_key, key_len); + if (ret) + return ret; + ret = riscv64_aes_setkey_zvkned(&ctx->ctx1, in_key, xts_single_key_len); + if (ret) + return ret; + return riscv64_aes_setkey_zvkned( + &ctx->ctx2, in_key + xts_single_key_len, xts_single_key_len); +} + +static int xts_crypt(struct skcipher_request *req, bool encrypt) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct riscv64_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_request sub_req; + struct scatterlist sg_src[2], sg_dst[2]; + struct scatterlist *src, *dst; + struct skcipher_walk walk; + unsigned int walk_size = crypto_skcipher_alg(tfm)->walksize; + unsigned int tail = req->cryptlen & (AES_BLOCK_SIZE - 1); + unsigned int nbytes; + unsigned int update_iv = 1; + int err; + + /* xts input size should be bigger than AES_BLOCK_SIZE */ + if (req->cryptlen < AES_BLOCK_SIZE) + return -EINVAL; + + riscv64_aes_encrypt_zvkned(&ctx->ctx2, req->iv, req->iv); + + if (unlikely(tail > 0 && req->cryptlen > walk_size)) { + /* + * Find the largest tail size which is small than `walk` size while the + * non-ciphertext-stealing parts still fit AES block boundary. 
+ */ + tail = walk_size + tail - AES_BLOCK_SIZE; + + skcipher_request_set_tfm(&sub_req, tfm); + skcipher_request_set_callback( + &sub_req, skcipher_request_flags(req), NULL, NULL); + skcipher_request_set_crypt(&sub_req, req->src, req->dst, + req->cryptlen - tail, req->iv); + req = &sub_req; + } else { + tail = 0; + } + + err = skcipher_walk_virt(&walk, req, false); + if (!walk.nbytes) + return err; + + while ((nbytes = walk.nbytes)) { + if (nbytes < walk.total) + nbytes &= ~(AES_BLOCK_SIZE - 1); + else + update_iv = (tail > 0); + + kernel_vector_begin(); + if (encrypt) + rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt( + walk.src.virt.addr, walk.dst.virt.addr, nbytes, + &ctx->ctx1, req->iv, update_iv); + else + rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt( + walk.src.virt.addr, walk.dst.virt.addr, nbytes, + &ctx->ctx1, req->iv, update_iv); + kernel_vector_end(); + + err = skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + + if (unlikely(tail > 0 && !err)) { + dst = src = scatterwalk_ffwd(sg_src, req->src, req->cryptlen); + if (req->dst != req->src) + dst = scatterwalk_ffwd(sg_dst, req->dst, req->cryptlen); + + skcipher_request_set_crypt(req, src, dst, tail, req->iv); + + err = skcipher_walk_virt(&walk, req, false); + if (err) + return err; + + kernel_vector_begin(); + if (encrypt) + rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt( + walk.src.virt.addr, walk.dst.virt.addr, + walk.nbytes, &ctx->ctx1, req->iv, 0); + else + rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt( + walk.src.virt.addr, walk.dst.virt.addr, + walk.nbytes, &ctx->ctx1, req->iv, 0); + kernel_vector_end(); + + err = skcipher_walk_done(&walk, 0); + } + + return err; +} + +static int riscv64_xts_encrypt(struct skcipher_request *req) +{ + return xts_crypt(req, true); +} + +static int riscv64_xts_decrypt(struct skcipher_request *req) +{ + return xts_crypt(req, false); +} + +static struct skcipher_alg riscv64_aes_algs_zvkned[] = { + { + .setkey = riscv64_aes_setkey, + .encrypt = riscv64_ecb_encrypt, + .decrypt = riscv64_ecb_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .walksize = AES_BLOCK_SIZE * 8, + .base = { + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_priority = 300, + .cra_name = "ecb(aes)", + .cra_driver_name = "ecb-aes-riscv64-zvkned", + .cra_module = THIS_MODULE, + }, + }, { + .setkey = riscv64_aes_setkey, + .encrypt = riscv64_cbc_encrypt, + .decrypt = riscv64_cbc_decrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .walksize = AES_BLOCK_SIZE * 8, + .base = { + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_priority = 300, + .cra_name = "cbc(aes)", + .cra_driver_name = "cbc-aes-riscv64-zvkned", + .cra_module = THIS_MODULE, + }, + } +}; + +static struct skcipher_alg riscv64_aes_alg_zvkned_zvkb = { + .setkey = riscv64_aes_setkey, + .encrypt = riscv64_ctr_encrypt, + .decrypt = riscv64_ctr_encrypt, + .min_keysize = AES_MIN_KEY_SIZE, + .max_keysize = AES_MAX_KEY_SIZE, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .walksize = AES_BLOCK_SIZE * 8, + .base = { + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct crypto_aes_ctx), + .cra_priority = 300, + .cra_name = "ctr(aes)", + .cra_driver_name = "ctr-aes-riscv64-zvkned-zvkb", + .cra_module = THIS_MODULE, + }, +}; + +static struct skcipher_alg riscv64_aes_alg_zvkned_zvbb_zvkg = { + .setkey = riscv64_xts_setkey, + .encrypt = riscv64_xts_encrypt, + .decrypt = riscv64_xts_decrypt, + .min_keysize = AES_MIN_KEY_SIZE * 2, + 
.max_keysize = AES_MAX_KEY_SIZE * 2, + .ivsize = AES_BLOCK_SIZE, + .chunksize = AES_BLOCK_SIZE, + .walksize = AES_BLOCK_SIZE * 8, + .base = { + .cra_blocksize = AES_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct riscv64_aes_xts_ctx), + .cra_priority = 300, + .cra_name = "xts(aes)", + .cra_driver_name = "xts-aes-riscv64-zvkned-zvbb-zvkg", + .cra_module = THIS_MODULE, + }, +}; + +static int __init riscv64_aes_block_mod_init(void) +{ + int ret = -ENODEV; + + if (riscv_isa_extension_available(NULL, ZVKNED) && + riscv_vector_vlen() >= 128 && riscv_vector_vlen() <= 2048) { + ret = crypto_register_skciphers( + riscv64_aes_algs_zvkned, + ARRAY_SIZE(riscv64_aes_algs_zvkned)); + if (ret) + return ret; + + if (riscv_isa_extension_available(NULL, ZVKB)) { + ret = crypto_register_skcipher(&riscv64_aes_alg_zvkned_zvkb); + if (ret) + goto unregister_zvkned; + } + + if (riscv_isa_extension_available(NULL, ZVBB) && + riscv_isa_extension_available(NULL, ZVKG)) { + ret = crypto_register_skcipher(&riscv64_aes_alg_zvkned_zvbb_zvkg); + if (ret) + goto unregister_zvkned_zvkb; + } + } + + return ret; + +unregister_zvkned_zvkb: + crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvkb); +unregister_zvkned: + crypto_unregister_skciphers(riscv64_aes_algs_zvkned, + ARRAY_SIZE(riscv64_aes_algs_zvkned)); + + return ret; +} + +static void __exit riscv64_aes_block_mod_fini(void) +{ + crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvbb_zvkg); + crypto_unregister_skcipher(&riscv64_aes_alg_zvkned_zvkb); + crypto_unregister_skciphers(riscv64_aes_algs_zvkned, + ARRAY_SIZE(riscv64_aes_algs_zvkned)); +} + +module_init(riscv64_aes_block_mod_init); +module_exit(riscv64_aes_block_mod_fini); + +MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS (RISC-V accelerated)"); +MODULE_AUTHOR("Jerry Shih "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("cbc(aes)"); +MODULE_ALIAS_CRYPTO("ctr(aes)"); +MODULE_ALIAS_CRYPTO("ecb(aes)"); +MODULE_ALIAS_CRYPTO("xts(aes)"); diff --git a/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl b/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl new file mode 100644 index 000000000000..bc7772a5944a --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-zvkned-zvbb-zvkg.pl @@ -0,0 +1,949 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 && VLEN <= 2048 +# - RISC-V Vector AES block cipher extension ('Zvkned') +# - RISC-V Vector Bit-manipulation extension ('Zvbb') +# - RISC-V Vector GCM/GMAC extension ('Zvkg') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvkned, +zvbb, +zvkg +___ + +{ +################################################################################ +# void rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt(const unsigned char *in, +# unsigned char *out, size_t length, +# const AES_KEY *key, +# unsigned char iv[16], +# int update_iv) +my ($INPUT, $OUTPUT, $LENGTH, $KEY, $IV, $UPDATE_IV) = ("a0", "a1", "a2", "a3", "a4", "a5"); +my ($TAIL_LENGTH) = ("a6"); +my ($VL) = ("a7"); +my ($T0, $T1, $T2, $T3) = ("t0", "t1", "t2", "t3"); +my ($STORE_LEN32) = ("t4"); +my ($LEN32) = ("t5"); +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +# load iv to v28 +sub load_xts_iv0 { + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V28, ($IV) +___ + + return $code; +} + +# prepare input data(v24), iv(v28), bit-reversed-iv(v16), bit-reversed-iv-multiplier(v20) +sub init_first_round { + my $code=<<___; + # load input + vsetvli $VL, $LEN32, e32, m4, ta, ma + vle32.v $V24, ($INPUT) + + li $T0, 5 + # We could simplify the initialization steps if we have `block<=1`. + blt $LEN32, $T0, 1f + + # Note: We use `vgmul` for GF(2^128) multiplication. The `vgmul` uses + # different order of coefficients. We should use`vbrev8` to reverse the + # data when we use `vgmul`. + vsetivli zero, 4, e32, m1, ta, ma + vbrev8.v $V0, $V28 + vsetvli zero, $LEN32, e32, m4, ta, ma + vmv.v.i $V16, 0 + # v16: [r-IV0, r-IV0, ...] + vaesz.vs $V16, $V0 + + # Prepare GF(2^128) multiplier [1, x, x^2, x^3, ...] in v8. + # We use `vwsll` to get power of 2 multipliers. Current rvv spec only + # supports `SEW<=64`. So, the maximum `VLEN` for this approach is `2048`. + # SEW64_BITS * AES_BLOCK_SIZE / LMUL + # = 64 * 128 / 4 = 2048 + # + # TODO: truncate the vl to `2048` for `vlen>2048` case. + slli $T0, $LEN32, 2 + vsetvli zero, $T0, e32, m1, ta, ma + # v2: [`1`, `1`, `1`, `1`, ...] + vmv.v.i $V2, 1 + # v3: [`0`, `1`, `2`, `3`, ...] + vid.v $V3 + vsetvli zero, $T0, e64, m2, ta, ma + # v4: [`1`, 0, `1`, 0, `1`, 0, `1`, 0, ...] + vzext.vf2 $V4, $V2 + # v6: [`0`, 0, `1`, 0, `2`, 0, `3`, 0, ...] 
+ vzext.vf2 $V6, $V3 + slli $T0, $LEN32, 1 + vsetvli zero, $T0, e32, m2, ta, ma + # v8: [1<<0=1, 0, 0, 0, 1<<1=x, 0, 0, 0, 1<<2=x^2, 0, 0, 0, ...] + vwsll.vv $V8, $V4, $V6 + + # Compute [r-IV0*1, r-IV0*x, r-IV0*x^2, r-IV0*x^3, ...] in v16 + vsetvli zero, $LEN32, e32, m4, ta, ma + vbrev8.v $V8, $V8 + vgmul.vv $V16, $V8 + + # Compute [IV0*1, IV0*x, IV0*x^2, IV0*x^3, ...] in v28. + # Reverse the bits order back. + vbrev8.v $V28, $V16 + + # Prepare the x^n multiplier in v20. The `n` is the aes-xts block number + # in a LMUL=4 register group. + # n = ((VLEN*LMUL)/(32*4)) = ((VLEN*4)/(32*4)) + # = (VLEN/32) + # We could use vsetvli with `e32, m1` to compute the `n` number. + vsetvli $T0, zero, e32, m1, ta, ma + li $T1, 1 + sll $T0, $T1, $T0 + vsetivli zero, 2, e64, m1, ta, ma + vmv.v.i $V0, 0 + vsetivli zero, 1, e64, m1, tu, ma + vmv.v.x $V0, $T0 + vsetivli zero, 2, e64, m1, ta, ma + vbrev8.v $V0, $V0 + vsetvli zero, $LEN32, e32, m4, ta, ma + vmv.v.i $V20, 0 + vaesz.vs $V20, $V0 + + j 2f +1: + vsetivli zero, 4, e32, m1, ta, ma + vbrev8.v $V16, $V28 +2: +___ + + return $code; +} + +# prepare xts enc last block's input(v24) and iv(v28) +sub handle_xts_enc_last_block { + my $code=<<___; + bnez $TAIL_LENGTH, 2f + + beqz $UPDATE_IV, 1f + ## Store next IV + addi $VL, $VL, -4 + vsetivli zero, 4, e32, m4, ta, ma + # multiplier + vslidedown.vx $V16, $V16, $VL + + # setup `x` multiplier with byte-reversed order + # 0b00000010 => 0b01000000 (0x40) + li $T0, 0x40 + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.i $V28, 0 + vsetivli zero, 1, e8, m1, tu, ma + vmv.v.x $V28, $T0 + + # IV * `x` + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V28 + # Reverse the IV's bits order back to big-endian + vbrev8.v $V28, $V16 + + vse32.v $V28, ($IV) +1: + + ret +2: + # slidedown second to last block + addi $VL, $VL, -4 + vsetivli zero, 4, e32, m4, ta, ma + # ciphertext + vslidedown.vx $V24, $V24, $VL + # multiplier + vslidedown.vx $V16, $V16, $VL + + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.v $V25, $V24 + + # load last block into v24 + # note: We should load the last block before store the second to last block + # for in-place operation. 
+ vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma + vle8.v $V24, ($INPUT) + + # setup `x` multiplier with byte-reversed order + # 0b00000010 => 0b01000000 (0x40) + li $T0, 0x40 + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.i $V28, 0 + vsetivli zero, 1, e8, m1, tu, ma + vmv.v.x $V28, $T0 + + # compute IV for last block + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V28 + vbrev8.v $V28, $V16 + + # store second to last block + vsetvli zero, $TAIL_LENGTH, e8, m1, ta, ma + vse8.v $V25, ($OUTPUT) +___ + + return $code; +} + +# prepare xts dec second to last block's input(v24) and iv(v29) and +# last block's and iv(v28) +sub handle_xts_dec_last_block { + my $code=<<___; + bnez $TAIL_LENGTH, 2f + + beqz $UPDATE_IV, 1f + ## Store next IV + # setup `x` multiplier with byte-reversed order + # 0b00000010 => 0b01000000 (0x40) + li $T0, 0x40 + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.i $V28, 0 + vsetivli zero, 1, e8, m1, tu, ma + vmv.v.x $V28, $T0 + + beqz $LENGTH, 3f + addi $VL, $VL, -4 + vsetivli zero, 4, e32, m4, ta, ma + # multiplier + vslidedown.vx $V16, $V16, $VL + +3: + # IV * `x` + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V28 + # Reverse the IV's bits order back to big-endian + vbrev8.v $V28, $V16 + + vse32.v $V28, ($IV) +1: + + ret +2: + # load second to last block's ciphertext + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V24, ($INPUT) + addi $INPUT, $INPUT, 16 + + # setup `x` multiplier with byte-reversed order + # 0b00000010 => 0b01000000 (0x40) + li $T0, 0x40 + vsetivli zero, 4, e32, m1, ta, ma + vmv.v.i $V20, 0 + vsetivli zero, 1, e8, m1, tu, ma + vmv.v.x $V20, $T0 + + beqz $LENGTH, 1f + # slidedown third to last block + addi $VL, $VL, -4 + vsetivli zero, 4, e32, m4, ta, ma + # multiplier + vslidedown.vx $V16, $V16, $VL + + # compute IV for last block + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V20 + vbrev8.v $V28, $V16 + + # compute IV for second to last block + vgmul.vv $V16, $V20 + vbrev8.v $V29, $V16 + j 2f +1: + # compute IV for second to last block + vsetivli zero, 4, e32, m1, ta, ma + vgmul.vv $V16, $V20 + vbrev8.v $V29, $V16 +2: +___ + + return $code; +} + +# Load all 11 round keys to v1-v11 registers. +sub aes_128_load_key { + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V2, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V3, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V4, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V5, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V6, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V7, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V8, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V9, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V10, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V11, ($KEY) +___ + + return $code; +} + +# Load all 13 round keys to v1-v13 registers. +sub aes_192_load_key { + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V2, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V3, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V4, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V5, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V6, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V7, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V8, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V9, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V10, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V11, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V12, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V13, ($KEY) +___ + + return $code; +} + +# Load all 15 round keys to v1-v15 registers. 
+sub aes_256_load_key { + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V2, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V3, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V4, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V5, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V6, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V7, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V8, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V9, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V10, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V11, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V12, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V13, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V14, ($KEY) + addi $KEY, $KEY, 16 + vle32.v $V15, ($KEY) +___ + + return $code; +} + +# aes-128 enc with round keys v1-v11 +sub aes_128_enc { + my $code=<<___; + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesef.vs $V24, $V11 +___ + + return $code; +} + +# aes-128 dec with round keys v1-v11 +sub aes_128_dec { + my $code=<<___; + vaesz.vs $V24, $V11 + vaesdm.vs $V24, $V10 + vaesdm.vs $V24, $V9 + vaesdm.vs $V24, $V8 + vaesdm.vs $V24, $V7 + vaesdm.vs $V24, $V6 + vaesdm.vs $V24, $V5 + vaesdm.vs $V24, $V4 + vaesdm.vs $V24, $V3 + vaesdm.vs $V24, $V2 + vaesdf.vs $V24, $V1 +___ + + return $code; +} + +# aes-192 enc with round keys v1-v13 +sub aes_192_enc { + my $code=<<___; + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesem.vs $V24, $V11 + vaesem.vs $V24, $V12 + vaesef.vs $V24, $V13 +___ + + return $code; +} + +# aes-192 dec with round keys v1-v13 +sub aes_192_dec { + my $code=<<___; + vaesz.vs $V24, $V13 + vaesdm.vs $V24, $V12 + vaesdm.vs $V24, $V11 + vaesdm.vs $V24, $V10 + vaesdm.vs $V24, $V9 + vaesdm.vs $V24, $V8 + vaesdm.vs $V24, $V7 + vaesdm.vs $V24, $V6 + vaesdm.vs $V24, $V5 + vaesdm.vs $V24, $V4 + vaesdm.vs $V24, $V3 + vaesdm.vs $V24, $V2 + vaesdf.vs $V24, $V1 +___ + + return $code; +} + +# aes-256 enc with round keys v1-v15 +sub aes_256_enc { + my $code=<<___; + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesem.vs $V24, $V11 + vaesem.vs $V24, $V12 + vaesem.vs $V24, $V13 + vaesem.vs $V24, $V14 + vaesef.vs $V24, $V15 +___ + + return $code; +} + +# aes-256 dec with round keys v1-v15 +sub aes_256_dec { + my $code=<<___; + vaesz.vs $V24, $V15 + vaesdm.vs $V24, $V14 + vaesdm.vs $V24, $V13 + vaesdm.vs $V24, $V12 + vaesdm.vs $V24, $V11 + vaesdm.vs $V24, $V10 + vaesdm.vs $V24, $V9 + vaesdm.vs $V24, $V8 + vaesdm.vs $V24, $V7 + vaesdm.vs $V24, $V6 + vaesdm.vs $V24, $V5 + vaesdm.vs $V24, $V4 + vaesdm.vs $V24, $V3 + vaesdm.vs $V24, $V2 + vaesdf.vs $V24, $V1 +___ + + return $code; +} + +$code .= <<___; +.p2align 3 +.globl rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt +.type rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt,\@function +rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt: + @{[load_xts_iv0]} + + # aes block size is 16 + andi $TAIL_LENGTH, $LENGTH, 15 + mv $STORE_LEN32, $LENGTH + beqz $TAIL_LENGTH, 1f + sub $LENGTH, $LENGTH, $TAIL_LENGTH + addi $STORE_LEN32, $LENGTH, -16 +1: + # We make the `LENGTH` become e32 length here. 
+ srli $LEN32, $LENGTH, 2 + srli $STORE_LEN32, $STORE_LEN32, 2 + + # Load key length. + lwu $T0, 480($KEY) + li $T1, 32 + li $T2, 24 + li $T3, 16 + beq $T0, $T1, aes_xts_enc_256 + beq $T0, $T2, aes_xts_enc_192 + beq $T0, $T3, aes_xts_enc_128 +.size rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt,.-rv64i_zvbb_zvkg_zvkned_aes_xts_encrypt +___ + +$code .= <<___; +.p2align 3 +aes_xts_enc_128: + @{[init_first_round]} + @{[aes_128_load_key]} + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Lenc_blocks_128: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load plaintext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_128_enc]} + vxor.vv $V24, $V24, $V28 + + # store ciphertext + vsetvli zero, $STORE_LEN32, e32, m4, ta, ma + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + sub $STORE_LEN32, $STORE_LEN32, $VL + + bnez $LEN32, .Lenc_blocks_128 + + @{[handle_xts_enc_last_block]} + + # xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_128_enc]} + vxor.vv $V24, $V24, $V28 + + # store last block ciphertext + addi $OUTPUT, $OUTPUT, -16 + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_enc_128,.-aes_xts_enc_128 +___ + +$code .= <<___; +.p2align 3 +aes_xts_enc_192: + @{[init_first_round]} + @{[aes_192_load_key]} + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Lenc_blocks_192: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load plaintext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_192_enc]} + vxor.vv $V24, $V24, $V28 + + # store ciphertext + vsetvli zero, $STORE_LEN32, e32, m4, ta, ma + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + sub $STORE_LEN32, $STORE_LEN32, $VL + + bnez $LEN32, .Lenc_blocks_192 + + @{[handle_xts_enc_last_block]} + + # xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_192_enc]} + vxor.vv $V24, $V24, $V28 + + # store last block ciphertext + addi $OUTPUT, $OUTPUT, -16 + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_enc_192,.-aes_xts_enc_192 +___ + +$code .= <<___; +.p2align 3 +aes_xts_enc_256: + @{[init_first_round]} + @{[aes_256_load_key]} + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Lenc_blocks_256: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load plaintext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_256_enc]} + vxor.vv $V24, $V24, $V28 + + # store ciphertext + vsetvli zero, $STORE_LEN32, e32, m4, ta, ma + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + sub $STORE_LEN32, $STORE_LEN32, $VL + + bnez $LEN32, .Lenc_blocks_256 + + @{[handle_xts_enc_last_block]} + + # xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_256_enc]} + vxor.vv $V24, $V24, $V28 + + # store last block ciphertext + addi $OUTPUT, $OUTPUT, -16 + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_enc_256,.-aes_xts_enc_256 +___ + +################################################################################ +# void rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt(const unsigned char *in, +# unsigned char *out, size_t length, +# const 
AES_KEY *key, +# unsigned char iv[16], +# int update_iv) +$code .= <<___; +.p2align 3 +.globl rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt +.type rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt,\@function +rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt: + @{[load_xts_iv0]} + + # aes block size is 16 + andi $TAIL_LENGTH, $LENGTH, 15 + beqz $TAIL_LENGTH, 1f + sub $LENGTH, $LENGTH, $TAIL_LENGTH + addi $LENGTH, $LENGTH, -16 +1: + # We make the `LENGTH` become e32 length here. + srli $LEN32, $LENGTH, 2 + + # Load key length. + lwu $T0, 480($KEY) + li $T1, 32 + li $T2, 24 + li $T3, 16 + beq $T0, $T1, aes_xts_dec_256 + beq $T0, $T2, aes_xts_dec_192 + beq $T0, $T3, aes_xts_dec_128 +.size rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt,.-rv64i_zvbb_zvkg_zvkned_aes_xts_decrypt +___ + +$code .= <<___; +.p2align 3 +aes_xts_dec_128: + @{[init_first_round]} + @{[aes_128_load_key]} + + beqz $LEN32, 2f + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Ldec_blocks_128: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load ciphertext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_128_dec]} + vxor.vv $V24, $V24, $V28 + + # store plaintext + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + + bnez $LEN32, .Ldec_blocks_128 + +2: + @{[handle_xts_dec_last_block]} + + ## xts second to last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V29 + @{[aes_128_dec]} + vxor.vv $V24, $V24, $V29 + vmv.v.v $V25, $V24 + + # load last block ciphertext + vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma + vle8.v $V24, ($INPUT) + + # store second to last block plaintext + addi $T0, $OUTPUT, 16 + vse8.v $V25, ($T0) + + ## xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_128_dec]} + vxor.vv $V24, $V24, $V28 + + # store second to last block plaintext + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_dec_128,.-aes_xts_dec_128 +___ + +$code .= <<___; +.p2align 3 +aes_xts_dec_192: + @{[init_first_round]} + @{[aes_192_load_key]} + + beqz $LEN32, 2f + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Ldec_blocks_192: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load ciphertext into v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_192_dec]} + vxor.vv $V24, $V24, $V28 + + # store plaintext + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + + bnez $LEN32, .Ldec_blocks_192 + +2: + @{[handle_xts_dec_last_block]} + + ## xts second to last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V29 + @{[aes_192_dec]} + vxor.vv $V24, $V24, $V29 + vmv.v.v $V25, $V24 + + # load last block ciphertext + vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma + vle8.v $V24, ($INPUT) + + # store second to last block plaintext + addi $T0, $OUTPUT, 16 + vse8.v $V25, ($T0) + + ## xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_192_dec]} + vxor.vv $V24, $V24, $V28 + + # store second to last block plaintext + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_dec_192,.-aes_xts_dec_192 +___ + +$code .= <<___; +.p2align 3 +aes_xts_dec_256: + @{[init_first_round]} + @{[aes_256_load_key]} + + beqz $LEN32, 2f + + vsetvli $VL, $LEN32, e32, m4, ta, ma + j 1f + +.Ldec_blocks_256: + vsetvli $VL, $LEN32, e32, m4, ta, ma + # load ciphertext into 
v24 + vle32.v $V24, ($INPUT) + # update iv + vgmul.vv $V16, $V20 + # reverse the iv's bits order back + vbrev8.v $V28, $V16 +1: + vxor.vv $V24, $V24, $V28 + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + add $INPUT, $INPUT, $T0 + @{[aes_256_dec]} + vxor.vv $V24, $V24, $V28 + + # store plaintext + vse32.v $V24, ($OUTPUT) + add $OUTPUT, $OUTPUT, $T0 + + bnez $LEN32, .Ldec_blocks_256 + +2: + @{[handle_xts_dec_last_block]} + + ## xts second to last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V29 + @{[aes_256_dec]} + vxor.vv $V24, $V24, $V29 + vmv.v.v $V25, $V24 + + # load last block ciphertext + vsetvli zero, $TAIL_LENGTH, e8, m1, tu, ma + vle8.v $V24, ($INPUT) + + # store second to last block plaintext + addi $T0, $OUTPUT, 16 + vse8.v $V25, ($T0) + + ## xts last block + vsetivli zero, 4, e32, m1, ta, ma + vxor.vv $V24, $V24, $V28 + @{[aes_256_dec]} + vxor.vv $V24, $V24, $V28 + + # store second to last block plaintext + vse32.v $V24, ($OUTPUT) + + ret +.size aes_xts_dec_256,.-aes_xts_dec_256 +___ +} + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; diff --git a/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl b/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl new file mode 100644 index 000000000000..39ce998039a2 --- /dev/null +++ b/arch/riscv/crypto/aes-riscv64-zvkned-zvkb.pl @@ -0,0 +1,415 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector AES block cipher extension ('Zvkned') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvkned, +zvkb +___ + +################################################################################ +# void rv64i_zvkb_zvkned_ctr32_encrypt_blocks(const unsigned char *in, +# unsigned char *out, size_t length, +# const void *key, +# unsigned char ivec[16]); +{ +my ($INP, $OUTP, $LEN, $KEYP, $IVP) = ("a0", "a1", "a2", "a3", "a4"); +my ($T0, $T1, $T2, $T3) = ("t0", "t1", "t2", "t3"); +my ($VL) = ("t4"); +my ($LEN32) = ("t5"); +my ($CTR) = ("t6"); +my ($MASK) = ("v0"); +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +# Prepare the AES ctr input data into v16. +sub init_aes_ctr_input { + my $code=<<___; + # Setup mask into v0 + # The mask pattern for 4*N-th elements + # mask v0: [000100010001....] + # Note: + # We could setup the mask just for the maximum element length instead of + # the VLMAX. + li $T0, 0b10001000 + vsetvli $T2, zero, e8, m1, ta, ma + vmv.v.x $MASK, $T0 + # Load IV. + # v31:[IV0, IV1, IV2, big-endian count] + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V31, ($IVP) + # Convert the big-endian counter into little-endian. + vsetivli zero, 4, e32, m1, ta, mu + vrev8.v $V31, $V31, $MASK.t + # Splat the IV to v16 + vsetvli zero, $LEN32, e32, m4, ta, ma + vmv.v.i $V16, 0 + vaesz.vs $V16, $V31 + # Prepare the ctr pattern into v20 + # v20: [x, x, x, 0, x, x, x, 1, x, x, x, 2, ...] + viota.m $V20, $MASK, $MASK.t + # v16:[IV0, IV1, IV2, count+0, IV0, IV1, IV2, count+1, ...] + vsetvli $VL, $LEN32, e32, m4, ta, mu + vadd.vv $V16, $V16, $V20, $MASK.t +___ + + return $code; +} + +$code .= <<___; +.p2align 3 +.globl rv64i_zvkb_zvkned_ctr32_encrypt_blocks +.type rv64i_zvkb_zvkned_ctr32_encrypt_blocks,\@function +rv64i_zvkb_zvkned_ctr32_encrypt_blocks: + # The aes block size is 16 bytes. + # We try to get the minimum aes block number including the tail data. + addi $T0, $LEN, 15 + # the minimum block number + srli $T0, $T0, 4 + # We make the block number become e32 length here. + slli $LEN32, $T0, 2 + + # Load key length. + lwu $T0, 480($KEYP) + li $T1, 32 + li $T2, 24 + li $T3, 16 + + beq $T0, $T1, ctr32_encrypt_blocks_256 + beq $T0, $T2, ctr32_encrypt_blocks_192 + beq $T0, $T3, ctr32_encrypt_blocks_128 + + ret +.size rv64i_zvkb_zvkned_ctr32_encrypt_blocks,.-rv64i_zvkb_zvkned_ctr32_encrypt_blocks +___ + +$code .= <<___; +.p2align 3 +ctr32_encrypt_blocks_128: + # Load all 11 round keys to v1-v11 registers. 
+ vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + + @{[init_aes_ctr_input]} + + ##### AES body + j 2f +1: + vsetvli $VL, $LEN32, e32, m4, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t +2: + # Prepare the AES ctr input into v24. + # The ctr data uses big-endian form. + vmv.v.v $V24, $V16 + vrev8.v $V24, $V24, $MASK.t + srli $CTR, $VL, 2 + sub $LEN32, $LEN32, $VL + + # Load plaintext in bytes into v20. + vsetvli $T0, $LEN, e8, m4, ta, ma + vle8.v $V20, ($INP) + sub $LEN, $LEN, $T0 + add $INP, $INP, $T0 + + vsetvli zero, $VL, e32, m4, ta, ma + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesef.vs $V24, $V11 + + # ciphertext + vsetvli zero, $T0, e8, m4, ta, ma + vxor.vv $V24, $V24, $V20 + + # Store the ciphertext. + vse8.v $V24, ($OUTP) + add $OUTP, $OUTP, $T0 + + bnez $LEN, 1b + + ## store ctr iv + vsetivli zero, 4, e32, m1, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t + # Convert ctr data back to big-endian. + vrev8.v $V16, $V16, $MASK.t + vse32.v $V16, ($IVP) + + ret +.size ctr32_encrypt_blocks_128,.-ctr32_encrypt_blocks_128 +___ + +$code .= <<___; +.p2align 3 +ctr32_encrypt_blocks_192: + # Load all 13 round keys to v1-v13 registers. + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + + @{[init_aes_ctr_input]} + + ##### AES body + j 2f +1: + vsetvli $VL, $LEN32, e32, m4, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t +2: + # Prepare the AES ctr input into v24. + # The ctr data uses big-endian form. + vmv.v.v $V24, $V16 + vrev8.v $V24, $V24, $MASK.t + srli $CTR, $VL, 2 + sub $LEN32, $LEN32, $VL + + # Load plaintext in bytes into v20. + vsetvli $T0, $LEN, e8, m4, ta, ma + vle8.v $V20, ($INP) + sub $LEN, $LEN, $T0 + add $INP, $INP, $T0 + + vsetvli zero, $VL, e32, m4, ta, ma + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesem.vs $V24, $V11 + vaesem.vs $V24, $V12 + vaesef.vs $V24, $V13 + + # ciphertext + vsetvli zero, $T0, e8, m4, ta, ma + vxor.vv $V24, $V24, $V20 + + # Store the ciphertext. + vse8.v $V24, ($OUTP) + add $OUTP, $OUTP, $T0 + + bnez $LEN, 1b + + ## store ctr iv + vsetivli zero, 4, e32, m1, ta, mu + # Increase ctr in v16. 
+ vadd.vx $V16, $V16, $CTR, $MASK.t + # Convert ctr data back to big-endian. + vrev8.v $V16, $V16, $MASK.t + vse32.v $V16, ($IVP) + + ret +.size ctr32_encrypt_blocks_192,.-ctr32_encrypt_blocks_192 +___ + +$code .= <<___; +.p2align 3 +ctr32_encrypt_blocks_256: + # Load all 15 round keys to v1-v15 registers. + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) + + @{[init_aes_ctr_input]} + + ##### AES body + j 2f +1: + vsetvli $VL, $LEN32, e32, m4, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t +2: + # Prepare the AES ctr input into v24. + # The ctr data uses big-endian form. + vmv.v.v $V24, $V16 + vrev8.v $V24, $V24, $MASK.t + srli $CTR, $VL, 2 + sub $LEN32, $LEN32, $VL + + # Load plaintext in bytes into v20. + vsetvli $T0, $LEN, e8, m4, ta, ma + vle8.v $V20, ($INP) + sub $LEN, $LEN, $T0 + add $INP, $INP, $T0 + + vsetvli zero, $VL, e32, m4, ta, ma + vaesz.vs $V24, $V1 + vaesem.vs $V24, $V2 + vaesem.vs $V24, $V3 + vaesem.vs $V24, $V4 + vaesem.vs $V24, $V5 + vaesem.vs $V24, $V6 + vaesem.vs $V24, $V7 + vaesem.vs $V24, $V8 + vaesem.vs $V24, $V9 + vaesem.vs $V24, $V10 + vaesem.vs $V24, $V11 + vaesem.vs $V24, $V12 + vaesem.vs $V24, $V13 + vaesem.vs $V24, $V14 + vaesef.vs $V24, $V15 + + # ciphertext + vsetvli zero, $T0, e8, m4, ta, ma + vxor.vv $V24, $V24, $V20 + + # Store the ciphertext. + vse8.v $V24, ($OUTP) + add $OUTP, $OUTP, $T0 + + bnez $LEN, 1b + + ## store ctr iv + vsetivli zero, 4, e32, m1, ta, mu + # Increase ctr in v16. + vadd.vx $V16, $V16, $CTR, $MASK.t + # Convert ctr data back to big-endian. + vrev8.v $V16, $V16, $MASK.t + vse32.v $V16, ($IVP) + + ret +.size ctr32_encrypt_blocks_256,.-ctr32_encrypt_blocks_256 +___ +} + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; diff --git a/arch/riscv/crypto/aes-riscv64-zvkned.pl b/arch/riscv/crypto/aes-riscv64-zvkned.pl index 583e87912e5d..383d5fee4ff2 100644 --- a/arch/riscv/crypto/aes-riscv64-zvkned.pl +++ b/arch/riscv/crypto/aes-riscv64-zvkned.pl @@ -67,6 +67,752 @@ my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, ) = map("v$_",(0..31)); +# Load all 11 round keys to v1-v11 registers. +sub aes_128_load_key { + my $KEYP = shift; + + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) +___ + + return $code; +} + +# Load all 13 round keys to v1-v13 registers. 
+sub aes_192_load_key { + my $KEYP = shift; + + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) +___ + + return $code; +} + +# Load all 15 round keys to v1-v15 registers. +sub aes_256_load_key { + my $KEYP = shift; + + my $code=<<___; + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $V1, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V2, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V3, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V4, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V5, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V6, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V7, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V8, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V9, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V10, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V11, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V12, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V13, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V14, ($KEYP) + addi $KEYP, $KEYP, 16 + vle32.v $V15, ($KEYP) +___ + + return $code; +} + +# aes-128 encryption with round keys v1-v11 +sub aes_128_encrypt { + my $code=<<___; + vaesz.vs $V24, $V1 # with round key w[ 0, 3] + vaesem.vs $V24, $V2 # with round key w[ 4, 7] + vaesem.vs $V24, $V3 # with round key w[ 8,11] + vaesem.vs $V24, $V4 # with round key w[12,15] + vaesem.vs $V24, $V5 # with round key w[16,19] + vaesem.vs $V24, $V6 # with round key w[20,23] + vaesem.vs $V24, $V7 # with round key w[24,27] + vaesem.vs $V24, $V8 # with round key w[28,31] + vaesem.vs $V24, $V9 # with round key w[32,35] + vaesem.vs $V24, $V10 # with round key w[36,39] + vaesef.vs $V24, $V11 # with round key w[40,43] +___ + + return $code; +} + +# aes-128 decryption with round keys v1-v11 +sub aes_128_decrypt { + my $code=<<___; + vaesz.vs $V24, $V11 # with round key w[40,43] + vaesdm.vs $V24, $V10 # with round key w[36,39] + vaesdm.vs $V24, $V9 # with round key w[32,35] + vaesdm.vs $V24, $V8 # with round key w[28,31] + vaesdm.vs $V24, $V7 # with round key w[24,27] + vaesdm.vs $V24, $V6 # with round key w[20,23] + vaesdm.vs $V24, $V5 # with round key w[16,19] + vaesdm.vs $V24, $V4 # with round key w[12,15] + vaesdm.vs $V24, $V3 # with round key w[ 8,11] + vaesdm.vs $V24, $V2 # with round key w[ 4, 7] + vaesdf.vs $V24, $V1 # with round key w[ 0, 3] +___ + + return $code; +} + +# aes-192 encryption with round keys v1-v13 +sub aes_192_encrypt { + my $code=<<___; + vaesz.vs $V24, $V1 # with round key w[ 0, 3] + vaesem.vs $V24, $V2 # with round key w[ 4, 7] + vaesem.vs $V24, $V3 # with round key w[ 8,11] + vaesem.vs $V24, $V4 # with round key w[12,15] + vaesem.vs $V24, $V5 # with round key w[16,19] + vaesem.vs $V24, $V6 # with round key w[20,23] + vaesem.vs $V24, $V7 # with round key w[24,27] + vaesem.vs $V24, $V8 # with round key w[28,31] + vaesem.vs $V24, $V9 # with round key w[32,35] + vaesem.vs $V24, $V10 # with round key w[36,39] + vaesem.vs $V24, $V11 # with round key w[40,43] + vaesem.vs $V24, $V12 # with round key w[44,47] + vaesef.vs $V24, $V13 
# with round key w[48,51] +___ + + return $code; +} + +# aes-192 decryption with round keys v1-v13 +sub aes_192_decrypt { + my $code=<<___; + vaesz.vs $V24, $V13 # with round key w[48,51] + vaesdm.vs $V24, $V12 # with round key w[44,47] + vaesdm.vs $V24, $V11 # with round key w[40,43] + vaesdm.vs $V24, $V10 # with round key w[36,39] + vaesdm.vs $V24, $V9 # with round key w[32,35] + vaesdm.vs $V24, $V8 # with round key w[28,31] + vaesdm.vs $V24, $V7 # with round key w[24,27] + vaesdm.vs $V24, $V6 # with round key w[20,23] + vaesdm.vs $V24, $V5 # with round key w[16,19] + vaesdm.vs $V24, $V4 # with round key w[12,15] + vaesdm.vs $V24, $V3 # with round key w[ 8,11] + vaesdm.vs $V24, $V2 # with round key w[ 4, 7] + vaesdf.vs $V24, $V1 # with round key w[ 0, 3] +___ + + return $code; +} + +# aes-256 encryption with round keys v1-v15 +sub aes_256_encrypt { + my $code=<<___; + vaesz.vs $V24, $V1 # with round key w[ 0, 3] + vaesem.vs $V24, $V2 # with round key w[ 4, 7] + vaesem.vs $V24, $V3 # with round key w[ 8,11] + vaesem.vs $V24, $V4 # with round key w[12,15] + vaesem.vs $V24, $V5 # with round key w[16,19] + vaesem.vs $V24, $V6 # with round key w[20,23] + vaesem.vs $V24, $V7 # with round key w[24,27] + vaesem.vs $V24, $V8 # with round key w[28,31] + vaesem.vs $V24, $V9 # with round key w[32,35] + vaesem.vs $V24, $V10 # with round key w[36,39] + vaesem.vs $V24, $V11 # with round key w[40,43] + vaesem.vs $V24, $V12 # with round key w[44,47] + vaesem.vs $V24, $V13 # with round key w[48,51] + vaesem.vs $V24, $V14 # with round key w[52,55] + vaesef.vs $V24, $V15 # with round key w[56,59] +___ + + return $code; +} + +# aes-256 decryption with round keys v1-v15 +sub aes_256_decrypt { + my $code=<<___; + vaesz.vs $V24, $V15 # with round key w[56,59] + vaesdm.vs $V24, $V14 # with round key w[52,55] + vaesdm.vs $V24, $V13 # with round key w[48,51] + vaesdm.vs $V24, $V12 # with round key w[44,47] + vaesdm.vs $V24, $V11 # with round key w[40,43] + vaesdm.vs $V24, $V10 # with round key w[36,39] + vaesdm.vs $V24, $V9 # with round key w[32,35] + vaesdm.vs $V24, $V8 # with round key w[28,31] + vaesdm.vs $V24, $V7 # with round key w[24,27] + vaesdm.vs $V24, $V6 # with round key w[20,23] + vaesdm.vs $V24, $V5 # with round key w[16,19] + vaesdm.vs $V24, $V4 # with round key w[12,15] + vaesdm.vs $V24, $V3 # with round key w[ 8,11] + vaesdm.vs $V24, $V2 # with round key w[ 4, 7] + vaesdf.vs $V24, $V1 # with round key w[ 0, 3] +___ + + return $code; +} + +{ +############################################################################### +# void rv64i_zvkned_cbc_encrypt(const unsigned char *in, unsigned char *out, +# size_t length, const AES_KEY *key, +# unsigned char *ivec, const int enc); +my ($INP, $OUTP, $LEN, $KEYP, $IVP, $ENC) = ("a0", "a1", "a2", "a3", "a4", "a5"); +my ($T0, $T1) = ("t0", "t1", "t2"); + +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_cbc_encrypt +.type rv64i_zvkned_cbc_encrypt,\@function +rv64i_zvkned_cbc_encrypt: + # check whether the length is a multiple of 16 and >= 16 + li $T1, 16 + blt $LEN, $T1, L_end + andi $T1, $LEN, 15 + bnez $T1, L_end + + # Load key length. + lwu $T0, 480($KEYP) + + # Get proper routine for key length. + li $T1, 16 + beq $T1, $T0, L_cbc_enc_128 + + li $T1, 24 + beq $T1, $T0, L_cbc_enc_192 + + li $T1, 32 + beq $T1, $T0, L_cbc_enc_256 + + ret +.size rv64i_zvkned_cbc_encrypt,.-rv64i_zvkned_cbc_encrypt +___ + +$code .= <<___; +.p2align 3 +L_cbc_enc_128: + # Load all 11 round keys to v1-v11 registers. + @{[aes_128_load_key $KEYP]} + + # Load IV. 
+ vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vxor.vv $V24, $V24, $V16 + j 2f + +1: + vle32.v $V17, ($INP) + vxor.vv $V24, $V24, $V17 + +2: + # AES body + @{[aes_128_encrypt]} + + vse32.v $V24, ($OUTP) + + addi $INP, $INP, 16 + addi $OUTP, $OUTP, 16 + addi $LEN, $LEN, -16 + + bnez $LEN, 1b + + vse32.v $V24, ($IVP) + + ret +.size L_cbc_enc_128,.-L_cbc_enc_128 +___ + +$code .= <<___; +.p2align 3 +L_cbc_enc_192: + # Load all 13 round keys to v1-v13 registers. + @{[aes_192_load_key $KEYP]} + + # Load IV. + vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vxor.vv $V24, $V24, $V16 + j 2f + +1: + vle32.v $V17, ($INP) + vxor.vv $V24, $V24, $V17 + +2: + # AES body + @{[aes_192_encrypt]} + + vse32.v $V24, ($OUTP) + + addi $INP, $INP, 16 + addi $OUTP, $OUTP, 16 + addi $LEN, $LEN, -16 + + bnez $LEN, 1b + + vse32.v $V24, ($IVP) + + ret +.size L_cbc_enc_192,.-L_cbc_enc_192 +___ + +$code .= <<___; +.p2align 3 +L_cbc_enc_256: + # Load all 15 round keys to v1-v15 registers. + @{[aes_256_load_key $KEYP]} + + # Load IV. + vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vxor.vv $V24, $V24, $V16 + j 2f + +1: + vle32.v $V17, ($INP) + vxor.vv $V24, $V24, $V17 + +2: + # AES body + @{[aes_256_encrypt]} + + vse32.v $V24, ($OUTP) + + addi $INP, $INP, 16 + addi $OUTP, $OUTP, 16 + addi $LEN, $LEN, -16 + + bnez $LEN, 1b + + vse32.v $V24, ($IVP) + + ret +.size L_cbc_enc_256,.-L_cbc_enc_256 +___ + +############################################################################### +# void rv64i_zvkned_cbc_decrypt(const unsigned char *in, unsigned char *out, +# size_t length, const AES_KEY *key, +# unsigned char *ivec, const int enc); +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_cbc_decrypt +.type rv64i_zvkned_cbc_decrypt,\@function +rv64i_zvkned_cbc_decrypt: + # check whether the length is a multiple of 16 and >= 16 + li $T1, 16 + blt $LEN, $T1, L_end + andi $T1, $LEN, 15 + bnez $T1, L_end + + # Load key length. + lwu $T0, 480($KEYP) + + # Get proper routine for key length. + li $T1, 16 + beq $T1, $T0, L_cbc_dec_128 + + li $T1, 24 + beq $T1, $T0, L_cbc_dec_192 + + li $T1, 32 + beq $T1, $T0, L_cbc_dec_256 + + ret +.size rv64i_zvkned_cbc_decrypt,.-rv64i_zvkned_cbc_decrypt +___ + +$code .= <<___; +.p2align 3 +L_cbc_dec_128: + # Load all 11 round keys to v1-v11 registers. + @{[aes_128_load_key $KEYP]} + + # Load IV. + vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + j 2f + +1: + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + addi $OUTP, $OUTP, 16 + +2: + # AES body + @{[aes_128_decrypt]} + + vxor.vv $V24, $V24, $V16 + vse32.v $V24, ($OUTP) + vmv.v.v $V16, $V17 + + addi $LEN, $LEN, -16 + addi $INP, $INP, 16 + + bnez $LEN, 1b + + vse32.v $V16, ($IVP) + + ret +.size L_cbc_dec_128,.-L_cbc_dec_128 +___ + +$code .= <<___; +.p2align 3 +L_cbc_dec_192: + # Load all 13 round keys to v1-v13 registers. + @{[aes_192_load_key $KEYP]} + + # Load IV. + vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + j 2f + +1: + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + addi $OUTP, $OUTP, 16 + +2: + # AES body + @{[aes_192_decrypt]} + + vxor.vv $V24, $V24, $V16 + vse32.v $V24, ($OUTP) + vmv.v.v $V16, $V17 + + addi $LEN, $LEN, -16 + addi $INP, $INP, 16 + + bnez $LEN, 1b + + vse32.v $V16, ($IVP) + + ret +.size L_cbc_dec_192,.-L_cbc_dec_192 +___ + +$code .= <<___; +.p2align 3 +L_cbc_dec_256: + # Load all 15 round keys to v1-v15 registers. + @{[aes_256_load_key $KEYP]} + + # Load IV. 
+ vle32.v $V16, ($IVP) + + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + j 2f + +1: + vle32.v $V24, ($INP) + vmv.v.v $V17, $V24 + addi $OUTP, $OUTP, 16 + +2: + # AES body + @{[aes_256_decrypt]} + + vxor.vv $V24, $V24, $V16 + vse32.v $V24, ($OUTP) + vmv.v.v $V16, $V17 + + addi $LEN, $LEN, -16 + addi $INP, $INP, 16 + + bnez $LEN, 1b + + vse32.v $V16, ($IVP) + + ret +.size L_cbc_dec_256,.-L_cbc_dec_256 +___ +} + +{ +############################################################################### +# void rv64i_zvkned_ecb_encrypt(const unsigned char *in, unsigned char *out, +# size_t length, const AES_KEY *key, +# const int enc); +my ($INP, $OUTP, $LEN, $KEYP, $ENC) = ("a0", "a1", "a2", "a3", "a4"); +my ($VL) = ("a5"); +my ($LEN32) = ("a6"); +my ($T0, $T1) = ("t0", "t1"); + +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_ecb_encrypt +.type rv64i_zvkned_ecb_encrypt,\@function +rv64i_zvkned_ecb_encrypt: + # Make the LEN become e32 length. + srli $LEN32, $LEN, 2 + + # Load key length. + lwu $T0, 480($KEYP) + + # Get proper routine for key length. + li $T1, 16 + beq $T1, $T0, L_ecb_enc_128 + + li $T1, 24 + beq $T1, $T0, L_ecb_enc_192 + + li $T1, 32 + beq $T1, $T0, L_ecb_enc_256 + + ret +.size rv64i_zvkned_ecb_encrypt,.-rv64i_zvkned_ecb_encrypt +___ + +$code .= <<___; +.p2align 3 +L_ecb_enc_128: + # Load all 11 round keys to v1-v11 registers. + @{[aes_128_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_128_encrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_enc_128,.-L_ecb_enc_128 +___ + +$code .= <<___; +.p2align 3 +L_ecb_enc_192: + # Load all 13 round keys to v1-v13 registers. + @{[aes_192_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_192_encrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_enc_192,.-L_ecb_enc_192 +___ + +$code .= <<___; +.p2align 3 +L_ecb_enc_256: + # Load all 15 round keys to v1-v15 registers. + @{[aes_256_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_256_encrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_enc_256,.-L_ecb_enc_256 +___ + +############################################################################### +# void rv64i_zvkned_ecb_decrypt(const unsigned char *in, unsigned char *out, +# size_t length, const AES_KEY *key, +# const int enc); +$code .= <<___; +.p2align 3 +.globl rv64i_zvkned_ecb_decrypt +.type rv64i_zvkned_ecb_decrypt,\@function +rv64i_zvkned_ecb_decrypt: + # Make the LEN become e32 length. + srli $LEN32, $LEN, 2 + + # Load key length. + lwu $T0, 480($KEYP) + + # Get proper routine for key length. + li $T1, 16 + beq $T1, $T0, L_ecb_dec_128 + + li $T1, 24 + beq $T1, $T0, L_ecb_dec_192 + + li $T1, 32 + beq $T1, $T0, L_ecb_dec_256 + + ret +.size rv64i_zvkned_ecb_decrypt,.-rv64i_zvkned_ecb_decrypt +___ + +$code .= <<___; +.p2align 3 +L_ecb_dec_128: + # Load all 11 round keys to v1-v11 registers. 
+ @{[aes_128_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_128_decrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_dec_128,.-L_ecb_dec_128 +___ + +$code .= <<___; +.p2align 3 +L_ecb_dec_192: + # Load all 13 round keys to v1-v13 registers. + @{[aes_192_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_192_decrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_dec_192,.-L_ecb_dec_192 +___ + +$code .= <<___; +.p2align 3 +L_ecb_dec_256: + # Load all 15 round keys to v1-v15 registers. + @{[aes_256_load_key $KEYP]} + +1: + vsetvli $VL, $LEN32, e32, m4, ta, ma + slli $T0, $VL, 2 + sub $LEN32, $LEN32, $VL + + vle32.v $V24, ($INP) + + # AES body + @{[aes_256_decrypt]} + + vse32.v $V24, ($OUTP) + + add $INP, $INP, $T0 + add $OUTP, $OUTP, $T0 + + bnez $LEN32, 1b + + ret +.size L_ecb_dec_256,.-L_ecb_dec_256 +___ +} + { ################################################################################ # void rv64i_zvkned_encrypt(const unsigned char *in, unsigned char *out, From patchwork Sun Dec 31 15:27:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507242 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mail-pl1-f178.google.com (mail-pl1-f178.google.com [209.85.214.178]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E3CDE79DD for ; Sun, 31 Dec 2023 15:28:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="QvHNBoU3" Received: by mail-pl1-f178.google.com with SMTP id d9443c01a7336-1d420aaa2abso27352975ad.3 for ; Sun, 31 Dec 2023 07:28:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036493; x=1704641293; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=kkFJu6wbfKrNvlCJDFW6tY4AkgVd3nqkM4USIeuwRmw=; b=QvHNBoU3nXTJAH53jVLmKL6DT3HJ8/NOiocXHShiiF6LnZZIgmCrqvPXYkb0ecLXxl lyCHIrgREI+K7IwPhsZFnN3f3XoF7A04FXKub+9eMZPjeZI4Fbhk3/KojpieyyOoqMrV 17CpHnyDdtYYkZ/2kiAZLAKIwVfz7ZpmmRdOu+sQ1eLSaP/U7JSr/ScW/CC4hmOxviCH AShYxEUSGF+ONdCzWQz0u6nwqh7PkZ/aWaOwRKCfkS18joLvZ6Hffq4QRe3rUUQ5sfVv +t5dbuXynf3aiguNp8wwtnRIGkqtkL/6YDZDabINDlECZkPkKdEoivZO3ZBj7q9STn74 WS3w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036493; x=1704641293; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=kkFJu6wbfKrNvlCJDFW6tY4AkgVd3nqkM4USIeuwRmw=; b=ARbPIkLI/FtfinIFtCGPMk79+HVvh8NIiaeLsMJ9ARZvKVZkwlT+P/fAgFT19xggic Iy/NGcgeNACUsJka2jgmpdEA2b/mw8blVIPCGh37Vxzin0m7IfRVuHN3nuSLfEiqKMKT HvimeyRIHQ0pRL1gByX7g/tNSqZdkVyiXS38N/PTlkgfy2iHLORzkdFjz6ny44lWHXdn 
lPtFqrA6c3LyPCJHZb3jCxlYJln2wc2++3ABceXKRIVRfdEKM6xx6VrhECEtBO+Fk0l8 g3fSygTlxDi5Q7bOK0ordvmAvAW02WI1PtJIsVFi1vIC2J8ch8kj6UXHX+tdiW7HkkkD NIEw== X-Gm-Message-State: AOJu0Yz4HgI1CznW4rvpGXEViElTg8snR9VOwkTESasLNcGfeFw+LMFJ Isit9ZZn3GaN9B2B2bYLiF7jjyiKA3J9oQ== X-Google-Smtp-Source: AGHT+IHIPKMRiqbavN1oME66IycwcLAtZ7yhnlsI4BJbgS3NesqNBwv7KKtpU/UoQkmQzm9DzCAALA== X-Received: by 2002:a17:903:32cc:b0:1d4:cae:9a05 with SMTP id i12-20020a17090332cc00b001d40cae9a05mr7664302plr.2.1704036493285; Sun, 31 Dec 2023 07:28:13 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.10 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:12 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v4 06/11] RISC-V: crypto: add Zvkg accelerated GCM GHASH implementation Date: Sun, 31 Dec 2023 23:27:38 +0800 Message-Id: <20231231152743.6304-7-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add a gcm hash implementation using the Zvkg extension from OpenSSL (openssl/openssl#21923). The perlasm here is different from the original implementation in OpenSSL. The OpenSSL assumes that the H is stored in little-endian. Thus, it needs to convert the H to big-endian for Zvkg instructions. In kernel, we have the big-endian H directly. There is no need for endian conversion. Co-developed-by: Christoph Müllner Signed-off-by: Christoph Müllner Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `GHASH_RISCV64` option by default. - Add `asmlinkage` qualifier for crypto asm function. - Update the ghash fallback path in ghash_blocks(). - Rename structure riscv64_ghash_context to riscv64_ghash_tfm_ctx. - Fold ghash_update_zvkg() and ghash_final_zvkg(). - Reorder structure riscv64_ghash_alg_zvkg members initialization in the order declared. 
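Note: a minimal usage sketch of the new routine (not part of the diff) is included here for reference. The wrapper name below is hypothetical; the prototype, the be128 key type and the kernel_vector_begin()/kernel_vector_end() pairing follow the glue code added later in this patch. It assumes the same headers as ghash-riscv64-glue.c, that crypto_simd_usable() has already been checked, and that len is a nonzero multiple of GHASH_BLOCK_SIZE.

/* Sketch: hash a run of 16-byte blocks with the Zvkg routine.  The key H
 * is already big-endian in the kernel, so it is passed through as-is --
 * no byte swap is needed, unlike the original OpenSSL code, which keeps
 * H in little-endian form and must convert it first.
 */
static void ghash_blocks_sketch(be128 *digest, const be128 *h_be,
				const u8 *data, size_t len)
{
	kernel_vector_begin();
	gcm_ghash_rv64i_zvkg(digest, h_be, data, len);
	kernel_vector_end();
}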
--- arch/riscv/crypto/Kconfig | 10 ++ arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/ghash-riscv64-glue.c | 175 ++++++++++++++++++++++++ arch/riscv/crypto/ghash-riscv64-zvkg.pl | 100 ++++++++++++++ 4 files changed, 292 insertions(+) create mode 100644 arch/riscv/crypto/ghash-riscv64-glue.c create mode 100644 arch/riscv/crypto/ghash-riscv64-zvkg.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 2cee0f68f0c7..d73b89ceb1a3 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -34,4 +34,14 @@ config CRYPTO_AES_BLOCK_RISCV64 - Zvkb vector crypto extension (CTR/XTS) - Zvkg vector crypto extension (XTS) +config CRYPTO_GHASH_RISCV64 + tristate "Hash functions: GHASH" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_GCM + help + GCM GHASH function (NIST SP 800-38D) + + Architecture: riscv64 using: + - Zvkg vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 9574b009762f..94a7f8eaa8a7 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -9,6 +9,9 @@ aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o obj-$(CONFIG_CRYPTO_AES_BLOCK_RISCV64) += aes-block-riscv64.o aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o +obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o +ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) @@ -21,6 +24,10 @@ $(obj)/aes-riscv64-zvkned-zvbb-zvkg.S: $(src)/aes-riscv64-zvkned-zvbb-zvkg.pl $(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl $(call cmd,perlasm) +$(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S +clean-files += ghash-riscv64-zvkg.S diff --git a/arch/riscv/crypto/ghash-riscv64-glue.c b/arch/riscv/crypto/ghash-riscv64-glue.c new file mode 100644 index 000000000000..b01ab5714677 --- /dev/null +++ b/arch/riscv/crypto/ghash-riscv64-glue.c @@ -0,0 +1,175 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * RISC-V optimized GHASH routines + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* ghash using zvkg vector crypto extension */ +asmlinkage void gcm_ghash_rv64i_zvkg(be128 *Xi, const be128 *H, const u8 *inp, + size_t len); + +struct riscv64_ghash_tfm_ctx { + be128 key; +}; + +struct riscv64_ghash_desc_ctx { + be128 shash; + u8 buffer[GHASH_BLOCK_SIZE]; + u32 bytes; +}; + +static inline void ghash_blocks(const struct riscv64_ghash_tfm_ctx *tctx, + struct riscv64_ghash_desc_ctx *dctx, + const u8 *src, size_t srclen) +{ + /* The srclen is nonzero and a multiple of 16. 
*/ + if (crypto_simd_usable()) { + kernel_vector_begin(); + gcm_ghash_rv64i_zvkg(&dctx->shash, &tctx->key, src, srclen); + kernel_vector_end(); + } else { + do { + crypto_xor((u8 *)&dctx->shash, src, GHASH_BLOCK_SIZE); + gf128mul_lle(&dctx->shash, &tctx->key); + srclen -= GHASH_BLOCK_SIZE; + src += GHASH_BLOCK_SIZE; + } while (srclen); + } +} + +static int ghash_init(struct shash_desc *desc) +{ + struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc); + + *dctx = (struct riscv64_ghash_desc_ctx){}; + + return 0; +} + +static int ghash_update_zvkg(struct shash_desc *desc, const u8 *src, + unsigned int srclen) +{ + size_t len; + const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); + struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc); + + if (dctx->bytes) { + if (dctx->bytes + srclen < GHASH_BLOCK_SIZE) { + memcpy(dctx->buffer + dctx->bytes, src, srclen); + dctx->bytes += srclen; + return 0; + } + memcpy(dctx->buffer + dctx->bytes, src, + GHASH_BLOCK_SIZE - dctx->bytes); + + ghash_blocks(tctx, dctx, dctx->buffer, GHASH_BLOCK_SIZE); + + src += GHASH_BLOCK_SIZE - dctx->bytes; + srclen -= GHASH_BLOCK_SIZE - dctx->bytes; + dctx->bytes = 0; + } + len = srclen & ~(GHASH_BLOCK_SIZE - 1); + + if (len) { + ghash_blocks(tctx, dctx, src, len); + src += len; + srclen -= len; + } + + if (srclen) { + memcpy(dctx->buffer, src, srclen); + dctx->bytes = srclen; + } + + return 0; +} + +static int ghash_final_zvkg(struct shash_desc *desc, u8 *out) +{ + const struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); + struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc); + int i; + + if (dctx->bytes) { + for (i = dctx->bytes; i < GHASH_BLOCK_SIZE; i++) + dctx->buffer[i] = 0; + + ghash_blocks(tctx, dctx, dctx->buffer, GHASH_BLOCK_SIZE); + } + + memcpy(out, &dctx->shash, GHASH_DIGEST_SIZE); + + return 0; +} + +static int ghash_setkey(struct crypto_shash *tfm, const u8 *key, + unsigned int keylen) +{ + struct riscv64_ghash_tfm_ctx *tctx = crypto_shash_ctx(tfm); + + if (keylen != GHASH_BLOCK_SIZE) + return -EINVAL; + + memcpy(&tctx->key, key, GHASH_BLOCK_SIZE); + + return 0; +} + +static struct shash_alg riscv64_ghash_alg_zvkg = { + .init = ghash_init, + .update = ghash_update_zvkg, + .final = ghash_final_zvkg, + .setkey = ghash_setkey, + .descsize = sizeof(struct riscv64_ghash_desc_ctx), + .digestsize = GHASH_DIGEST_SIZE, + .base = { + .cra_blocksize = GHASH_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct riscv64_ghash_tfm_ctx), + .cra_priority = 303, + .cra_name = "ghash", + .cra_driver_name = "ghash-riscv64-zvkg", + .cra_module = THIS_MODULE, + }, +}; + +static inline bool check_ghash_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKG) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_ghash_mod_init(void) +{ + if (check_ghash_ext()) + return crypto_register_shash(&riscv64_ghash_alg_zvkg); + + return -ENODEV; +} + +static void __exit riscv64_ghash_mod_fini(void) +{ + crypto_unregister_shash(&riscv64_ghash_alg_zvkg); +} + +module_init(riscv64_ghash_mod_init); +module_exit(riscv64_ghash_mod_fini); + +MODULE_DESCRIPTION("GCM GHASH (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("ghash"); diff --git a/arch/riscv/crypto/ghash-riscv64-zvkg.pl b/arch/riscv/crypto/ghash-riscv64-zvkg.pl new file mode 100644 index 000000000000..f18824496573 --- /dev/null +++ b/arch/riscv/crypto/ghash-riscv64-zvkg.pl @@ -0,0 +1,100 @@ +#! 
/usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector GCM/GMAC extension ('Zvkg') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? 
shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvkg +___ + +############################################################################### +# void gcm_ghash_rv64i_zvkg(be128 *Xi, const be128 *H, const u8 *inp, size_t len) +# +# input: Xi: current hash value +# H: hash key +# inp: pointer to input data +# len: length of input data in bytes (multiple of block size) +# output: Xi: Xi+1 (next hash value Xi) +{ +my ($Xi,$H,$inp,$len) = ("a0","a1","a2","a3"); +my ($vXi,$vH,$vinp,$Vzero) = ("v1","v2","v3","v4"); + +$code .= <<___; +.p2align 3 +.globl gcm_ghash_rv64i_zvkg +.type gcm_ghash_rv64i_zvkg,\@function +gcm_ghash_rv64i_zvkg: + vsetivli zero, 4, e32, m1, ta, ma + vle32.v $vH, ($H) + vle32.v $vXi, ($Xi) + +Lstep: + vle32.v $vinp, ($inp) + add $inp, $inp, 16 + add $len, $len, -16 + vghsh.vv $vXi, $vH, $vinp + bnez $len, Lstep + + vse32.v $vXi, ($Xi) + ret + +.size gcm_ghash_rv64i_zvkg,.-gcm_ghash_rv64i_zvkg +___ +} + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507243 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mail-pg1-f181.google.com (mail-pg1-f181.google.com [209.85.215.181]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 539968F49 for ; Sun, 31 Dec 2023 15:28:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="a5w/+BA8" Received: by mail-pg1-f181.google.com with SMTP id 41be03b00d2f7-5ca29c131ebso6418643a12.0 for ; Sun, 31 Dec 2023 07:28:17 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036497; x=1704641297; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=WNQBInVZby/ekPhWTRyt5y6cGlBZoZz5Zj+QnKJ0oEM=; b=a5w/+BA81QrfBKEYXy5M8HkFjIXM+nldzeYqX2bgNZ+IiDixaHubrcnxAiyYUNc0jR tzZu5MGN/oJYNfGaE3plRnuCZkoa3VIXLBckUTbUxbcadQ1UF+vnSDMiVMFTjYzbCt9D YyASwmy5rdAw/OhbcPR/7ijgYm9LmS29gNlbUmHPRdrGTQdaCgcgWFEKo0gszVX1QS0n e8PY3Ki9quvtv96MrkOHUa9e+Kec95Nqz4kZVzh5psR+uhVCMlcIjNPwEVvk96PPbKy8 vXQCt2DzGXn8XLRu9FFSkk3rYEMENHYHpG3InDSJ5JUocAjubhhyGuAjnV6n9f0Tn6vP alhQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036497; x=1704641297; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=WNQBInVZby/ekPhWTRyt5y6cGlBZoZz5Zj+QnKJ0oEM=; b=XmQLZx9paPE9DeFYTyfNqcLp+9tYS2ORn0ZrmrNZj6yK+kTHNjjkjtSdOBE8NMhCBa 1pw0mjPbbcWqXH/AYTRLC5+f9cdfiq/61ne4rXZLvPwotYf7w6+P+klYb87ewwdSq/DQ JTkFNMBS1S+BrB5qWXoJ6/hnYjkEuT1SkI2AIJDwjsMdU3yO4bBlBtaTX2DltuYmomkM YYl3DTtBz1grAsPTafeYa21v9zMfiR2jwfhhVwybe7W6cy3C3UF86HrJ3B5c/YJhruW8 c5T0qTo3LkxoQJ5AyW3aQpu7ksx5/mLkORKEcwfIm4MprTGif49ptMwe6TPgpznmE1U1 GO7A== X-Gm-Message-State: AOJu0Yzqs6xpVeD15IlBLSBsFBfmE6TiOHVH30WvllJwqAwguCritPyu HOrvmgVab0MsDO4nhy+ZwKdR5W4zSClOww== X-Google-Smtp-Source: 
AGHT+IGXrDi3jTdznJEzeGw/WEDBAI62EWrC1t/rjDI9lLHQppITWWFTgbjVsrezy/ywCDClikiLUw== X-Received: by 2002:a17:903:244d:b0:1d4:b017:a053 with SMTP id l13-20020a170903244d00b001d4b017a053mr2250935pls.116.1704036496630; Sun, 31 Dec 2023 07:28:16 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.13 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:16 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v4 07/11] RISC-V: crypto: add Zvknha/b accelerated SHA224/256 implementations Date: Sun, 31 Dec 2023 23:27:39 +0800 Message-Id: <20231231152743.6304-8-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add SHA224 and 256 implementations using Zvknha or Zvknhb vector crypto extensions from OpenSSL(openssl/openssl#21923). Co-developed-by: Charalampos Mitrodimas Signed-off-by: Charalampos Mitrodimas Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Co-developed-by: Phoebe Chen Signed-off-by: Phoebe Chen Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Use `SYM_TYPED_FUNC_START` for sha256 indirect-call asm symbol. - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `SHA256_RISCV64` option by default. - Add `asmlinkage` qualifier for crypto asm function. - Rename sha256-riscv64-zvkb-zvknha_or_zvknhb to sha256-riscv64-zvknha_or_zvknhb-zvkb. - Reorder structure sha256_algs members initialization in the order declared. 
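A note on the SYM_TYPED_FUNC_START change listed under v3: the glue code never calls the asm routine directly, it only passes it as a sha256_block_fn into the sha256_base helpers, and with kernel CFI enabled every indirect call checks the callee's type hash. A simplified illustration (do_blocks_example() is made up; the typedef comes from include/crypto/sha256_base.h):

#include <crypto/sha256_base.h>  /* sha256_block_fn, sha256_base_do_update(), ... */

/*
 * The helpers reach the asm routine only through a function pointer like
 * this one, so the asm entry point must carry a CFI type hash, which is
 * what SYM_TYPED_FUNC_START provides for
 * sha256_block_data_order_zvkb_zvknha_or_zvknhb().
 */
static void do_blocks_example(struct sha256_state *sst, const u8 *src,
                              int blocks, sha256_block_fn *block_fn)
{
        block_fn(sst, src, blocks);     /* indirect, CFI-checked call */
}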
--- arch/riscv/crypto/Kconfig | 11 + arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/sha256-riscv64-glue.c | 145 ++++++++ .../sha256-riscv64-zvknha_or_zvknhb-zvkb.pl | 317 ++++++++++++++++++ 4 files changed, 480 insertions(+) create mode 100644 arch/riscv/crypto/sha256-riscv64-glue.c create mode 100644 arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index d73b89ceb1a3..ff1dce4a2bcc 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -44,4 +44,15 @@ config CRYPTO_GHASH_RISCV64 Architecture: riscv64 using: - Zvkg vector crypto extension +config CRYPTO_SHA256_RISCV64 + tristate "Hash functions: SHA-224 and SHA-256" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_SHA256 + help + SHA-224 and SHA-256 secure hash algorithm (FIPS 180) + + Architecture: riscv64 using: + - Zvknha or Zvknhb vector crypto extensions + - Zvkb vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 94a7f8eaa8a7..e9d7717ec943 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -12,6 +12,9 @@ aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvk obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o +obj-$(CONFIG_CRYPTO_SHA256_RISCV64) += sha256-riscv64.o +sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) @@ -27,7 +30,11 @@ $(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl $(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl $(call cmd,perlasm) +$(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S clean-files += ghash-riscv64-zvkg.S +clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S diff --git a/arch/riscv/crypto/sha256-riscv64-glue.c b/arch/riscv/crypto/sha256-riscv64-glue.c new file mode 100644 index 000000000000..760d89031d1c --- /dev/null +++ b/arch/riscv/crypto/sha256-riscv64-glue.c @@ -0,0 +1,145 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Linux/riscv64 port of the OpenSSL SHA256 implementation for RISC-V 64 + * + * Copyright (C) 2022 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * sha256 using zvkb and zvknha/b vector crypto extension + * + * This asm function will just take the first 256-bit as the sha256 state from + * the pointer to `struct sha256_state`. + */ +asmlinkage void +sha256_block_data_order_zvkb_zvknha_or_zvknhb(struct sha256_state *digest, + const u8 *data, int num_blks); + +static int riscv64_sha256_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + int ret = 0; + + /* + * Make sure struct sha256_state begins directly with the SHA256 + * 256-bit internal state, as this is what the asm function expect. 
+ */ + BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + ret = sha256_base_do_update( + desc, data, len, + sha256_block_data_order_zvkb_zvknha_or_zvknhb); + kernel_vector_end(); + } else { + ret = crypto_sha256_update(desc, data, len); + } + + return ret; +} + +static int riscv64_sha256_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + if (crypto_simd_usable()) { + kernel_vector_begin(); + if (len) + sha256_base_do_update( + desc, data, len, + sha256_block_data_order_zvkb_zvknha_or_zvknhb); + sha256_base_do_finalize( + desc, sha256_block_data_order_zvkb_zvknha_or_zvknhb); + kernel_vector_end(); + + return sha256_base_finish(desc, out); + } + + return crypto_sha256_finup(desc, data, len, out); +} + +static int riscv64_sha256_final(struct shash_desc *desc, u8 *out) +{ + return riscv64_sha256_finup(desc, NULL, 0, out); +} + +static struct shash_alg sha256_algs[] = { + { + .init = sha256_base_init, + .update = riscv64_sha256_update, + .final = riscv64_sha256_final, + .finup = riscv64_sha256_finup, + .descsize = sizeof(struct sha256_state), + .digestsize = SHA256_DIGEST_SIZE, + .base = { + .cra_blocksize = SHA256_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sha256", + .cra_driver_name = "sha256-riscv64-zvknha_or_zvknhb-zvkb", + .cra_module = THIS_MODULE, + }, + }, { + .init = sha224_base_init, + .update = riscv64_sha256_update, + .final = riscv64_sha256_final, + .finup = riscv64_sha256_finup, + .descsize = sizeof(struct sha256_state), + .digestsize = SHA224_DIGEST_SIZE, + .base = { + .cra_blocksize = SHA224_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sha224", + .cra_driver_name = "sha224-riscv64-zvknha_or_zvknhb-zvkb", + .cra_module = THIS_MODULE, + }, + }, +}; + +static inline bool check_sha256_ext(void) +{ + /* + * From the spec: + * The Zvknhb ext supports both SHA-256 and SHA-512 and Zvknha only + * supports SHA-256. + */ + return (riscv_isa_extension_available(NULL, ZVKNHA) || + riscv_isa_extension_available(NULL, ZVKNHB)) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_sha256_mod_init(void) +{ + if (check_sha256_ext()) + return crypto_register_shashes(sha256_algs, + ARRAY_SIZE(sha256_algs)); + + return -ENODEV; +} + +static void __exit riscv64_sha256_mod_fini(void) +{ + crypto_unregister_shashes(sha256_algs, ARRAY_SIZE(sha256_algs)); +} + +module_init(riscv64_sha256_mod_init); +module_exit(riscv64_sha256_mod_fini); + +MODULE_DESCRIPTION("SHA-256 (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("sha224"); +MODULE_ALIAS_CRYPTO("sha256"); diff --git a/arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl b/arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl new file mode 100644 index 000000000000..22dd40d8c734 --- /dev/null +++ b/arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl @@ -0,0 +1,317 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). 
You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Phoebe Chen +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# The generated code of this file depends on the following RISC-V extensions: +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector SHA-2 Secure Hash extension ('Zvknha' or 'Zvknhb') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? 
shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +#include + +.text +.option arch, +zvknha, +zvkb +___ + +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +my $K256 = "K256"; + +# Function arguments +my ($H, $INP, $LEN, $KT, $H2, $INDEX_PATTERN) = ("a0", "a1", "a2", "a3", "t3", "t4"); + +sub sha_256_load_constant { + my $code=<<___; + la $KT, $K256 # Load round constants K256 + vle32.v $V10, ($KT) + addi $KT, $KT, 16 + vle32.v $V11, ($KT) + addi $KT, $KT, 16 + vle32.v $V12, ($KT) + addi $KT, $KT, 16 + vle32.v $V13, ($KT) + addi $KT, $KT, 16 + vle32.v $V14, ($KT) + addi $KT, $KT, 16 + vle32.v $V15, ($KT) + addi $KT, $KT, 16 + vle32.v $V16, ($KT) + addi $KT, $KT, 16 + vle32.v $V17, ($KT) + addi $KT, $KT, 16 + vle32.v $V18, ($KT) + addi $KT, $KT, 16 + vle32.v $V19, ($KT) + addi $KT, $KT, 16 + vle32.v $V20, ($KT) + addi $KT, $KT, 16 + vle32.v $V21, ($KT) + addi $KT, $KT, 16 + vle32.v $V22, ($KT) + addi $KT, $KT, 16 + vle32.v $V23, ($KT) + addi $KT, $KT, 16 + vle32.v $V24, ($KT) + addi $KT, $KT, 16 + vle32.v $V25, ($KT) +___ + + return $code; +} + +################################################################################ +# void sha256_block_data_order_zvkb_zvknha_or_zvknhb(void *c, const void *p, size_t len) +$code .= <<___; +SYM_TYPED_FUNC_START(sha256_block_data_order_zvkb_zvknha_or_zvknhb) + vsetivli zero, 4, e32, m1, ta, ma + + @{[sha_256_load_constant]} + + # H is stored as {a,b,c,d},{e,f,g,h}, but we need {f,e,b,a},{h,g,d,c} + # The dst vtype is e32m1 and the index vtype is e8mf4. + # We use index-load with the following index pattern at v26. + # i8 index: + # 20, 16, 4, 0 + # Instead of setting the i8 index, we could use a single 32bit + # little-endian value to cover the 4xi8 index. + # i32 value: + # 0x 00 04 10 14 + li $INDEX_PATTERN, 0x00041014 + vsetivli zero, 1, e32, m1, ta, ma + vmv.v.x $V26, $INDEX_PATTERN + + addi $H2, $H, 8 + + # Use index-load to get {f,e,b,a},{h,g,d,c} + vsetivli zero, 4, e32, m1, ta, ma + vluxei8.v $V6, ($H), $V26 + vluxei8.v $V7, ($H2), $V26 + + # Setup v0 mask for the vmerge to replace the first word (idx==0) in key-scheduling. + # The AVL is 4 in SHA, so we could use a single e8(8 element masking) for masking. + vsetivli zero, 1, e8, m1, ta, ma + vmv.v.i $V0, 0x01 + + vsetivli zero, 4, e32, m1, ta, ma + +L_round_loop: + # Decrement length by 1 + add $LEN, $LEN, -1 + + # Keep the current state as we need it later: H' = H+{a',b',c',...,h'}. + vmv.v.v $V30, $V6 + vmv.v.v $V31, $V7 + + # Load the 512-bits of the message block in v1-v4 and perform + # an endian swap on each 4 bytes element. 
+ vle32.v $V1, ($INP) + vrev8.v $V1, $V1 + add $INP, $INP, 16 + vle32.v $V2, ($INP) + vrev8.v $V2, $V2 + add $INP, $INP, 16 + vle32.v $V3, ($INP) + vrev8.v $V3, $V3 + add $INP, $INP, 16 + vle32.v $V4, ($INP) + vrev8.v $V4, $V4 + add $INP, $INP, 16 + + # Quad-round 0 (+0, Wt from oldest to newest in v1->v2->v3->v4) + vadd.vv $V5, $V10, $V1 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V3, $V2, $V0 + vsha2ms.vv $V1, $V5, $V4 # Generate W[19:16] + + # Quad-round 1 (+1, v2->v3->v4->v1) + vadd.vv $V5, $V11, $V2 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V4, $V3, $V0 + vsha2ms.vv $V2, $V5, $V1 # Generate W[23:20] + + # Quad-round 2 (+2, v3->v4->v1->v2) + vadd.vv $V5, $V12, $V3 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V1, $V4, $V0 + vsha2ms.vv $V3, $V5, $V2 # Generate W[27:24] + + # Quad-round 3 (+3, v4->v1->v2->v3) + vadd.vv $V5, $V13, $V4 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V2, $V1, $V0 + vsha2ms.vv $V4, $V5, $V3 # Generate W[31:28] + + # Quad-round 4 (+0, v1->v2->v3->v4) + vadd.vv $V5, $V14, $V1 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V3, $V2, $V0 + vsha2ms.vv $V1, $V5, $V4 # Generate W[35:32] + + # Quad-round 5 (+1, v2->v3->v4->v1) + vadd.vv $V5, $V15, $V2 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V4, $V3, $V0 + vsha2ms.vv $V2, $V5, $V1 # Generate W[39:36] + + # Quad-round 6 (+2, v3->v4->v1->v2) + vadd.vv $V5, $V16, $V3 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V1, $V4, $V0 + vsha2ms.vv $V3, $V5, $V2 # Generate W[43:40] + + # Quad-round 7 (+3, v4->v1->v2->v3) + vadd.vv $V5, $V17, $V4 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V2, $V1, $V0 + vsha2ms.vv $V4, $V5, $V3 # Generate W[47:44] + + # Quad-round 8 (+0, v1->v2->v3->v4) + vadd.vv $V5, $V18, $V1 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V3, $V2, $V0 + vsha2ms.vv $V1, $V5, $V4 # Generate W[51:48] + + # Quad-round 9 (+1, v2->v3->v4->v1) + vadd.vv $V5, $V19, $V2 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V4, $V3, $V0 + vsha2ms.vv $V2, $V5, $V1 # Generate W[55:52] + + # Quad-round 10 (+2, v3->v4->v1->v2) + vadd.vv $V5, $V20, $V3 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V1, $V4, $V0 + vsha2ms.vv $V3, $V5, $V2 # Generate W[59:56] + + # Quad-round 11 (+3, v4->v1->v2->v3) + vadd.vv $V5, $V21, $V4 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + vmerge.vvm $V5, $V2, $V1, $V0 + vsha2ms.vv $V4, $V5, $V3 # Generate W[63:60] + + # Quad-round 12 (+0, v1->v2->v3->v4) + # Note that we stop generating new message schedule words (Wt, v1-13) + # as we already generated all the words we end up consuming (i.e., W[63:60]). + vadd.vv $V5, $V22, $V1 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + + # Quad-round 13 (+1, v2->v3->v4->v1) + vadd.vv $V5, $V23, $V2 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + + # Quad-round 14 (+2, v3->v4->v1->v2) + vadd.vv $V5, $V24, $V3 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + + # Quad-round 15 (+3, v4->v1->v2->v3) + vadd.vv $V5, $V25, $V4 + vsha2cl.vv $V7, $V6, $V5 + vsha2ch.vv $V6, $V7, $V5 + + # H' = H+{a',b',c',...,h'} + vadd.vv $V6, $V30, $V6 + vadd.vv $V7, $V31, $V7 + bnez $LEN, L_round_loop + + # Store {f,e,b,a},{h,g,d,c} back to {a,b,c,d},{e,f,g,h}. 
+ vsuxei8.v $V6, ($H), $V26 + vsuxei8.v $V7, ($H2), $V26 + + ret +SYM_FUNC_END(sha256_block_data_order_zvkb_zvknha_or_zvknhb) + +.p2align 2 +.type $K256,\@object +$K256: + .word 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5 + .word 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5 + .word 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3 + .word 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174 + .word 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc + .word 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da + .word 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7 + .word 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967 + .word 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13 + .word 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85 + .word 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3 + .word 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070 + .word 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5 + .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 + .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 + .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 +.size $K256,.-$K256 +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507244 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mail-pl1-f172.google.com (mail-pl1-f172.google.com [209.85.214.172]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7F7EBE556 for ; Sun, 31 Dec 2023 15:28:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="FCfsF6F6" Received: by mail-pl1-f172.google.com with SMTP id d9443c01a7336-1d3ef33e68dso53170685ad.1 for ; Sun, 31 Dec 2023 07:28:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036500; x=1704641300; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=+/OIIXKk5C61VbRi9dkG/9FTypVu5kojGYvk9W3ZCc4=; b=FCfsF6F6jfKcblpfzsVpdWgyDXnrHjULUpGmpF/ukVobpt/VENhpSIKOuvIDxl2s7m vi7xivdxtSzNWf76rTK3KUAkXFIVtA4z4FQRWpsTVIyUSt7vZOIxOhB8ACSP9X9RCWvd HHMWdRrknV+xKaTN+9DNxyzbpiuFy4Xq5WCAQPSgKcSKGY733oatVrYPMXq6jH6X32js WmMeTg/+2YaDUelkeDvxZdmS/JYNv23gZ6sPhmX2F5y0oU2YPGq7VC3/kbcwrsO9e5/O uaXGKOjVkIKbmg7DeT9hFnireeRCIFI1N3HsKlU6e1CDv9amHodGY+nkf6ESj8TE+wuR V99Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036500; x=1704641300; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=+/OIIXKk5C61VbRi9dkG/9FTypVu5kojGYvk9W3ZCc4=; b=NA6qTSqnHNWxDqYCfPS/K1XI9aHqEQKQBfxarxqG/19KbznAsyLgaI2q/+k3JbA/ZR l7Wvly7bNkZo1loXVCnQ8saJwI6GR+6RMvFxRuJZ46PTPiw6EDq2MG4PklxKFzYBSGBI 4ehq1KOUXRyrSu1VJm+CTX/hWZ9zhCWtnNI2X5e8RXsTS0jiHUC5+9WC8iZcWSey6f25 ynsNBrDfVeXQxg5N2ii4RbTbCeOyHclfsdipMWlmhCysQyrK4MnxUOOzOfRtbpOuCtaP Rd6SIYH7yFZwn/xOhzz5CiwQbeUBN6BTlrZogLhmKfvs/sDe1iE5MhhNgILgeD2egI0m yALQ== X-Gm-Message-State: 
AOJu0YyCYP5OU4upieaaS/juW1UwKzknuiT3qr23Et1MtfcR3QIMqMqS pA+24tu7hSKsEIlkw/Z8LhY6Qqw6twWToA== X-Google-Smtp-Source: AGHT+IFmoUQlTfiER2b2yvKQp0iUR+CJAnj2ht1IxlQZZq26hUwGkZ4m77JGVAoiB0HPYRl/BszAPw== X-Received: by 2002:a17:903:1392:b0:1d4:f42:de02 with SMTP id jx18-20020a170903139200b001d40f42de02mr16429358plb.16.1704036499826; Sun, 31 Dec 2023 07:28:19 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.16 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:19 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v4 08/11] RISC-V: crypto: add Zvknhb accelerated SHA384/512 implementations Date: Sun, 31 Dec 2023 23:27:40 +0800 Message-Id: <20231231152743.6304-9-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add SHA384 and 512 implementations using Zvknhb vector crypto extension from OpenSSL(openssl/openssl#21923). Co-developed-by: Charalampos Mitrodimas Signed-off-by: Charalampos Mitrodimas Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Co-developed-by: Phoebe Chen Signed-off-by: Phoebe Chen Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Use `SYM_TYPED_FUNC_START` for sha512 indirect-call asm symbol. - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `SHA512_RISCV64` option by default. - Add `asmlinkage` qualifier for crypto asm function. - Rename sha512-riscv64-zvkb-zvknhb to sha512-riscv64-zvknhb-zvkb. - Reorder structure sha512_algs members initialization in the order declared. 
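The registration gate below mirrors the SHA-256 one but requires Zvknhb specifically: as noted in the previous patch, Zvknha only provides the SHA-256 instructions while Zvknhb covers both SHA-256 and SHA-512. An annotated sketch of that check (the authoritative version is check_sha512_ext() in the diff; the function name here is illustrative):

static inline bool sha512_vector_usable_example(void)
{
        /* Zvknhb is mandatory: Zvknha has no SHA-512 instructions. */
        return riscv_isa_extension_available(NULL, ZVKNHB) &&
               /* Zvkb supplies vrev8.v for the big-endian message loads. */
               riscv_isa_extension_available(NULL, ZVKB) &&
               /*
                * The perlasm runs with e64/m2 and AVL=4, i.e. 256 bits per
                * register group, so VLEN >= 128 is enough (VLEN comes from
                * the riscv_vector_vlen() helper added in patch 01).
                */
               riscv_vector_vlen() >= 128;
}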
--- arch/riscv/crypto/Kconfig | 11 + arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/sha512-riscv64-glue.c | 139 +++++++++ .../crypto/sha512-riscv64-zvknhb-zvkb.pl | 265 ++++++++++++++++++ 4 files changed, 422 insertions(+) create mode 100644 arch/riscv/crypto/sha512-riscv64-glue.c create mode 100644 arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index ff1dce4a2bcc..1604782c0eed 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -55,4 +55,15 @@ config CRYPTO_SHA256_RISCV64 - Zvknha or Zvknhb vector crypto extensions - Zvkb vector crypto extension +config CRYPTO_SHA512_RISCV64 + tristate "Hash functions: SHA-384 and SHA-512" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_SHA512 + help + SHA-384 and SHA-512 secure hash algorithm (FIPS 180) + + Architecture: riscv64 using: + - Zvknhb vector crypto extension + - Zvkb vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index e9d7717ec943..8aabef950ad3 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -15,6 +15,9 @@ ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o obj-$(CONFIG_CRYPTO_SHA256_RISCV64) += sha256-riscv64.o sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o +obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o +sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) @@ -33,8 +36,12 @@ $(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl $(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_zvknhb-zvkb.pl $(call cmd,perlasm) +$(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S clean-files += ghash-riscv64-zvkg.S clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S +clean-files += sha512-riscv64-zvknhb-zvkb.S diff --git a/arch/riscv/crypto/sha512-riscv64-glue.c b/arch/riscv/crypto/sha512-riscv64-glue.c new file mode 100644 index 000000000000..3dd8e1c9d402 --- /dev/null +++ b/arch/riscv/crypto/sha512-riscv64-glue.c @@ -0,0 +1,139 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Linux/riscv64 port of the OpenSSL SHA512 implementation for RISC-V 64 + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * sha512 using zvkb and zvknhb vector crypto extension + * + * This asm function will just take the first 512-bit as the sha512 state from + * the pointer to `struct sha512_state`. + */ +asmlinkage void sha512_block_data_order_zvkb_zvknhb(struct sha512_state *digest, + const u8 *data, + int num_blks); + +static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + int ret = 0; + + /* + * Make sure struct sha512_state begins directly with the SHA512 + * 512-bit internal state, as this is what the asm function expect. 
+ */ + BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + ret = sha512_base_do_update( + desc, data, len, sha512_block_data_order_zvkb_zvknhb); + kernel_vector_end(); + } else { + ret = crypto_sha512_update(desc, data, len); + } + + return ret; +} + +static int riscv64_sha512_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + if (crypto_simd_usable()) { + kernel_vector_begin(); + if (len) + sha512_base_do_update( + desc, data, len, + sha512_block_data_order_zvkb_zvknhb); + sha512_base_do_finalize(desc, + sha512_block_data_order_zvkb_zvknhb); + kernel_vector_end(); + + return sha512_base_finish(desc, out); + } + + return crypto_sha512_finup(desc, data, len, out); +} + +static int riscv64_sha512_final(struct shash_desc *desc, u8 *out) +{ + return riscv64_sha512_finup(desc, NULL, 0, out); +} + +static struct shash_alg sha512_algs[] = { + { + .init = sha512_base_init, + .update = riscv64_sha512_update, + .final = riscv64_sha512_final, + .finup = riscv64_sha512_finup, + .descsize = sizeof(struct sha512_state), + .digestsize = SHA512_DIGEST_SIZE, + .base = { + .cra_blocksize = SHA512_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sha512", + .cra_driver_name = "sha512-riscv64-zvknhb-zvkb", + .cra_module = THIS_MODULE, + }, + }, + { + .init = sha384_base_init, + .update = riscv64_sha512_update, + .final = riscv64_sha512_final, + .finup = riscv64_sha512_finup, + .descsize = sizeof(struct sha512_state), + .digestsize = SHA384_DIGEST_SIZE, + .base = { + .cra_blocksize = SHA384_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sha384", + .cra_driver_name = "sha384-riscv64-zvknhb-zvkb", + .cra_module = THIS_MODULE, + }, + }, +}; + +static inline bool check_sha512_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKNHB) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_sha512_mod_init(void) +{ + if (check_sha512_ext()) + return crypto_register_shashes(sha512_algs, + ARRAY_SIZE(sha512_algs)); + + return -ENODEV; +} + +static void __exit riscv64_sha512_mod_fini(void) +{ + crypto_unregister_shashes(sha512_algs, ARRAY_SIZE(sha512_algs)); +} + +module_init(riscv64_sha512_mod_init); +module_exit(riscv64_sha512_mod_fini); + +MODULE_DESCRIPTION("SHA-512 (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("sha384"); +MODULE_ALIAS_CRYPTO("sha512"); diff --git a/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl b/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl new file mode 100644 index 000000000000..cab46ccd1fe2 --- /dev/null +++ b/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.pl @@ -0,0 +1,265 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Phoebe Chen +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. 
Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# The generated code of this file depends on the following RISC-V extensions: +# - RV64I +# - RISC-V vector ('V') with VLEN >= 128 +# - RISC-V Vector SHA-2 Secure Hash extension ('Zvknhb') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +#include + +.text +.option arch, +zvknhb, +zvkb +___ + +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +my $K512 = "K512"; + +# Function arguments +my ($H, $INP, $LEN, $KT, $H2, $INDEX_PATTERN) = ("a0", "a1", "a2", "a3", "t3", "t4"); + +################################################################################ +# void sha512_block_data_order_zvkb_zvknhb(void *c, const void *p, size_t len) +$code .= <<___; +SYM_TYPED_FUNC_START(sha512_block_data_order_zvkb_zvknhb) + vsetivli zero, 4, e64, m2, ta, ma + + # H is stored as {a,b,c,d},{e,f,g,h}, but we need {f,e,b,a},{h,g,d,c} + # The dst vtype is e64m2 and the index vtype is e8mf4. + # We use index-load with the following index pattern at v1. + # i8 index: + # 40, 32, 8, 0 + # Instead of setting the i8 index, we could use a single 32bit + # little-endian value to cover the 4xi8 index. + # i32 value: + # 0x 00 08 20 28 + li $INDEX_PATTERN, 0x00082028 + vsetivli zero, 1, e32, m1, ta, ma + vmv.v.x $V1, $INDEX_PATTERN + + addi $H2, $H, 16 + + # Use index-load to get {f,e,b,a},{h,g,d,c} + vsetivli zero, 4, e64, m2, ta, ma + vluxei8.v $V22, ($H), $V1 + vluxei8.v $V24, ($H2), $V1 + + # Setup v0 mask for the vmerge to replace the first word (idx==0) in key-scheduling. + # The AVL is 4 in SHA, so we could use a single e8(8 element masking) for masking. 
+ vsetivli zero, 1, e8, m1, ta, ma + vmv.v.i $V0, 0x01 + + vsetivli zero, 4, e64, m2, ta, ma + +L_round_loop: + # Load round constants K512 + la $KT, $K512 + + # Decrement length by 1 + addi $LEN, $LEN, -1 + + # Keep the current state as we need it later: H' = H+{a',b',c',...,h'}. + vmv.v.v $V26, $V22 + vmv.v.v $V28, $V24 + + # Load the 1024-bits of the message block in v10-v16 and perform the endian + # swap. + vle64.v $V10, ($INP) + vrev8.v $V10, $V10 + addi $INP, $INP, 32 + vle64.v $V12, ($INP) + vrev8.v $V12, $V12 + addi $INP, $INP, 32 + vle64.v $V14, ($INP) + vrev8.v $V14, $V14 + addi $INP, $INP, 32 + vle64.v $V16, ($INP) + vrev8.v $V16, $V16 + addi $INP, $INP, 32 + + .rept 4 + # Quad-round 0 (+0, v10->v12->v14->v16) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V10 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + vmerge.vvm $V18, $V14, $V12, $V0 + vsha2ms.vv $V10, $V18, $V16 + + # Quad-round 1 (+1, v12->v14->v16->v10) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V12 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + vmerge.vvm $V18, $V16, $V14, $V0 + vsha2ms.vv $V12, $V18, $V10 + + # Quad-round 2 (+2, v14->v16->v10->v12) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V14 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + vmerge.vvm $V18, $V10, $V16, $V0 + vsha2ms.vv $V14, $V18, $V12 + + # Quad-round 3 (+3, v16->v10->v12->v14) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V16 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + vmerge.vvm $V18, $V12, $V10, $V0 + vsha2ms.vv $V16, $V18, $V14 + .endr + + # Quad-round 16 (+0, v10->v12->v14->v16) + # Note that we stop generating new message schedule words (Wt, v10-16) + # as we already generated all the words we end up consuming (i.e., W[79:76]). + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V10 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + + # Quad-round 17 (+1, v12->v14->v16->v10) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V12 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + + # Quad-round 18 (+2, v14->v16->v10->v12) + vle64.v $V20, ($KT) + addi $KT, $KT, 32 + vadd.vv $V18, $V20, $V14 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + + # Quad-round 19 (+3, v16->v10->v12->v14) + vle64.v $V20, ($KT) + # No t1 increment needed. + vadd.vv $V18, $V20, $V16 + vsha2cl.vv $V24, $V22, $V18 + vsha2ch.vv $V22, $V24, $V18 + + # H' = H+{a',b',c',...,h'} + vadd.vv $V22, $V26, $V22 + vadd.vv $V24, $V28, $V24 + bnez $LEN, L_round_loop + + # Store {f,e,b,a},{h,g,d,c} back to {a,b,c,d},{e,f,g,h}. 
+ vsuxei8.v $V22, ($H), $V1 + vsuxei8.v $V24, ($H2), $V1 + + ret +SYM_FUNC_END(sha512_block_data_order_zvkb_zvknhb) + +.p2align 3 +.type $K512,\@object +$K512: + .dword 0x428a2f98d728ae22, 0x7137449123ef65cd + .dword 0xb5c0fbcfec4d3b2f, 0xe9b5dba58189dbbc + .dword 0x3956c25bf348b538, 0x59f111f1b605d019 + .dword 0x923f82a4af194f9b, 0xab1c5ed5da6d8118 + .dword 0xd807aa98a3030242, 0x12835b0145706fbe + .dword 0x243185be4ee4b28c, 0x550c7dc3d5ffb4e2 + .dword 0x72be5d74f27b896f, 0x80deb1fe3b1696b1 + .dword 0x9bdc06a725c71235, 0xc19bf174cf692694 + .dword 0xe49b69c19ef14ad2, 0xefbe4786384f25e3 + .dword 0x0fc19dc68b8cd5b5, 0x240ca1cc77ac9c65 + .dword 0x2de92c6f592b0275, 0x4a7484aa6ea6e483 + .dword 0x5cb0a9dcbd41fbd4, 0x76f988da831153b5 + .dword 0x983e5152ee66dfab, 0xa831c66d2db43210 + .dword 0xb00327c898fb213f, 0xbf597fc7beef0ee4 + .dword 0xc6e00bf33da88fc2, 0xd5a79147930aa725 + .dword 0x06ca6351e003826f, 0x142929670a0e6e70 + .dword 0x27b70a8546d22ffc, 0x2e1b21385c26c926 + .dword 0x4d2c6dfc5ac42aed, 0x53380d139d95b3df + .dword 0x650a73548baf63de, 0x766a0abb3c77b2a8 + .dword 0x81c2c92e47edaee6, 0x92722c851482353b + .dword 0xa2bfe8a14cf10364, 0xa81a664bbc423001 + .dword 0xc24b8b70d0f89791, 0xc76c51a30654be30 + .dword 0xd192e819d6ef5218, 0xd69906245565a910 + .dword 0xf40e35855771202a, 0x106aa07032bbd1b8 + .dword 0x19a4c116b8d2d0c8, 0x1e376c085141ab53 + .dword 0x2748774cdf8eeb99, 0x34b0bcb5e19b48a8 + .dword 0x391c0cb3c5c95a63, 0x4ed8aa4ae3418acb + .dword 0x5b9cca4f7763e373, 0x682e6ff3d6b2b8a3 + .dword 0x748f82ee5defb2fc, 0x78a5636f43172f60 + .dword 0x84c87814a1f0ab72, 0x8cc702081a6439ec + .dword 0x90befffa23631e28, 0xa4506cebde82bde9 + .dword 0xbef9a3f7b2c67915, 0xc67178f2e372532b + .dword 0xca273eceea26619c, 0xd186b8c721c0c207 + .dword 0xeada7dd6cde0eb1e, 0xf57d4f7fee6ed178 + .dword 0x06f067aa72176fba, 0x0a637dc5a2c898a6 + .dword 0x113f9804bef90dae, 0x1b710b35131c471b + .dword 0x28db77f523047d84, 0x32caab7b40c72493 + .dword 0x3c9ebe0a15c9bebc, 0x431d67c49c100d4c + .dword 0x4cc5d4becb3e42b6, 0x597f299cfc657e2a + .dword 0x5fcb6fab3ad6faec, 0x6c44198c4a475817 +.size $K512,.-$K512 +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507245 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mail-pl1-f175.google.com (mail-pl1-f175.google.com [209.85.214.175]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D7C79F9EA for ; Sun, 31 Dec 2023 15:28:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="GAWjKGLN" Received: by mail-pl1-f175.google.com with SMTP id d9443c01a7336-1d3e84fded7so39425145ad.1 for ; Sun, 31 Dec 2023 07:28:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036503; x=1704641303; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=hXPPBCQQI1J2vwyZpAmmynu+6ALlp8WBjY508kn9xmw=; 
b=GAWjKGLNf3dN0FmdZP2affhts0UagoOpQUeCrt2+r67CCYoZvA8fVlQugWybebqEfO VEFNPncEAxI2dbi79/zvZaxqDbMPcFlC1ZsmxBvMJS1NZ0w/82JgMosXc4SXMYv8cR2T hhKxlyKRmto97MDHYVdKeSZsS1kEorJDFAI1scin754B7+dvHiA5Up8qqU/lzHyljnby HM5246LofklffBAUfrhqnwitV03RBcNmP5TugCWMsSlFqWX9sOAxGfVPCqrfAOYffR2Z twO/tIuJFjtj+IcQJaMpi9EAP5rnuVzqraLY8Q939ckkVcPemnNV/Wc81Z2JDw2kMCRB tn9Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036503; x=1704641303; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hXPPBCQQI1J2vwyZpAmmynu+6ALlp8WBjY508kn9xmw=; b=XNv7oBMGx9JlD74Xez4SDAAIH8WrrSXvMTWuZzD7ZWFQCqlSgseIVyb9pgQirlNotN Au4ewrI7vMWVPz5lRiEQfTKwaDE4ZxbLV6M1ROc6O3rwsRnjVs8HzpfqDZfmntfbGPut p4mozdiFo+NfnAFP/Ixena3eW0MTNxogMxiLm65dNdZc5zayHpvF2gYaC06edeuP6YyE wD9KD5QRlBrWqfj57cgiyM194UmcH6ngJjQSM1I81HdhE7rNfUXCWeTYQz3BKLDN0UcY kyGibXlqhn9MJ6qtLqRstroR1d4TItWxOGsd/IDbZs+mTEsTiTKAGS4YHpvqxpDf1JuZ 33Dw== X-Gm-Message-State: AOJu0YzK/1WijmMu/1ASoN2IthDRTG70LzK8+q2/PXFTb69LYRotLGt4 E9jx7xys5uDPTTnFZENLOGCuzz8EHVfUcw== X-Google-Smtp-Source: AGHT+IFTJ2mwpS/mm9mFWteGyyr+L6PbECUy4sUM62B2IvRa8N3vPhAbnopbiRnHLKV1dQMJGbEpAw== X-Received: by 2002:a17:902:d3c6:b0:1d3:d8e3:266 with SMTP id w6-20020a170902d3c600b001d3d8e30266mr5312184plb.65.1704036503238; Sun, 31 Dec 2023 07:28:23 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.20 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:22 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v4 09/11] RISC-V: crypto: add Zvksed accelerated SM4 implementation Date: Sun, 31 Dec 2023 23:27:41 +0800 Message-Id: <20231231152743.6304-10-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add SM4 implementation using Zvksed vector crypto extension from OpenSSL (openssl/openssl#21923). The perlasm here is different from the original implementation in OpenSSL. In OpenSSL, SM4 has the separated set_encrypt_key and set_decrypt_key functions. In kernel, these set_key functions are merged into a single one in order to skip the redundant key expanding instructions. Co-developed-by: Christoph Müllner Signed-off-by: Christoph Müllner Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `SM4_RISCV64` option by default. - Add the missed `static` declaration for riscv64_sm4_zvksed_alg. - Add `asmlinkage` qualifier for crypto asm function. - Rename sm4-riscv64-zvkb-zvksed to sm4-riscv64-zvksed-zvkb. - Reorder structure riscv64_sm4_zvksed_zvkb_alg members initialization in the order declared. 
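To make the merged set_key point concrete: the single asm entry point runs the vsm4k.vi key expansion once and stores the 32 round keys twice, forward for encryption and reversed (via strided stores) for decryption, rather than expanding the same user key twice as the split OpenSSL set_encrypt_key/set_decrypt_key entry points do. A sketch of the caller's view (the wrapper name is illustrative; the real setkey below additionally checks crypto_simd_usable() and falls back to sm4_expandkey()):

#include <crypto/sm4.h>         /* struct sm4_ctx: rkey_enc[32], rkey_dec[32] */
#include <asm/vector.h>         /* kernel_vector_begin()/kernel_vector_end() */
#include <linux/linkage.h>

asmlinkage int rv64i_zvksed_sm4_set_key(const u8 *user_key,
                                        unsigned int key_len,
                                        u32 *enc_key, u32 *dec_key);

static int sm4_expand_both_example(struct sm4_ctx *ctx, const u8 *key,
                                   unsigned int key_len)
{
        int err = 0;

        kernel_vector_begin();
        /* One expansion fills both schedules; a non-zero return means the
         * key length was rejected (only 128-bit keys are accepted). */
        if (rv64i_zvksed_sm4_set_key(key, key_len, ctx->rkey_enc,
                                     ctx->rkey_dec))
                err = -EINVAL;
        kernel_vector_end();

        return err;
}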
--- arch/riscv/crypto/Kconfig | 17 ++ arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/sm4-riscv64-glue.c | 121 +++++++++++ arch/riscv/crypto/sm4-riscv64-zvksed.pl | 268 ++++++++++++++++++++++++ 4 files changed, 413 insertions(+) create mode 100644 arch/riscv/crypto/sm4-riscv64-glue.c create mode 100644 arch/riscv/crypto/sm4-riscv64-zvksed.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 1604782c0eed..cdf7fead0636 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -66,4 +66,21 @@ config CRYPTO_SHA512_RISCV64 - Zvknhb vector crypto extension - Zvkb vector crypto extension +config CRYPTO_SM4_RISCV64 + tristate "Ciphers: SM4 (ShangMi 4)" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_ALGAPI + select CRYPTO_SM4 + help + SM4 cipher algorithms (OSCCA GB/T 32907-2016, + ISO/IEC 18033-3:2010/Amd 1:2021) + + SM4 (GBT.32907-2016) is a cryptographic standard issued by the + Organization of State Commercial Administration of China (OSCCA) + as an authorized cryptographic algorithms for the use within China. + + Architecture: riscv64 using: + - Zvksed vector crypto extension + - Zvkb vector crypto extension + endmenu diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 8aabef950ad3..8e34861bba34 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -18,6 +18,9 @@ sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o +obj-$(CONFIG_CRYPTO_SM4_RISCV64) += sm4-riscv64.o +sm4-riscv64-y := sm4-riscv64-glue.o sm4-riscv64-zvksed.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) void $(@) @@ -39,9 +42,13 @@ $(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_z $(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl $(call cmd,perlasm) +$(obj)/sm4-riscv64-zvksed.S: $(src)/sm4-riscv64-zvksed.pl + $(call cmd,perlasm) + clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S clean-files += ghash-riscv64-zvkg.S clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S clean-files += sha512-riscv64-zvknhb-zvkb.S +clean-files += sm4-riscv64-zvksed.S diff --git a/arch/riscv/crypto/sm4-riscv64-glue.c b/arch/riscv/crypto/sm4-riscv64-glue.c new file mode 100644 index 000000000000..9d9d24b67ee3 --- /dev/null +++ b/arch/riscv/crypto/sm4-riscv64-glue.c @@ -0,0 +1,121 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Linux/riscv64 port of the OpenSSL SM4 implementation for RISC-V 64 + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. 
+ * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* sm4 using zvksed vector crypto extension */ +asmlinkage void rv64i_zvksed_sm4_encrypt(const u8 *in, u8 *out, const u32 *key); +asmlinkage void rv64i_zvksed_sm4_decrypt(const u8 *in, u8 *out, const u32 *key); +asmlinkage int rv64i_zvksed_sm4_set_key(const u8 *user_key, + unsigned int key_len, u32 *enc_key, + u32 *dec_key); + +static int riscv64_sm4_setkey_zvksed(struct crypto_tfm *tfm, const u8 *key, + unsigned int key_len) +{ + struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); + int ret = 0; + + if (crypto_simd_usable()) { + kernel_vector_begin(); + if (rv64i_zvksed_sm4_set_key(key, key_len, ctx->rkey_enc, + ctx->rkey_dec)) + ret = -EINVAL; + kernel_vector_end(); + } else { + ret = sm4_expandkey(ctx, key, key_len); + } + + return ret; +} + +static void riscv64_sm4_encrypt_zvksed(struct crypto_tfm *tfm, u8 *dst, + const u8 *src) +{ + const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + rv64i_zvksed_sm4_encrypt(src, dst, ctx->rkey_enc); + kernel_vector_end(); + } else { + sm4_crypt_block(ctx->rkey_enc, dst, src); + } +} + +static void riscv64_sm4_decrypt_zvksed(struct crypto_tfm *tfm, u8 *dst, + const u8 *src) +{ + const struct sm4_ctx *ctx = crypto_tfm_ctx(tfm); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + rv64i_zvksed_sm4_decrypt(src, dst, ctx->rkey_dec); + kernel_vector_end(); + } else { + sm4_crypt_block(ctx->rkey_dec, dst, src); + } +} + +static struct crypto_alg riscv64_sm4_zvksed_zvkb_alg = { + .cra_flags = CRYPTO_ALG_TYPE_CIPHER, + .cra_blocksize = SM4_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct sm4_ctx), + .cra_priority = 300, + .cra_name = "sm4", + .cra_driver_name = "sm4-riscv64-zvksed-zvkb", + .cra_cipher = { + .cia_min_keysize = SM4_KEY_SIZE, + .cia_max_keysize = SM4_KEY_SIZE, + .cia_setkey = riscv64_sm4_setkey_zvksed, + .cia_encrypt = riscv64_sm4_encrypt_zvksed, + .cia_decrypt = riscv64_sm4_decrypt_zvksed, + }, + .cra_module = THIS_MODULE, +}; + +static inline bool check_sm4_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKSED) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_sm4_mod_init(void) +{ + if (check_sm4_ext()) + return crypto_register_alg(&riscv64_sm4_zvksed_zvkb_alg); + + return -ENODEV; +} + +static void __exit riscv64_sm4_mod_fini(void) +{ + crypto_unregister_alg(&riscv64_sm4_zvksed_zvkb_alg); +} + +module_init(riscv64_sm4_mod_init); +module_exit(riscv64_sm4_mod_fini); + +MODULE_DESCRIPTION("SM4 (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("sm4"); diff --git a/arch/riscv/crypto/sm4-riscv64-zvksed.pl b/arch/riscv/crypto/sm4-riscv64-zvksed.pl new file mode 100644 index 000000000000..1873160aac2f --- /dev/null +++ b/arch/riscv/crypto/sm4-riscv64-zvksed.pl @@ -0,0 +1,268 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Jerry Shih +# All rights reserved. 
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# The generated code of this file depends on the following RISC-V extensions: +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector SM4 Block Cipher extension ('Zvksed') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +.text +.option arch, +zvksed, +zvkb +___ + +#### +# int rv64i_zvksed_sm4_set_key(const u8 *user_key, unsigned int key_len, +# u32 *enc_key, u32 *dec_key); +# +{ +my ($ukey,$key_len,$enc_key,$dec_key)=("a0","a1","a2","a3"); +my ($fk,$stride)=("a4","a5"); +my ($t0,$t1)=("t0","t1"); +my ($vukey,$vfk,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10"); +$code .= <<___; +.p2align 3 +.globl rv64i_zvksed_sm4_set_key +.type rv64i_zvksed_sm4_set_key,\@function +rv64i_zvksed_sm4_set_key: + li $t0, 16 + beq $t0, $key_len, 1f + li a0, 1 + ret +1: + + vsetivli zero, 4, e32, m1, ta, ma + + # Load the user key + vle32.v $vukey, ($ukey) + vrev8.v $vukey, $vukey + + # Load the FK. + la $fk, FK + vle32.v $vfk, ($fk) + + # Generate round keys. 
+ vxor.vv $vukey, $vukey, $vfk + vsm4k.vi $vk0, $vukey, 0 # rk[0:3] + vsm4k.vi $vk1, $vk0, 1 # rk[4:7] + vsm4k.vi $vk2, $vk1, 2 # rk[8:11] + vsm4k.vi $vk3, $vk2, 3 # rk[12:15] + vsm4k.vi $vk4, $vk3, 4 # rk[16:19] + vsm4k.vi $vk5, $vk4, 5 # rk[20:23] + vsm4k.vi $vk6, $vk5, 6 # rk[24:27] + vsm4k.vi $vk7, $vk6, 7 # rk[28:31] + + # Store enc round keys + vse32.v $vk0, ($enc_key) # rk[0:3] + addi $enc_key, $enc_key, 16 + vse32.v $vk1, ($enc_key) # rk[4:7] + addi $enc_key, $enc_key, 16 + vse32.v $vk2, ($enc_key) # rk[8:11] + addi $enc_key, $enc_key, 16 + vse32.v $vk3, ($enc_key) # rk[12:15] + addi $enc_key, $enc_key, 16 + vse32.v $vk4, ($enc_key) # rk[16:19] + addi $enc_key, $enc_key, 16 + vse32.v $vk5, ($enc_key) # rk[20:23] + addi $enc_key, $enc_key, 16 + vse32.v $vk6, ($enc_key) # rk[24:27] + addi $enc_key, $enc_key, 16 + vse32.v $vk7, ($enc_key) # rk[28:31] + + # Store dec round keys in reverse order + addi $dec_key, $dec_key, 12 + li $stride, -4 + vsse32.v $vk7, ($dec_key), $stride # rk[31:28] + addi $dec_key, $dec_key, 16 + vsse32.v $vk6, ($dec_key), $stride # rk[27:24] + addi $dec_key, $dec_key, 16 + vsse32.v $vk5, ($dec_key), $stride # rk[23:20] + addi $dec_key, $dec_key, 16 + vsse32.v $vk4, ($dec_key), $stride # rk[19:16] + addi $dec_key, $dec_key, 16 + vsse32.v $vk3, ($dec_key), $stride # rk[15:12] + addi $dec_key, $dec_key, 16 + vsse32.v $vk2, ($dec_key), $stride # rk[11:8] + addi $dec_key, $dec_key, 16 + vsse32.v $vk1, ($dec_key), $stride # rk[7:4] + addi $dec_key, $dec_key, 16 + vsse32.v $vk0, ($dec_key), $stride # rk[3:0] + + li a0, 0 + ret +.size rv64i_zvksed_sm4_set_key,.-rv64i_zvksed_sm4_set_key +___ +} + +#### +# void rv64i_zvksed_sm4_encrypt(const unsigned char *in, unsigned char *out, +# const SM4_KEY *key); +# +{ +my ($in,$out,$keys,$stride)=("a0","a1","a2","t0"); +my ($vdata,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7,$vgen)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10"); +$code .= <<___; +.p2align 3 +.globl rv64i_zvksed_sm4_encrypt +.type rv64i_zvksed_sm4_encrypt,\@function +rv64i_zvksed_sm4_encrypt: + vsetivli zero, 4, e32, m1, ta, ma + + # Load input data + vle32.v $vdata, ($in) + vrev8.v $vdata, $vdata + + # Order of elements was adjusted in sm4_set_key() + # Encrypt with all keys + vle32.v $vk0, ($keys) # rk[0:3] + vsm4r.vs $vdata, $vk0 + addi $keys, $keys, 16 + vle32.v $vk1, ($keys) # rk[4:7] + vsm4r.vs $vdata, $vk1 + addi $keys, $keys, 16 + vle32.v $vk2, ($keys) # rk[8:11] + vsm4r.vs $vdata, $vk2 + addi $keys, $keys, 16 + vle32.v $vk3, ($keys) # rk[12:15] + vsm4r.vs $vdata, $vk3 + addi $keys, $keys, 16 + vle32.v $vk4, ($keys) # rk[16:19] + vsm4r.vs $vdata, $vk4 + addi $keys, $keys, 16 + vle32.v $vk5, ($keys) # rk[20:23] + vsm4r.vs $vdata, $vk5 + addi $keys, $keys, 16 + vle32.v $vk6, ($keys) # rk[24:27] + vsm4r.vs $vdata, $vk6 + addi $keys, $keys, 16 + vle32.v $vk7, ($keys) # rk[28:31] + vsm4r.vs $vdata, $vk7 + + # Save the ciphertext (in reverse element order) + vrev8.v $vdata, $vdata + li $stride, -4 + addi $out, $out, 12 + vsse32.v $vdata, ($out), $stride + + ret +.size rv64i_zvksed_sm4_encrypt,.-rv64i_zvksed_sm4_encrypt +___ +} + +#### +# void rv64i_zvksed_sm4_decrypt(const unsigned char *in, unsigned char *out, +# const SM4_KEY *key); +# +{ +my ($in,$out,$keys,$stride)=("a0","a1","a2","t0"); +my ($vdata,$vk0,$vk1,$vk2,$vk3,$vk4,$vk5,$vk6,$vk7,$vgen)=("v1","v2","v3","v4","v5","v6","v7","v8","v9","v10"); +$code .= <<___; +.p2align 3 +.globl rv64i_zvksed_sm4_decrypt +.type rv64i_zvksed_sm4_decrypt,\@function +rv64i_zvksed_sm4_decrypt: + vsetivli zero, 4, e32, m1, 
ta, ma + + # Load input data + vle32.v $vdata, ($in) + vrev8.v $vdata, $vdata + + # Order of key elements was adjusted in sm4_set_key() + # Decrypt with all keys + vle32.v $vk7, ($keys) # rk[31:28] + vsm4r.vs $vdata, $vk7 + addi $keys, $keys, 16 + vle32.v $vk6, ($keys) # rk[27:24] + vsm4r.vs $vdata, $vk6 + addi $keys, $keys, 16 + vle32.v $vk5, ($keys) # rk[23:20] + vsm4r.vs $vdata, $vk5 + addi $keys, $keys, 16 + vle32.v $vk4, ($keys) # rk[19:16] + vsm4r.vs $vdata, $vk4 + addi $keys, $keys, 16 + vle32.v $vk3, ($keys) # rk[15:11] + vsm4r.vs $vdata, $vk3 + addi $keys, $keys, 16 + vle32.v $vk2, ($keys) # rk[11:8] + vsm4r.vs $vdata, $vk2 + addi $keys, $keys, 16 + vle32.v $vk1, ($keys) # rk[7:4] + vsm4r.vs $vdata, $vk1 + addi $keys, $keys, 16 + vle32.v $vk0, ($keys) # rk[3:0] + vsm4r.vs $vdata, $vk0 + + # Save the ciphertext (in reverse element order) + vrev8.v $vdata, $vdata + li $stride, -4 + addi $out, $out, 12 + vsse32.v $vdata, ($out), $stride + + ret +.size rv64i_zvksed_sm4_decrypt,.-rv64i_zvksed_sm4_decrypt +___ +} + +$code .= <<___; +# Family Key (little-endian 32-bit chunks) +.p2align 3 +FK: + .word 0xA3B1BAC6, 0x56AA3350, 0x677D9197, 0xB27022DC +.size FK,.-FK +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507246 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mail-pf1-f176.google.com (mail-pf1-f176.google.com [209.85.210.176]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2DB87107A9 for ; Sun, 31 Dec 2023 15:28:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="h5HmoMRJ" Received: by mail-pf1-f176.google.com with SMTP id d2e1a72fcca58-6d9f9689195so1525260b3a.3 for ; Sun, 31 Dec 2023 07:28:27 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036506; x=1704641306; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=4btUclgL8prP5a5xsfbrxrrvqeEp8OCXhpchFT4JJow=; b=h5HmoMRJnW2icHN7jazJlhEZJ7fRKwffHWbcqqAF1zfaVmpjkMDrUr4AEaPsu+A2e5 kRt9U3abkI3mazF3cFbJcikg9VwhdOe2xjVxKCfh5YhDIZ7QYKeZWT6GXlF5h7yDC58s P4+asGXCclcIfB+dELpdmB4EWir7kcvlwu+ibQncxYXPsLYYUfNX4Y6+dPoQMfNYy0Na Vju3tsYkjzSsX9BX1H+r4FN5fJy1CFLbufx430oXEGsX7AUz1ZfxK/2rohMpfz5zcFFg pZKfk660qdXpx5UkXmgcW0SiOFUYfZgV2SjNDNk6DrwcjIxz1fi3tNy70M5SgBu3Yxwd GZgw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036506; x=1704641306; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4btUclgL8prP5a5xsfbrxrrvqeEp8OCXhpchFT4JJow=; b=YtvTxnugjsxzrW8Dx/LEjUmUqpe6ZrvpMDSLml8BKspWN7Ey4g09KO8Lvku3vrDHW9 2e6yeKUw6ytEfVSYkSQZSzCnW7UU0uCCLJ0c5ANVlqQLXVlKA34kp25t0Ef3ZM3lv0ke kp8flME045blVMoIJkUQv26zfNEprbsSncKJ9F9eFXiEhM4AXph8RQtw1LppM0FnoSyy vdfqeHMdNkYuqsnMKRenb9WEqxW9act/N0JvUukDviRsjUXzD5Aq9UyFJktyi9EEjVPK 
oZbzKs+AAGST5teQmlrA36w5odAp89p80BXFe+bArvIgHDB4U55kbEd6ZkkuJjX4ctiN TMNg== X-Gm-Message-State: AOJu0YxkwbBxPViPoW+QFZPG7PPUWkrEB0g3RQVAgukvYNd8KczgJa6b 727cFC96BLW9EsDMe88+hrrw1M5/tFoVAQ== X-Google-Smtp-Source: AGHT+IGs+2VkVLO9wT3XYs3ZUo/QaAslGvl/LI9b9+V4HNtX175e8ul7UJLbAGqZqbTBd5UyWRSFTA== X-Received: by 2002:a05:6a20:5294:b0:196:5929:dcdd with SMTP id o20-20020a056a20529400b001965929dcddmr1930861pzg.6.1704036506477; Sun, 31 Dec 2023 07:28:26 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.23 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:26 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v4 10/11] RISC-V: crypto: add Zvksh accelerated SM3 implementation Date: Sun, 31 Dec 2023 23:27:42 +0800 Message-Id: <20231231152743.6304-11-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add SM3 implementation using Zvksh vector crypto extension from OpenSSL (openssl/openssl#21923). Co-developed-by: Christoph Müllner Signed-off-by: Christoph Müllner Co-developed-by: Heiko Stuebner Signed-off-by: Heiko Stuebner Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. Changelog v3: - Use `SYM_TYPED_FUNC_START` for sm3 indirect-call asm symbol. - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `SM3_RISCV64` option by default. - Add `asmlinkage` qualifier for crypto asm function. - Rename sm3-riscv64-zvkb-zvksh to sm3-riscv64-zvksh-zvkb. - Reorder structure sm3_alg members initialization in the order declared. 
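For reference, the hash registered below is reached through the normal crypto API lookup: it registers under cra_name "sm3" with cra_priority 150, so it is preferred over the generic C implementation whenever riscv64_sm3_mod_init() finds Zvksh, Zvkb and VLEN >= 128. A minimal in-kernel sketch of consuming it through the synchronous shash interface (illustration only, not part of this patch; sm3_digest_example() is a made-up helper and error handling is trimmed):

#include <crypto/hash.h>
#include <crypto/sm3.h>
#include <linux/err.h>

static int sm3_digest_example(const u8 *data, unsigned int len,
			      u8 out[SM3_DIGEST_SIZE])
{
	/* Resolves to the highest-priority "sm3" provider; with this
	 * driver registered and the extensions present, that is
	 * "sm3-riscv64-zvksh-zvkb" (priority 150 vs. 100 for sm3-generic). */
	struct crypto_shash *tfm = crypto_alloc_shash("sm3", 0, 0);
	int err;

	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_shash_tfm_digest(tfm, data, len, out);
	crypto_free_shash(tfm);
	return err;
}

Which driver actually backs a given tfm can be confirmed at runtime from the "driver : sm3-riscv64-zvksh-zvkb" entry in /proc/crypto.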
--- arch/riscv/crypto/Kconfig | 12 ++ arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/sm3-riscv64-glue.c | 124 ++++++++++++++ arch/riscv/crypto/sm3-riscv64-zvksh.pl | 227 +++++++++++++++++++++++++ 4 files changed, 370 insertions(+) create mode 100644 arch/riscv/crypto/sm3-riscv64-glue.c create mode 100644 arch/riscv/crypto/sm3-riscv64-zvksh.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index cdf7fead0636..81dcae72c477 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -66,6 +66,18 @@ config CRYPTO_SHA512_RISCV64 - Zvknhb vector crypto extension - Zvkb vector crypto extension +config CRYPTO_SM3_RISCV64 + tristate "Hash functions: SM3 (ShangMi 3)" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_HASH + select CRYPTO_SM3 + help + SM3 (ShangMi 3) secure hash function (OSCCA GM/T 0004-2012) + + Architecture: riscv64 using: + - Zvksh vector crypto extension + - Zvkb vector crypto extension + config CRYPTO_SM4_RISCV64 tristate "Ciphers: SM4 (ShangMi 4)" depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index 8e34861bba34..b1f857695c1c 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -18,6 +18,9 @@ sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o +obj-$(CONFIG_CRYPTO_SM3_RISCV64) += sm3-riscv64.o +sm3-riscv64-y := sm3-riscv64-glue.o sm3-riscv64-zvksh.o + obj-$(CONFIG_CRYPTO_SM4_RISCV64) += sm4-riscv64.o sm4-riscv64-y := sm4-riscv64-glue.o sm4-riscv64-zvksed.o @@ -42,6 +45,9 @@ $(obj)/sha256-riscv64-zvknha_or_zvknhb-zvkb.S: $(src)/sha256-riscv64-zvknha_or_z $(obj)/sha512-riscv64-zvknhb-zvkb.S: $(src)/sha512-riscv64-zvknhb-zvkb.pl $(call cmd,perlasm) +$(obj)/sm3-riscv64-zvksh.S: $(src)/sm3-riscv64-zvksh.pl + $(call cmd,perlasm) + $(obj)/sm4-riscv64-zvksed.S: $(src)/sm4-riscv64-zvksed.pl $(call cmd,perlasm) @@ -51,4 +57,5 @@ clean-files += aes-riscv64-zvkned-zvkb.S clean-files += ghash-riscv64-zvkg.S clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S clean-files += sha512-riscv64-zvknhb-zvkb.S +clean-files += sm3-riscv64-zvksh.S clean-files += sm4-riscv64-zvksed.S diff --git a/arch/riscv/crypto/sm3-riscv64-glue.c b/arch/riscv/crypto/sm3-riscv64-glue.c new file mode 100644 index 000000000000..0e5a2b84c930 --- /dev/null +++ b/arch/riscv/crypto/sm3-riscv64-glue.c @@ -0,0 +1,124 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Linux/riscv64 port of the OpenSSL SM3 implementation for RISC-V 64 + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner + * + * Copyright (C) 2023 SiFive, Inc. + * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* + * sm3 using zvksh vector crypto extension + * + * This asm function will just take the first 256-bit as the sm3 state from + * the pointer to `struct sm3_state`. + */ +asmlinkage void ossl_hwsm3_block_data_order_zvksh(struct sm3_state *digest, + u8 const *o, int num); + +static int riscv64_sm3_update(struct shash_desc *desc, const u8 *data, + unsigned int len) +{ + int ret = 0; + + /* + * Make sure struct sm3_state begins directly with the SM3 256-bit internal + * state, as this is what the asm function expect. 
+ */ + BUILD_BUG_ON(offsetof(struct sm3_state, state) != 0); + + if (crypto_simd_usable()) { + kernel_vector_begin(); + ret = sm3_base_do_update(desc, data, len, + ossl_hwsm3_block_data_order_zvksh); + kernel_vector_end(); + } else { + sm3_update(shash_desc_ctx(desc), data, len); + } + + return ret; +} + +static int riscv64_sm3_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) +{ + struct sm3_state *ctx; + + if (crypto_simd_usable()) { + kernel_vector_begin(); + if (len) + sm3_base_do_update(desc, data, len, + ossl_hwsm3_block_data_order_zvksh); + sm3_base_do_finalize(desc, ossl_hwsm3_block_data_order_zvksh); + kernel_vector_end(); + + return sm3_base_finish(desc, out); + } + + ctx = shash_desc_ctx(desc); + if (len) + sm3_update(ctx, data, len); + sm3_final(ctx, out); + + return 0; +} + +static int riscv64_sm3_final(struct shash_desc *desc, u8 *out) +{ + return riscv64_sm3_finup(desc, NULL, 0, out); +} + +static struct shash_alg sm3_alg = { + .init = sm3_base_init, + .update = riscv64_sm3_update, + .final = riscv64_sm3_final, + .finup = riscv64_sm3_finup, + .descsize = sizeof(struct sm3_state), + .digestsize = SM3_DIGEST_SIZE, + .base = { + .cra_blocksize = SM3_BLOCK_SIZE, + .cra_priority = 150, + .cra_name = "sm3", + .cra_driver_name = "sm3-riscv64-zvksh-zvkb", + .cra_module = THIS_MODULE, + }, +}; + +static inline bool check_sm3_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKSH) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_sm3_mod_init(void) +{ + if (check_sm3_ext()) + return crypto_register_shash(&sm3_alg); + + return -ENODEV; +} + +static void __exit riscv64_sm3_mod_fini(void) +{ + crypto_unregister_shash(&sm3_alg); +} + +module_init(riscv64_sm3_mod_init); +module_exit(riscv64_sm3_mod_fini); + +MODULE_DESCRIPTION("SM3 (RISC-V accelerated)"); +MODULE_AUTHOR("Heiko Stuebner "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("sm3"); diff --git a/arch/riscv/crypto/sm3-riscv64-zvksh.pl b/arch/riscv/crypto/sm3-riscv64-zvksh.pl new file mode 100644 index 000000000000..c94c99111a71 --- /dev/null +++ b/arch/riscv/crypto/sm3-riscv64-zvksh.pl @@ -0,0 +1,227 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You can obtain +# a copy in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Christoph Müllner +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# The generated code of this file depends on the following RISC-V extensions: +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector SM3 Secure Hash extension ('Zvksh') +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT,">$output"; + +my $code=<<___; +#include + +.text +.option arch, +zvksh, +zvkb +___ + +################################################################################ +# ossl_hwsm3_block_data_order_zvksh(SM3_CTX *c, const void *p, size_t num); +{ +my ($CTX, $INPUT, $NUM) = ("a0", "a1", "a2"); +my ($V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, + $V8, $V9, $V10, $V11, $V12, $V13, $V14, $V15, + $V16, $V17, $V18, $V19, $V20, $V21, $V22, $V23, + $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map("v$_",(0..31)); + +$code .= <<___; +SYM_TYPED_FUNC_START(ossl_hwsm3_block_data_order_zvksh) + vsetivli zero, 8, e32, m2, ta, ma + + # Load initial state of hash context (c->A-H). + vle32.v $V0, ($CTX) + vrev8.v $V0, $V0 + +L_sm3_loop: + # Copy the previous state to v2. + # It will be XOR'ed with the current state at the end of the round. + vmv.v.v $V2, $V0 + + # Load the 64B block in 2x32B chunks. + vle32.v $V6, ($INPUT) # v6 := {w7, ..., w0} + addi $INPUT, $INPUT, 32 + + vle32.v $V8, ($INPUT) # v8 := {w15, ..., w8} + addi $INPUT, $INPUT, 32 + + addi $NUM, $NUM, -1 + + # As vsm3c consumes only w0, w1, w4, w5 we need to slide the input + # 2 elements down so we process elements w2, w3, w6, w7 + # This will be repeated for each odd round. 
+ vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w7, ..., w2} + + vsm3c.vi $V0, $V6, 0 + vsm3c.vi $V0, $V4, 1 + + # Prepare a vector with {w11, ..., w4} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w7, ..., w4} + vslideup.vi $V4, $V8, 4 # v4 := {w11, w10, w9, w8, w7, w6, w5, w4} + + vsm3c.vi $V0, $V4, 2 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w11, w10, w9, w8, w7, w6} + vsm3c.vi $V0, $V4, 3 + + vsm3c.vi $V0, $V8, 4 + vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w15, w14, w13, w12, w11, w10} + vsm3c.vi $V0, $V4, 5 + + vsm3me.vv $V6, $V8, $V6 # v6 := {w23, w22, w21, w20, w19, w18, w17, w16} + + # Prepare a register with {w19, w18, w17, w16, w15, w14, w13, w12} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w15, w14, w13, w12} + vslideup.vi $V4, $V6, 4 # v4 := {w19, w18, w17, w16, w15, w14, w13, w12} + + vsm3c.vi $V0, $V4, 6 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w19, w18, w17, w16, w15, w14} + vsm3c.vi $V0, $V4, 7 + + vsm3c.vi $V0, $V6, 8 + vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w23, w22, w21, w20, w19, w18} + vsm3c.vi $V0, $V4, 9 + + vsm3me.vv $V8, $V6, $V8 # v8 := {w31, w30, w29, w28, w27, w26, w25, w24} + + # Prepare a register with {w27, w26, w25, w24, w23, w22, w21, w20} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w23, w22, w21, w20} + vslideup.vi $V4, $V8, 4 # v4 := {w27, w26, w25, w24, w23, w22, w21, w20} + + vsm3c.vi $V0, $V4, 10 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w27, w26, w25, w24, w23, w22} + vsm3c.vi $V0, $V4, 11 + + vsm3c.vi $V0, $V8, 12 + vslidedown.vi $V4, $V8, 2 # v4 := {x, X, w31, w30, w29, w28, w27, w26} + vsm3c.vi $V0, $V4, 13 + + vsm3me.vv $V6, $V8, $V6 # v6 := {w32, w33, w34, w35, w36, w37, w38, w39} + + # Prepare a register with {w35, w34, w33, w32, w31, w30, w29, w28} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w31, w30, w29, w28} + vslideup.vi $V4, $V6, 4 # v4 := {w35, w34, w33, w32, w31, w30, w29, w28} + + vsm3c.vi $V0, $V4, 14 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w35, w34, w33, w32, w31, w30} + vsm3c.vi $V0, $V4, 15 + + vsm3c.vi $V0, $V6, 16 + vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w39, w38, w37, w36, w35, w34} + vsm3c.vi $V0, $V4, 17 + + vsm3me.vv $V8, $V6, $V8 # v8 := {w47, w46, w45, w44, w43, w42, w41, w40} + + # Prepare a register with {w43, w42, w41, w40, w39, w38, w37, w36} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w39, w38, w37, w36} + vslideup.vi $V4, $V8, 4 # v4 := {w43, w42, w41, w40, w39, w38, w37, w36} + + vsm3c.vi $V0, $V4, 18 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w43, w42, w41, w40, w39, w38} + vsm3c.vi $V0, $V4, 19 + + vsm3c.vi $V0, $V8, 20 + vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w47, w46, w45, w44, w43, w42} + vsm3c.vi $V0, $V4, 21 + + vsm3me.vv $V6, $V8, $V6 # v6 := {w55, w54, w53, w52, w51, w50, w49, w48} + + # Prepare a register with {w51, w50, w49, w48, w47, w46, w45, w44} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w47, w46, w45, w44} + vslideup.vi $V4, $V6, 4 # v4 := {w51, w50, w49, w48, w47, w46, w45, w44} + + vsm3c.vi $V0, $V4, 22 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w51, w50, w49, w48, w47, w46} + vsm3c.vi $V0, $V4, 23 + + vsm3c.vi $V0, $V6, 24 + vslidedown.vi $V4, $V6, 2 # v4 := {X, X, w55, w54, w53, w52, w51, w50} + vsm3c.vi $V0, $V4, 25 + + vsm3me.vv $V8, $V6, $V8 # v8 := {w63, w62, w61, w60, w59, w58, w57, w56} + + # Prepare a register with {w59, w58, w57, w56, w55, w54, w53, w52} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w55, w54, w53, w52} + vslideup.vi $V4, $V8, 4 # v4 := {w59, w58, w57, w56, w55, w54, w53, w52} + + vsm3c.vi $V0, $V4, 26 + vslidedown.vi $V4, $V4, 2 # v4 
:= {X, X, w59, w58, w57, w56, w55, w54} + vsm3c.vi $V0, $V4, 27 + + vsm3c.vi $V0, $V8, 28 + vslidedown.vi $V4, $V8, 2 # v4 := {X, X, w63, w62, w61, w60, w59, w58} + vsm3c.vi $V0, $V4, 29 + + vsm3me.vv $V6, $V8, $V6 # v6 := {w71, w70, w69, w68, w67, w66, w65, w64} + + # Prepare a register with {w67, w66, w65, w64, w63, w62, w61, w60} + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, X, X, w63, w62, w61, w60} + vslideup.vi $V4, $V6, 4 # v4 := {w67, w66, w65, w64, w63, w62, w61, w60} + + vsm3c.vi $V0, $V4, 30 + vslidedown.vi $V4, $V4, 2 # v4 := {X, X, w67, w66, w65, w64, w63, w62} + vsm3c.vi $V0, $V4, 31 + + # XOR in the previous state. + vxor.vv $V0, $V0, $V2 + + bnez $NUM, L_sm3_loop # Check if there are any more block to process +L_sm3_end: + vrev8.v $V0, $V0 + vse32.v $V0, ($CTX) + ret +SYM_FUNC_END(ossl_hwsm3_block_data_order_zvksh) +___ +} + +print $code; + +close STDOUT or die "error closing STDOUT: $!"; From patchwork Sun Dec 31 15:27:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerry Shih X-Patchwork-Id: 13507247 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from mail-pj1-f45.google.com (mail-pj1-f45.google.com [209.85.216.45]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2E575F9ED for ; Sun, 31 Dec 2023 15:28:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=sifive.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sifive.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=sifive.com header.i=@sifive.com header.b="h5/t7JPH" Received: by mail-pj1-f45.google.com with SMTP id 98e67ed59e1d1-28c7e30c83fso2169200a91.1 for ; Sun, 31 Dec 2023 07:28:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1704036510; x=1704641310; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=rj7SSogxzCOROqn86O2lcjv5NKV2ZOXp3SBjo/z8Ga0=; b=h5/t7JPHpDMqU8BXdJU4fBPr3SJ9fVakFpcO3gjyMKXqybofnOZz0RXFqtqfA6yYk1 Dc2tTK/4ch5oHEjxwE5sOrwtn53sR1ymrK2ypJIRVNk05TN8yq2HLR8vijBb7nTgcw2P zFPsViOpmYq3BSJ2OrwGej/8FUmGoHMvWKiOU58/w+gmE1RjXgXP3WKsGFEEuH4Ust4r TaM8WYwfDvFtB8q5CIkmZ6Ddjan/WqiSR/6c9kon+L6h5g6Z//0s341PqVxTJFtLTc6A 5SrPZn+uSCLpsqxOlLVNtHJ71KIajvFXQsBNAdsyobdbY2L6w44kFft1cGojiGGkzfs/ dEjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1704036510; x=1704641310; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=rj7SSogxzCOROqn86O2lcjv5NKV2ZOXp3SBjo/z8Ga0=; b=VbDKVrq0Bj9Y+3f0gPPD1SEtbsec5H7xXxUEJFu7eMsEU4nkDM6dgHI1RsKGV9IssU Ro6jX1CAJLS2VP0jJAhhiUlVfDY//oWAlkgx47mCLzMMb3DaHMPyDZlXnN3R2qMmZPFz xdw+QBbbIsGpnAdbD9Mv4ZY2NvKKkhmMuRDhbB7HWiGo9BdWghKauJJ1W/XZxo3IiV+/ 3UJAeIpU/+PC/8rTAZP6NqwGL3dmQu+GtpPQKxZp9RKuqgJ4geKGQcq9T1CbXp31QhCi lN/ITQ1KHQmJD/yNvOH5YLJE9tl4v9sF+1j/YPmG/BEDO2sd9Xw4CP2kBQpvFcQV0kLa Zdtw== X-Gm-Message-State: AOJu0YwnDK4wxw1RgF7Yb5EmXBEeaN5e7R07uOcJKoOJAduH+D/k59US t8j8VdS16LcDJm/lOnX1q6HAW8BQJuhwJQ== X-Google-Smtp-Source: AGHT+IGnclcxmMOJR8aaB5AN+Sgak0SsCFCASSKiW+guzFyQWbm4sKfwNn9xsx8Ll66Hl5DoG2jQ3w== X-Received: by 2002:a17:902:c411:b0:1d4:70c7:aabf with SMTP id 
k17-20020a170902c41100b001d470c7aabfmr5249294plk.62.1704036509945; Sun, 31 Dec 2023 07:28:29 -0800 (PST) Received: from localhost.localdomain ([49.216.222.63]) by smtp.gmail.com with ESMTPSA id n4-20020a170902e54400b001cc3c521affsm18624430plf.300.2023.12.31.07.28.26 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Sun, 31 Dec 2023 07:28:29 -0800 (PST) From: Jerry Shih To: paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, herbert@gondor.apana.org.au, davem@davemloft.net, conor.dooley@microchip.com, ebiggers@kernel.org, ardb@kernel.org Cc: heiko@sntech.de, phoebe.chen@sifive.com, hongrong.hsu@sifive.com, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org Subject: [PATCH v4 11/11] RISC-V: crypto: add Zvkb accelerated ChaCha20 implementation Date: Sun, 31 Dec 2023 23:27:43 +0800 Message-Id: <20231231152743.6304-12-jerry.shih@sifive.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20231231152743.6304-1-jerry.shih@sifive.com> References: <20231231152743.6304-1-jerry.shih@sifive.com> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Add a ChaCha20 vector implementation from OpenSSL (openssl/openssl#21923). Signed-off-by: Jerry Shih --- Changelog v4: - Use asm mnemonics for the instructions in vector crypto 1.0 extension. - Revert the usage of simd skcipher. Changelog v3: - Rename kconfig CRYPTO_CHACHA20_RISCV64 to CRYPTO_CHACHA_RISCV64. - Rename chacha20_encrypt() to riscv64_chacha20_encrypt(). - Use asm mnemonics for the instructions in RVV 1.0 extension. Changelog v2: - Do not turn on kconfig `CHACHA20_RISCV64` option by default. - Use simd skcipher interface. - Add `asmlinkage` qualifier for crypto asm function. - Reorder structure riscv64_chacha_alg_zvkb members initialization in the order declared. - Use smaller iv buffer instead of whole state matrix as chacha20's input.
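One detail that is easy to miss in the glue code below: the skcipher keeps the kernel's standard 16-byte ChaCha IV layout, i.e. a 32-bit little-endian initial block counter in bytes 0-3 followed by the 96-bit nonce in bytes 4-15. riscv64_chacha20_encrypt() splits that into the iv[4] array handed to ChaCha20_ctr32_zvkb() and bumps iv[0] by the number of full blocks after each walk step. A minimal sketch of driving the cipher synchronously through the skcipher API (illustration only, not part of this patch; chacha20_encrypt_example() is a made-up helper, error handling is trimmed, and buf must be a linearly-mapped buffer usable with sg_init_one()):

#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/string.h>

static int chacha20_encrypt_example(u8 *buf, unsigned int len,
				    const u8 key[32], const u8 nonce[12])
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	DECLARE_CRYPTO_WAIT(wait);
	struct scatterlist sg;
	u8 iv[16];
	int err;

	/* Bytes 0-3: initial block counter (little endian), 4-15: nonce. */
	memset(iv, 0, 4);
	memcpy(iv + 4, nonce, 12);

	tfm = crypto_alloc_skcipher("chacha20", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		crypto_free_skcipher(tfm);
		return -ENOMEM;
	}

	err = crypto_skcipher_setkey(tfm, key, 32);
	if (!err) {
		sg_init_one(&sg, buf, len);
		skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
					      CRYPTO_TFM_REQ_MAY_SLEEP,
					      crypto_req_done, &wait);
		skcipher_request_set_crypt(req, &sg, &sg, len, iv);
		err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	}

	skcipher_request_free(req);
	crypto_free_skcipher(tfm);
	return err;
}

If the Zvkb/VLEN check in riscv64_chacha_mod_init() fails, this module simply does not register, and the same lookup falls back to the generic "chacha20" provider.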
--- arch/riscv/crypto/Kconfig | 12 + arch/riscv/crypto/Makefile | 7 + arch/riscv/crypto/chacha-riscv64-glue.c | 109 ++++++++ arch/riscv/crypto/chacha-riscv64-zvkb.pl | 321 +++++++++++++++++++++++ 4 files changed, 449 insertions(+) create mode 100644 arch/riscv/crypto/chacha-riscv64-glue.c create mode 100644 arch/riscv/crypto/chacha-riscv64-zvkb.pl diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 81dcae72c477..2a756f92871f 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -34,6 +34,18 @@ config CRYPTO_AES_BLOCK_RISCV64 - Zvkb vector crypto extension (CTR/XTS) - Zvkg vector crypto extension (XTS) +config CRYPTO_CHACHA_RISCV64 + tristate "Ciphers: ChaCha" + depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_SIMD + select CRYPTO_SKCIPHER + select CRYPTO_LIB_CHACHA_GENERIC + help + Length-preserving ciphers: ChaCha20 stream cipher algorithm + + Architecture: riscv64 using: + - Zvkb vector crypto extension + config CRYPTO_GHASH_RISCV64 tristate "Hash functions: GHASH" depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index b1f857695c1c..31021eb3929c 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -9,6 +9,9 @@ aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o obj-$(CONFIG_CRYPTO_AES_BLOCK_RISCV64) += aes-block-riscv64.o aes-block-riscv64-y := aes-riscv64-block-mode-glue.o aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o +obj-$(CONFIG_CRYPTO_CHACHA_RISCV64) += chacha-riscv64.o +chacha-riscv64-y := chacha-riscv64-glue.o chacha-riscv64-zvkb.o + obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o @@ -36,6 +39,9 @@ $(obj)/aes-riscv64-zvkned-zvbb-zvkg.S: $(src)/aes-riscv64-zvkned-zvbb-zvkg.pl $(obj)/aes-riscv64-zvkned-zvkb.S: $(src)/aes-riscv64-zvkned-zvkb.pl $(call cmd,perlasm) +$(obj)/chacha-riscv64-zvkb.S: $(src)/chacha-riscv64-zvkb.pl + $(call cmd,perlasm) + $(obj)/ghash-riscv64-zvkg.S: $(src)/ghash-riscv64-zvkg.pl $(call cmd,perlasm) @@ -54,6 +60,7 @@ $(obj)/sm4-riscv64-zvksed.S: $(src)/sm4-riscv64-zvksed.pl clean-files += aes-riscv64-zvkned.S clean-files += aes-riscv64-zvkned-zvbb-zvkg.S clean-files += aes-riscv64-zvkned-zvkb.S +clean-files += chacha-riscv64-zvkb.S clean-files += ghash-riscv64-zvkg.S clean-files += sha256-riscv64-zvknha_or_zvknhb-zvkb.S clean-files += sha512-riscv64-zvknhb-zvkb.S diff --git a/arch/riscv/crypto/chacha-riscv64-glue.c b/arch/riscv/crypto/chacha-riscv64-glue.c new file mode 100644 index 000000000000..a7a2f0303afe --- /dev/null +++ b/arch/riscv/crypto/chacha-riscv64-glue.c @@ -0,0 +1,109 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Port of the OpenSSL ChaCha20 implementation for RISC-V 64 + * + * Copyright (C) 2023 SiFive, Inc. 
+ * Author: Jerry Shih + */ + +#include +#include +#include +#include +#include +#include +#include + +/* chacha20 using zvkb vector crypto extension */ +asmlinkage void ChaCha20_ctr32_zvkb(u8 *out, const u8 *input, size_t len, + const u32 *key, const u32 *counter); + +static int riscv64_chacha20_encrypt(struct skcipher_request *req) +{ + u32 iv[CHACHA_IV_SIZE / sizeof(u32)]; + u8 block_buffer[CHACHA_BLOCK_SIZE]; + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + const struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_walk walk; + unsigned int nbytes; + unsigned int tail_bytes; + int err; + + iv[0] = get_unaligned_le32(req->iv); + iv[1] = get_unaligned_le32(req->iv + 4); + iv[2] = get_unaligned_le32(req->iv + 8); + iv[3] = get_unaligned_le32(req->iv + 12); + + err = skcipher_walk_virt(&walk, req, false); + while (walk.nbytes) { + nbytes = walk.nbytes & (~(CHACHA_BLOCK_SIZE - 1)); + tail_bytes = walk.nbytes & (CHACHA_BLOCK_SIZE - 1); + kernel_vector_begin(); + if (nbytes) { + ChaCha20_ctr32_zvkb(walk.dst.virt.addr, + walk.src.virt.addr, nbytes, + ctx->key, iv); + iv[0] += nbytes / CHACHA_BLOCK_SIZE; + } + if (walk.nbytes == walk.total && tail_bytes > 0) { + memcpy(block_buffer, walk.src.virt.addr + nbytes, + tail_bytes); + ChaCha20_ctr32_zvkb(block_buffer, block_buffer, + CHACHA_BLOCK_SIZE, ctx->key, iv); + memcpy(walk.dst.virt.addr + nbytes, block_buffer, + tail_bytes); + tail_bytes = 0; + } + kernel_vector_end(); + + err = skcipher_walk_done(&walk, tail_bytes); + } + + return err; +} + +static struct skcipher_alg riscv64_chacha_alg_zvkb = { + .setkey = chacha20_setkey, + .encrypt = riscv64_chacha20_encrypt, + .decrypt = riscv64_chacha20_encrypt, + .min_keysize = CHACHA_KEY_SIZE, + .max_keysize = CHACHA_KEY_SIZE, + .ivsize = CHACHA_IV_SIZE, + .chunksize = CHACHA_BLOCK_SIZE, + .walksize = CHACHA_BLOCK_SIZE * 4, + .base = { + .cra_blocksize = 1, + .cra_ctxsize = sizeof(struct chacha_ctx), + .cra_priority = 300, + .cra_name = "chacha20", + .cra_driver_name = "chacha20-riscv64-zvkb", + .cra_module = THIS_MODULE, + }, +}; + +static inline bool check_chacha20_ext(void) +{ + return riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128; +} + +static int __init riscv64_chacha_mod_init(void) +{ + if (check_chacha20_ext()) + return crypto_register_skcipher(&riscv64_chacha_alg_zvkb); + + return -ENODEV; +} + +static void __exit riscv64_chacha_mod_fini(void) +{ + crypto_unregister_skcipher(&riscv64_chacha_alg_zvkb); +} + +module_init(riscv64_chacha_mod_init); +module_exit(riscv64_chacha_mod_fini); + +MODULE_DESCRIPTION("ChaCha20 (RISC-V accelerated)"); +MODULE_AUTHOR("Jerry Shih "); +MODULE_LICENSE("GPL"); +MODULE_ALIAS_CRYPTO("chacha20"); diff --git a/arch/riscv/crypto/chacha-riscv64-zvkb.pl b/arch/riscv/crypto/chacha-riscv64-zvkb.pl new file mode 100644 index 000000000000..279410d9e062 --- /dev/null +++ b/arch/riscv/crypto/chacha-riscv64-zvkb.pl @@ -0,0 +1,321 @@ +#! /usr/bin/env perl +# SPDX-License-Identifier: Apache-2.0 OR BSD-2-Clause +# +# This file is dual-licensed, meaning that you can use it under your +# choice of either of the following two licenses: +# +# Copyright 2023-2023 The OpenSSL Project Authors. All Rights Reserved. +# +# Licensed under the Apache License 2.0 (the "License"). You may not use +# this file except in compliance with the License. 
You can obtain a copy +# in the file LICENSE in the source distribution or at +# https://www.openssl.org/source/license.html +# +# or +# +# Copyright (c) 2023, Jerry Shih +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +# - RV64I +# - RISC-V Vector ('V') with VLEN >= 128 +# - RISC-V Vector Cryptography Bit-manipulation extension ('Zvkb') + +use strict; +use warnings; + +use FindBin qw($Bin); +use lib "$Bin"; +use lib "$Bin/../../perlasm"; + +# $output is the last argument if it looks like a file (it has an extension) +# $flavour is the first argument if it doesn't look like a file +my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef; +my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef; + +$output and open STDOUT, ">$output"; + +my $code = <<___; +.text +.option arch, +zvkb +___ + +# void ChaCha20_ctr32_zvkb(unsigned char *out, const unsigned char *inp, +# size_t len, const unsigned int key[8], +# const unsigned int counter[4]); +################################################################################ +my ( $OUTPUT, $INPUT, $LEN, $KEY, $COUNTER ) = ( "a0", "a1", "a2", "a3", "a4" ); +my ( $T0 ) = ( "t0" ); +my ( $CONST_DATA0, $CONST_DATA1, $CONST_DATA2, $CONST_DATA3 ) = + ( "a5", "a6", "a7", "t1" ); +my ( $KEY0, $KEY1, $KEY2,$KEY3, $KEY4, $KEY5, $KEY6, $KEY7, + $COUNTER0, $COUNTER1, $NONCE0, $NONCE1 +) = ( "s0", "s1", "s2", "s3", "s4", "s5", "s6", + "s7", "s8", "s9", "s10", "s11" ); +my ( $VL, $STRIDE, $CHACHA_LOOP_COUNT ) = ( "t2", "t3", "t4" ); +my ( + $V0, $V1, $V2, $V3, $V4, $V5, $V6, $V7, $V8, $V9, $V10, + $V11, $V12, $V13, $V14, $V15, $V16, $V17, $V18, $V19, $V20, $V21, + $V22, $V23, $V24, $V25, $V26, $V27, $V28, $V29, $V30, $V31, +) = map( "v$_", ( 0 .. 
31 ) ); + +sub chacha_quad_round_group { + my ( + $A0, $B0, $C0, $D0, $A1, $B1, $C1, $D1, + $A2, $B2, $C2, $D2, $A3, $B3, $C3, $D3 + ) = @_; + + my $code = <<___; + # a += b; d ^= a; d <<<= 16; + vadd.vv $A0, $A0, $B0 + vadd.vv $A1, $A1, $B1 + vadd.vv $A2, $A2, $B2 + vadd.vv $A3, $A3, $B3 + vxor.vv $D0, $D0, $A0 + vxor.vv $D1, $D1, $A1 + vxor.vv $D2, $D2, $A2 + vxor.vv $D3, $D3, $A3 + vror.vi $D0, $D0, 32 - 16 + vror.vi $D1, $D1, 32 - 16 + vror.vi $D2, $D2, 32 - 16 + vror.vi $D3, $D3, 32 - 16 + # c += d; b ^= c; b <<<= 12; + vadd.vv $C0, $C0, $D0 + vadd.vv $C1, $C1, $D1 + vadd.vv $C2, $C2, $D2 + vadd.vv $C3, $C3, $D3 + vxor.vv $B0, $B0, $C0 + vxor.vv $B1, $B1, $C1 + vxor.vv $B2, $B2, $C2 + vxor.vv $B3, $B3, $C3 + vror.vi $B0, $B0, 32 - 12 + vror.vi $B1, $B1, 32 - 12 + vror.vi $B2, $B2, 32 - 12 + vror.vi $B3, $B3, 32 - 12 + # a += b; d ^= a; d <<<= 8; + vadd.vv $A0, $A0, $B0 + vadd.vv $A1, $A1, $B1 + vadd.vv $A2, $A2, $B2 + vadd.vv $A3, $A3, $B3 + vxor.vv $D0, $D0, $A0 + vxor.vv $D1, $D1, $A1 + vxor.vv $D2, $D2, $A2 + vxor.vv $D3, $D3, $A3 + vror.vi $D0, $D0, 32 - 8 + vror.vi $D1, $D1, 32 - 8 + vror.vi $D2, $D2, 32 - 8 + vror.vi $D3, $D3, 32 - 8 + # c += d; b ^= c; b <<<= 7; + vadd.vv $C0, $C0, $D0 + vadd.vv $C1, $C1, $D1 + vadd.vv $C2, $C2, $D2 + vadd.vv $C3, $C3, $D3 + vxor.vv $B0, $B0, $C0 + vxor.vv $B1, $B1, $C1 + vxor.vv $B2, $B2, $C2 + vxor.vv $B3, $B3, $C3 + vror.vi $B0, $B0, 32 - 7 + vror.vi $B1, $B1, 32 - 7 + vror.vi $B2, $B2, 32 - 7 + vror.vi $B3, $B3, 32 - 7 +___ + + return $code; +} + +$code .= <<___; +.p2align 3 +.globl ChaCha20_ctr32_zvkb +.type ChaCha20_ctr32_zvkb,\@function +ChaCha20_ctr32_zvkb: + srli $LEN, $LEN, 6 + beqz $LEN, .Lend + + addi sp, sp, -96 + sd s0, 0(sp) + sd s1, 8(sp) + sd s2, 16(sp) + sd s3, 24(sp) + sd s4, 32(sp) + sd s5, 40(sp) + sd s6, 48(sp) + sd s7, 56(sp) + sd s8, 64(sp) + sd s9, 72(sp) + sd s10, 80(sp) + sd s11, 88(sp) + + li $STRIDE, 64 + + #### chacha block data + # "expa" little endian + li $CONST_DATA0, 0x61707865 + # "nd 3" little endian + li $CONST_DATA1, 0x3320646e + # "2-by" little endian + li $CONST_DATA2, 0x79622d32 + # "te k" little endian + li $CONST_DATA3, 0x6b206574 + + lw $KEY0, 0($KEY) + lw $KEY1, 4($KEY) + lw $KEY2, 8($KEY) + lw $KEY3, 12($KEY) + lw $KEY4, 16($KEY) + lw $KEY5, 20($KEY) + lw $KEY6, 24($KEY) + lw $KEY7, 28($KEY) + + lw $COUNTER0, 0($COUNTER) + lw $COUNTER1, 4($COUNTER) + lw $NONCE0, 8($COUNTER) + lw $NONCE1, 12($COUNTER) + +.Lblock_loop: + vsetvli $VL, $LEN, e32, m1, ta, ma + + # init chacha const states + vmv.v.x $V0, $CONST_DATA0 + vmv.v.x $V1, $CONST_DATA1 + vmv.v.x $V2, $CONST_DATA2 + vmv.v.x $V3, $CONST_DATA3 + + # init chacha key states + vmv.v.x $V4, $KEY0 + vmv.v.x $V5, $KEY1 + vmv.v.x $V6, $KEY2 + vmv.v.x $V7, $KEY3 + vmv.v.x $V8, $KEY4 + vmv.v.x $V9, $KEY5 + vmv.v.x $V10, $KEY6 + vmv.v.x $V11, $KEY7 + + # init chacha key states + vid.v $V12 + vadd.vx $V12, $V12, $COUNTER0 + vmv.v.x $V13, $COUNTER1 + + # init chacha nonce states + vmv.v.x $V14, $NONCE0 + vmv.v.x $V15, $NONCE1 + + # load the top-half of input data + vlsseg8e32.v $V16, ($INPUT), $STRIDE + + li $CHACHA_LOOP_COUNT, 10 +.Lround_loop: + addi $CHACHA_LOOP_COUNT, $CHACHA_LOOP_COUNT, -1 + @{[chacha_quad_round_group + $V0, $V4, $V8, $V12, + $V1, $V5, $V9, $V13, + $V2, $V6, $V10, $V14, + $V3, $V7, $V11, $V15]} + @{[chacha_quad_round_group + $V0, $V5, $V10, $V15, + $V1, $V6, $V11, $V12, + $V2, $V7, $V8, $V13, + $V3, $V4, $V9, $V14]} + bnez $CHACHA_LOOP_COUNT, .Lround_loop + + # load the bottom-half of input data + addi $T0, $INPUT, 32 + vlsseg8e32.v $V24, 
($T0), $STRIDE + + # add chacha top-half initial block states + vadd.vx $V0, $V0, $CONST_DATA0 + vadd.vx $V1, $V1, $CONST_DATA1 + vadd.vx $V2, $V2, $CONST_DATA2 + vadd.vx $V3, $V3, $CONST_DATA3 + vadd.vx $V4, $V4, $KEY0 + vadd.vx $V5, $V5, $KEY1 + vadd.vx $V6, $V6, $KEY2 + vadd.vx $V7, $V7, $KEY3 + # xor with the top-half input + vxor.vv $V16, $V16, $V0 + vxor.vv $V17, $V17, $V1 + vxor.vv $V18, $V18, $V2 + vxor.vv $V19, $V19, $V3 + vxor.vv $V20, $V20, $V4 + vxor.vv $V21, $V21, $V5 + vxor.vv $V22, $V22, $V6 + vxor.vv $V23, $V23, $V7 + + # save the top-half of output + vssseg8e32.v $V16, ($OUTPUT), $STRIDE + + # add chacha bottom-half initial block states + vadd.vx $V8, $V8, $KEY4 + vadd.vx $V9, $V9, $KEY5 + vadd.vx $V10, $V10, $KEY6 + vadd.vx $V11, $V11, $KEY7 + vid.v $V0 + vadd.vx $V12, $V12, $COUNTER0 + vadd.vx $V13, $V13, $COUNTER1 + vadd.vx $V14, $V14, $NONCE0 + vadd.vx $V15, $V15, $NONCE1 + vadd.vv $V12, $V12, $V0 + # xor with the bottom-half input + vxor.vv $V24, $V24, $V8 + vxor.vv $V25, $V25, $V9 + vxor.vv $V26, $V26, $V10 + vxor.vv $V27, $V27, $V11 + vxor.vv $V29, $V29, $V13 + vxor.vv $V28, $V28, $V12 + vxor.vv $V30, $V30, $V14 + vxor.vv $V31, $V31, $V15 + + # save the bottom-half of output + addi $T0, $OUTPUT, 32 + vssseg8e32.v $V24, ($T0), $STRIDE + + # update counter + add $COUNTER0, $COUNTER0, $VL + sub $LEN, $LEN, $VL + # increase offset for `4 * 16 * VL = 64 * VL` + slli $T0, $VL, 6 + add $INPUT, $INPUT, $T0 + add $OUTPUT, $OUTPUT, $T0 + bnez $LEN, .Lblock_loop + + ld s0, 0(sp) + ld s1, 8(sp) + ld s2, 16(sp) + ld s3, 24(sp) + ld s4, 32(sp) + ld s5, 40(sp) + ld s6, 48(sp) + ld s7, 56(sp) + ld s8, 64(sp) + ld s9, 72(sp) + ld s10, 80(sp) + ld s11, 88(sp) + addi sp, sp, 96 + +.Lend: + ret +.size ChaCha20_ctr32_zvkb,.-ChaCha20_ctr32_zvkb +___ + +print $code; + +close STDOUT or die "error closing STDOUT: $!";
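A note on how ChaCha20_ctr32_zvkb() vectorises, since it is easy to misread: it does not split a single block across vector lanes. After vsetvli, each element of the e32/m1 registers v0-v15 holds one word of the state of a different 64-byte block, vid.v plus vadd.vx gives every lane its own block counter, and the strided segment accesses (vlsseg8e32.v / vssseg8e32.v with a 64-byte stride) transpose the input so that lane i only ever sees the words of block i. Every lane therefore runs the ordinary scalar block function, with vror.vi by 32-16, 32-12, 32-8 and 32-7 standing in for the left-rotations. A plain C model of that per-lane computation, for comparison only (not part of the patch, and not how the kernel's generic ChaCha code is structured; chacha20_block_ref() is a made-up reference helper):

#include <stdint.h>
#include <string.h>

#define ROTL32(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

/* a += b; d ^= a; d <<<= 16;  c += d; b ^= c; b <<<= 12;  etc. */
#define CHACHA_QR(a, b, c, d)				\
	do {						\
		a += b; d ^= a; d = ROTL32(d, 16);	\
		c += d; b ^= c; b = ROTL32(b, 12);	\
		a += b; d ^= a; d = ROTL32(d, 8);	\
		c += d; b ^= c; b = ROTL32(b, 7);	\
	} while (0)

/* One 64-byte keystream block: the work a single vector lane performs. */
static void chacha20_block_ref(uint8_t out[64], const uint32_t key[8],
			       uint32_t counter, const uint32_t nonce[3])
{
	static const uint32_t c[4] = {	/* "expa" "nd 3" "2-by" "te k" */
		0x61707865, 0x3320646e, 0x79622d32, 0x6b206574
	};
	uint32_t init[16], x[16];
	int i;

	memcpy(init, c, sizeof(c));
	memcpy(init + 4, key, 8 * sizeof(uint32_t));
	init[12] = counter;
	memcpy(init + 13, nonce, 3 * sizeof(uint32_t));
	memcpy(x, init, sizeof(init));

	for (i = 0; i < 10; i++) {
		/* Column rounds: first chacha_quad_round_group call. */
		CHACHA_QR(x[0], x[4], x[8],  x[12]);
		CHACHA_QR(x[1], x[5], x[9],  x[13]);
		CHACHA_QR(x[2], x[6], x[10], x[14]);
		CHACHA_QR(x[3], x[7], x[11], x[15]);
		/* Diagonal rounds: second chacha_quad_round_group call. */
		CHACHA_QR(x[0], x[5], x[10], x[15]);
		CHACHA_QR(x[1], x[6], x[11], x[12]);
		CHACHA_QR(x[2], x[7], x[8],  x[13]);
		CHACHA_QR(x[3], x[4], x[9],  x[14]);
	}

	for (i = 0; i < 16; i++) {
		uint32_t w = x[i] + init[i];	/* add back the initial state */

		/* Serialize little-endian, matching the 32-bit element
		 * loads/stores of the vector code on little-endian RV64. */
		out[4 * i + 0] = (uint8_t)(w);
		out[4 * i + 1] = (uint8_t)(w >> 8);
		out[4 * i + 2] = (uint8_t)(w >> 16);
		out[4 * i + 3] = (uint8_t)(w >> 24);
	}
}

The two chacha_quad_round_group invocations per .Lround_loop iteration correspond to the column and diagonal CHACHA_QR groups in the model; the assembly additionally XORs the keystream words with the input while still in registers, rather than serializing a keystream buffer first.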