diff mbox series

crypto: s390/aes - Fix buffer overread in CTR mode

Message ID ZWWHFeOPcW30OYo1@gondor.apana.org.au (mailing list archive)
State Accepted
Delegated to: Herbert Xu
Series crypto: s390/aes - Fix buffer overread in CTR mode

Commit Message

Herbert Xu Nov. 28, 2023, 6:22 a.m. UTC
When processing the last block, the s390 ctr code will always read
a whole block, even if there isn't a whole block of data left.  Fix
this by using the actual length left and copying it into a buffer first
for processing.

Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
Cc: <stable@vger.kernel.org>
Reported-by: Guangwu Zhang <guazhang@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Comments

Harald Freudenberger Nov. 28, 2023, 11:18 a.m. UTC | #1
On 2023-11-28 07:22, Herbert Xu wrote:
> When processing the last block, the s390 ctr code will always read
> a whole block, even if there isn't a whole block of data left.  Fix
> this by using the actual length left and copying it into a buffer first
> for processing.
> 
> Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
> Cc: <stable@vger.kernel.org>
> Reported-by: Guangwu Zhang <guazhang@redhat.com>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
> 
> diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
> index c773820e4af9..c6fe5405de4a 100644
> --- a/arch/s390/crypto/aes_s390.c
> +++ b/arch/s390/crypto/aes_s390.c
> @@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
>  	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
>  	 */
>  	if (nbytes) {
> -		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
> +		memset(buf, 0, AES_BLOCK_SIZE);
> +		memcpy(buf, walk.src.virt.addr, nbytes);
> +		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
>  			    AES_BLOCK_SIZE, walk.iv);
>  		memcpy(walk.dst.virt.addr, buf, nbytes);
>  		crypto_inc(walk.iv, AES_BLOCK_SIZE);

Reviewed-by: Harald Freudenberger <freude@de.ibm.com>

There is similar code in paes_s390.c. I'll send a patch for that.
Harald Freudenberger Nov. 28, 2023, 1:18 p.m. UTC | #2
On 2023-11-28 07:22, Herbert Xu wrote:
> When processing the last block, the s390 ctr code will always read
> a whole block, even if there isn't a whole block of data left.  Fix
> this by using the actual length left and copying it into a buffer first
> for processing.
> 
> Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
> Cc: <stable@vger.kernel.org>
> Reported-by: Guangwu Zhang <guazhang@redhat.com>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
> 
> diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
> index c773820e4af9..c6fe5405de4a 100644
> --- a/arch/s390/crypto/aes_s390.c
> +++ b/arch/s390/crypto/aes_s390.c
> @@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
>  	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
>  	 */
>  	if (nbytes) {
> -		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
> +		memset(buf, 0, AES_BLOCK_SIZE);
> +		memcpy(buf, walk.src.virt.addr, nbytes);
> +		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
>  			    AES_BLOCK_SIZE, walk.iv);
>  		memcpy(walk.dst.virt.addr, buf, nbytes);
>  		crypto_inc(walk.iv, AES_BLOCK_SIZE);

Here is a similar fix for the s390 paes ctr cipher. Compiles and is
tested. You may merge this with your patch for the s390 aes cipher.

--------------------------------------------------------------------------------

diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
index 8b541e44151d..55ee5567a5ea 100644
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -693,9 +693,11 @@ static int ctr_paes_crypt(struct skcipher_request *req)
 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
 	 */
 	if (nbytes) {
+		memset(buf, 0, AES_BLOCK_SIZE);
+		memcpy(buf, walk.src.virt.addr, nbytes);
 		while (1) {
 			if (cpacf_kmctr(ctx->fc, &param, buf,
-					walk.src.virt.addr, AES_BLOCK_SIZE,
+					buf, AES_BLOCK_SIZE,
 					walk.iv) == AES_BLOCK_SIZE)
 				break;
 			if (__paes_convert_key(ctx))
Herbert Xu Dec. 8, 2023, 4:06 a.m. UTC | #3
On Tue, Nov 28, 2023 at 02:18:02PM +0100, Harald Freudenberger wrote:
>
> Here is a similar fix for the s390 paes ctr cipher. Compiles and is
> tested. You may merge this with your patch for the s390 aes cipher.

Thank you.  I had to apply this by hand so please check the result
which I've just pushed out to cryptodev.

Cheers,

Patch

diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
index c773820e4af9..c6fe5405de4a 100644
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
 	 * final block may be < AES_BLOCK_SIZE, copy only nbytes
 	 */
 	if (nbytes) {
-		cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
+		memset(buf, 0, AES_BLOCK_SIZE);
+		memcpy(buf, walk.src.virt.addr, nbytes);
+		cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
 			    AES_BLOCK_SIZE, walk.iv);
 		memcpy(walk.dst.virt.addr, buf, nbytes);
 		crypto_inc(walk.iv, AES_BLOCK_SIZE);