From patchwork Mon Dec 7 19:12:25 2015
X-Patchwork-Submitter: Russell King
X-Patchwork-Id: 7789341
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Russell King
To: Fabio Estevam, Herbert Xu
Cc: "David S. Miller", linux-crypto@vger.kernel.org
Subject: [PATCH RFC 06/11] crypto: caam: ensure that we clean up after an error
Date: Mon, 07 Dec 2015 19:12:25 +0000
In-Reply-To: <20151207191134.GV8644@n2100.arm.linux.org.uk>
References: <20151207191134.GV8644@n2100.arm.linux.org.uk>
X-Mailing-List: linux-crypto@vger.kernel.org

Ensure that we clean up allocations and DMA mappings after encountering
an error, rather than just giving up and leaking memory and resources.
Signed-off-by: Russell King
---
 drivers/crypto/caam/caamhash.c | 57 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 52 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index d48974d1897d..076cfddf32bb 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -824,8 +824,12 @@ static int ahash_update_ctx(struct ahash_request *req)
 
 	ret = ctx_map_to_sec4_sg(desc, jrdev, state, ctx->ctx_len,
 				 edesc->sec4_sg, DMA_BIDIRECTIONAL);
-	if (ret)
+	if (ret) {
+		ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len,
+				DMA_BIDIRECTIONAL);
+		kfree(edesc);
 		return ret;
+	}
 
 	state->buf_dma = try_buf_map_to_sec4_sg(jrdev,
 						edesc->sec4_sg + 1,
@@ -856,6 +860,9 @@
 						     DMA_TO_DEVICE);
 		if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
 			dev_err(jrdev, "unable to map S/G table\n");
+			ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len,
+					DMA_BIDIRECTIONAL);
+			kfree(edesc);
 			return -ENOMEM;
 		}
 
@@ -934,8 +941,11 @@ static int ahash_final_ctx(struct ahash_request *req)
 
 	ret = ctx_map_to_sec4_sg(desc, jrdev, state, ctx->ctx_len,
 				 edesc->sec4_sg, DMA_TO_DEVICE);
-	if (ret)
+	if (ret) {
+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+		kfree(edesc);
 		return ret;
+	}
 
 	state->buf_dma = try_buf_map_to_sec4_sg(jrdev, edesc->sec4_sg + 1,
 						buf, state->buf_dma, buflen,
@@ -946,6 +956,8 @@
 					    sec4_sg_bytes, DMA_TO_DEVICE);
 	if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
 		dev_err(jrdev, "unable to map S/G table\n");
+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 
@@ -956,6 +968,8 @@
 				  digestsize);
 	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
 		dev_err(jrdev, "unable to map dst\n");
+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 
@@ -1017,8 +1031,11 @@ static int ahash_finup_ctx(struct ahash_request *req)
 
 	ret = ctx_map_to_sec4_sg(desc, jrdev, state, ctx->ctx_len,
 				 edesc->sec4_sg, DMA_TO_DEVICE);
-	if (ret)
+	if (ret) {
+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+		kfree(edesc);
 		return ret;
+	}
 
 	state->buf_dma = try_buf_map_to_sec4_sg(jrdev, edesc->sec4_sg + 1,
 						buf, state->buf_dma, buflen,
@@ -1031,6 +1048,8 @@
 					    sec4_sg_bytes, DMA_TO_DEVICE);
 	if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
 		dev_err(jrdev, "unable to map S/G table\n");
+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 
@@ -1041,6 +1060,8 @@
 				  digestsize);
 	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
 		dev_err(jrdev, "unable to map dst\n");
+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 
@@ -1104,6 +1125,8 @@
 					    sec4_sg_bytes, DMA_TO_DEVICE);
 		if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
 			dev_err(jrdev, "unable to map S/G table\n");
+			ahash_unmap(jrdev, edesc, req, digestsize);
+			kfree(edesc);
 			return -ENOMEM;
 		}
 		src_dma = edesc->sec4_sg_dma;
@@ -1118,6 +1141,8 @@
 						digestsize);
 	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
 		dev_err(jrdev, "unable to map dst\n");
+		ahash_unmap(jrdev, edesc, req, digestsize);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 
@@ -1170,6 +1195,8 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	state->buf_dma = dma_map_single(jrdev, buf, buflen, DMA_TO_DEVICE);
 	if (dma_mapping_error(jrdev, state->buf_dma)) {
 		dev_err(jrdev, "unable to map src\n");
+		ahash_unmap(jrdev, edesc, req, digestsize);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 
@@ -1179,6 +1206,8 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 				  digestsize);
 	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
 		dev_err(jrdev, "unable to map dst\n");
+		ahash_unmap(jrdev, edesc, req, digestsize);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 	edesc->src_nents = 0;
@@ -1268,14 +1297,21 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 						    DMA_TO_DEVICE);
 		if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
 			dev_err(jrdev, "unable to map S/G table\n");
+			ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len,
+					DMA_TO_DEVICE);
+			kfree(edesc);
 			return -ENOMEM;
 		}
 
 		append_seq_in_ptr(desc, edesc->sec4_sg_dma, to_hash, LDST_SGF);
 
 		ret = map_seq_out_ptr_ctx(desc, jrdev, state, ctx->ctx_len);
-		if (ret)
+		if (ret) {
+			ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len,
+					DMA_TO_DEVICE);
+			kfree(edesc);
 			return ret;
+		}
 
 #ifdef DEBUG
 		print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
@@ -1361,6 +1397,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 					    sec4_sg_bytes, DMA_TO_DEVICE);
 	if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
 		dev_err(jrdev, "unable to map S/G table\n");
+		ahash_unmap(jrdev, edesc, req, digestsize);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 
@@ -1371,6 +1409,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 				  digestsize);
 	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
 		dev_err(jrdev, "unable to map dst\n");
+		ahash_unmap(jrdev, edesc, req, digestsize);
+		kfree(edesc);
 		return -ENOMEM;
 	}
 
@@ -1451,6 +1491,9 @@ static int ahash_update_first(struct ahash_request *req)
 							DMA_TO_DEVICE);
 		if (dma_mapping_error(jrdev, edesc->sec4_sg_dma)) {
 			dev_err(jrdev, "unable to map S/G table\n");
+			ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len,
+					DMA_TO_DEVICE);
+			kfree(edesc);
 			return -ENOMEM;
 		}
 		src_dma = edesc->sec4_sg_dma;
@@ -1472,8 +1515,12 @@
 	append_seq_in_ptr(desc, src_dma, to_hash, options);
 
 	ret = map_seq_out_ptr_ctx(desc, jrdev, state, ctx->ctx_len);
-	if (ret)
+	if (ret) {
+		ahash_unmap_ctx(jrdev, edesc, req, ctx->ctx_len,
+				DMA_TO_DEVICE);
+		kfree(edesc);
 		return ret;
+	}
 
 #ifdef DEBUG
 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
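
For reference, the cleanup idiom that each hunk above applies looks like
this in isolation. This is a minimal sketch, not code from caamhash.c: the
descriptor layout and the names (my_edesc, my_map_and_submit) are
hypothetical, while dma_map_single(), dma_mapping_error(),
dma_unmap_single(), kzalloc() and kfree() are the standard kernel APIs the
driver uses. The point is that every DMA mapping made before a failure is
unwound, and the allocation freed, before the error is returned.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/slab.h>

/* Hypothetical descriptor holding two DMA mappings to unwind on error. */
struct my_edesc {
	dma_addr_t src_dma;
	dma_addr_t dst_dma;
};

static int my_map_and_submit(struct device *dev, void *src, void *dst,
			     size_t len)
{
	struct my_edesc *edesc;

	edesc = kzalloc(sizeof(*edesc), GFP_KERNEL);
	if (!edesc)
		return -ENOMEM;

	edesc->src_dma = dma_map_single(dev, src, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, edesc->src_dma)) {
		/* Nothing mapped yet: only the allocation needs freeing. */
		kfree(edesc);
		return -ENOMEM;
	}

	edesc->dst_dma = dma_map_single(dev, dst, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, edesc->dst_dma)) {
		/*
		 * Unwind the earlier mapping before bailing out -- the
		 * same pattern the patch adds to each error path above
		 * via ahash_unmap()/ahash_unmap_ctx() plus kfree().
		 */
		dma_unmap_single(dev, edesc->src_dma, len, DMA_TO_DEVICE);
		kfree(edesc);
		return -ENOMEM;
	}

	/* ... build and enqueue the job descriptor here ... */

	return 0;
}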