From patchwork Mon Dec  7 19:12:36 2015
X-Patchwork-Submitter: Russell King
X-Patchwork-Id: 7789361
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Russell King
To: Fabio Estevam, Herbert Xu
Cc: "David S. Miller", linux-crypto@vger.kernel.org
Subject: [PATCH RFC 08/11] crypto: caam: add ahash_edesc_alloc() for descriptor allocation
Date: Mon, 07 Dec 2015 19:12:36 +0000
In-Reply-To: <20151207191134.GV8644@n2100.arm.linux.org.uk>
References: <20151207191134.GV8644@n2100.arm.linux.org.uk>
List-ID: X-Mailing-List: linux-crypto@vger.kernel.org

Add a helper function, ahash_edesc_alloc(), to perform the extended
descriptor allocation. This replaces the open-coded kzalloc() and the
duplicated dev_err() failure message at each of the eight call sites.
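For illustration, the pattern the helper centralises is a single zeroed
allocation holding the fixed-size extended descriptor followed by a
variable-length hardware scatter table sized in sec4_sg_entry units.
Below is a minimal userspace model of that layout; the struct fields and
ahash_edesc_alloc_model() are simplified hypothetical stand-ins, not the
driver's real definitions:

	#include <stdio.h>
	#include <stdlib.h>

	/* Simplified stand-ins for the driver's types (hypothetical,
	 * for illustration only). */
	struct sec4_sg_entry {
		unsigned long long ptr;
		unsigned int len;
	};

	struct ahash_edesc {
		int sec4_sg_bytes;
		unsigned int hw_desc[16];	/* fixed-size job descriptor */
		struct sec4_sg_entry sec4_sg[];	/* trailing scatter table */
	};

	/* Model of ahash_edesc_alloc(): one zeroed block sized for the
	 * base descriptor plus sg_num scatter entries, mirroring the
	 * patch's kzalloc(sizeof(*edesc) + sg_size, ...). */
	static struct ahash_edesc *ahash_edesc_alloc_model(int sg_num)
	{
		return calloc(1, sizeof(struct ahash_edesc) +
				 sg_num * sizeof(struct sec4_sg_entry));
	}

	int main(void)
	{
		struct ahash_edesc *edesc = ahash_edesc_alloc_model(4);

		if (!edesc)
			return 1;
		printf("descriptor + %zu-byte scatter table in one allocation\n",
		       4 * sizeof(struct sec4_sg_entry));
		free(edesc);
		return 0;
	}

Sizing the trailing table in entries rather than bytes is what lets the
callers in the patch below pass counts such as
sec4_sg_src_index + mapped_nents directly.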
Signed-off-by: Russell King
---
 drivers/crypto/caam/caamhash.c | 60 +++++++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 9638e9f4f001..f35f4cfc27a7 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -765,6 +765,25 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
 	req->base.complete(&req->base, err);
 }
 
+/*
+ * Allocate an enhanced descriptor, which contains the hardware descriptor
+ * and space for hardware scatter table containing sg_num entries.
+ */
+static struct ahash_edesc *ahash_edesc_alloc(struct caam_hash_ctx *ctx,
+					     int sg_num, gfp_t flags)
+{
+	struct ahash_edesc *edesc;
+	unsigned int sg_size = sg_num * sizeof(struct sec4_sg_entry);
+
+	edesc = kzalloc(sizeof(*edesc) + sg_size, GFP_DMA | flags);
+	if (!edesc) {
+		dev_err(ctx->jrdev, "could not allocate extended descriptor\n");
+		return NULL;
+	}
+
+	return edesc;
+}
+
 /* submit update job descriptor */
 static int ahash_update_ctx(struct ahash_request *req)
 {
@@ -813,11 +832,9 @@ static int ahash_update_ctx(struct ahash_request *req)
 	 * allocate space for base edesc and hw desc commands,
 	 * link tables
 	 */
-	edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes,
-			GFP_DMA | flags);
+	edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents,
+				  flags);
 	if (!edesc) {
-		dev_err(jrdev,
-			"could not allocate extended descriptor\n");
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
 	}
@@ -930,11 +947,9 @@ static int ahash_final_ctx(struct ahash_request *req)
 	sec4_sg_bytes = sec4_sg_src_index * sizeof(struct sec4_sg_entry);
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes, GFP_DMA | flags);
-	if (!edesc) {
-		dev_err(jrdev, "could not allocate extended descriptor\n");
+	edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index, flags);
+	if (!edesc)
 		return -ENOMEM;
-	}
 
 	sh_len = desc_len(sh_desc);
 	desc = edesc->hw_desc;
@@ -1026,9 +1041,9 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			 sizeof(struct sec4_sg_entry);
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes, GFP_DMA | flags);
+	edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents,
+				  flags);
 	if (!edesc) {
-		dev_err(jrdev, "could not allocate extended descriptor\n");
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
 	}
@@ -1122,9 +1137,9 @@ static int ahash_digest(struct ahash_request *req)
 		sec4_sg_bytes = 0;
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes, GFP_DMA | flags);
+	edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ? mapped_nents : 0,
+				  flags);
 	if (!edesc) {
-		dev_err(jrdev, "could not allocate extended descriptor\n");
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
 	}
@@ -1198,13 +1213,10 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	int sh_len;
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = kzalloc(sizeof(*edesc), GFP_DMA | flags);
-	if (!edesc) {
-		dev_err(jrdev, "could not allocate extended descriptor\n");
+	edesc = ahash_edesc_alloc(ctx, 0, flags);
+	if (!edesc)
 		return -ENOMEM;
-	}
 
-	edesc->sec4_sg_bytes = 0;
 	sh_len = desc_len(sh_desc);
 	desc = edesc->hw_desc;
 	init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
@@ -1287,11 +1299,8 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	 * allocate space for base edesc and hw desc commands,
 	 * link tables
 	 */
-	edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes,
-			GFP_DMA | flags);
+	edesc = ahash_edesc_alloc(ctx, 1 + mapped_nents, flags);
 	if (!edesc) {
-		dev_err(jrdev,
-			"could not allocate extended descriptor\n");
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
 	}
@@ -1406,9 +1415,8 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			 sizeof(struct sec4_sg_entry);
 
 	/* allocate space for base edesc and hw desc commands, link tables */
-	edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes, GFP_DMA | flags);
+	edesc = ahash_edesc_alloc(ctx, sec4_sg_src_index + mapped_nents, flags);
 	if (!edesc) {
-		dev_err(jrdev, "could not allocate extended descriptor\n");
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
 	}
@@ -1508,11 +1516,9 @@ static int ahash_update_first(struct ahash_request *req)
 	 * allocate space for base edesc and hw desc commands,
 	 * link tables
 	 */
-	edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes,
-			GFP_DMA | flags);
+	edesc = ahash_edesc_alloc(ctx, mapped_nents > 1 ?
+				  mapped_nents : 0, flags);
 	if (!edesc) {
-		dev_err(jrdev,
-			"could not allocate extended descriptor\n");
 		dma_unmap_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
 		return -ENOMEM;
 	}
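
As a usage note, each converted call site collapses to the same shape.
Continuing the hypothetical userspace model from before the diff
(update_model() is illustrative only; in the driver, the failure path
calls dma_unmap_sg() before returning):

	#include <errno.h>

	/* Hypothetical call-site model reusing ahash_edesc_alloc_model()
	 * from the sketch above: the helper logs the allocation failure
	 * itself, so the caller only unwinds its own work and returns. */
	static int update_model(int sg_src_index, int mapped_nents)
	{
		struct ahash_edesc *edesc;

		edesc = ahash_edesc_alloc_model(sg_src_index + mapped_nents);
		if (!edesc) {
			/* the real driver calls dma_unmap_sg() here */
			return -ENOMEM;
		}

		/* ... build and submit the hardware job descriptor ... */
		free(edesc);
		return 0;
	}

One behavioural detail worth noting in the diff: ahash_final_no_ctx()
drops its explicit edesc->sec4_sg_bytes = 0 assignment, which is safe
because the helper's zeroed (kzalloc) allocation already guarantees it.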