From patchwork Mon Dec 7 19:12:00 2015
X-Patchwork-Submitter: Russell King
X-Patchwork-Id: 7789291
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Russell King
To: Fabio Estevam, Herbert Xu
Cc: "David S. Miller", linux-crypto@vger.kernel.org
Subject: [PATCH RFC 01/11] crypto: caam: fix DMA API mapping leak
Date: Mon, 07 Dec 2015 19:12:00 +0000
In-Reply-To: <20151207191134.GV8644@n2100.arm.linux.org.uk>
References: <20151207191134.GV8644@n2100.arm.linux.org.uk>
List-ID: linux-crypto@vger.kernel.org

caamhash contains this weird code:

	src_nents = sg_count(req->src, req->nbytes);
	dma_map_sg(jrdev, req->src, src_nents ? : 1, DMA_TO_DEVICE);
	...
	edesc->src_nents = src_nents;

sg_count() returns zero when sg_nents_for_len() returns zero or one.
This means we don't need to use a hardware scatterlist.  However,
setting src_nents to zero causes problems when we unmap:

	if (edesc->src_nents)
		dma_unmap_sg_chained(dev, req->src, edesc->src_nents,
				     DMA_TO_DEVICE, edesc->chained);

as zero here means that we have no entries to unmap.  This causes us to
leak DMA mappings, where we map one scatterlist entry and then fail to
unmap it.
This can be fixed in two ways: either by writing the number of entries
that were requested of dma_map_sg(), or by reworking the "no SG
required" case.

We adopt the rework solution here - we replace sg_count() with
sg_nents_for_len(), so src_nents now contains the real number of
scatterlist entries, and we then change the test for using the hardware
scatterlist to src_nents > 1 rather than just non-zero.

This change passes my sshd and openssl tests hashing /bin, as well as
the tcrypt tests.

Signed-off-by: Russell King
---
 drivers/crypto/caam/caamhash.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 49106ea42887..eccde7207f92 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -1085,9 +1085,12 @@ static int ahash_digest(struct ahash_request *req)
 	u32 options;
 	int sh_len;
 
-	src_nents = sg_count(req->src, req->nbytes);
-	dma_map_sg(jrdev, req->src, src_nents ? : 1, DMA_TO_DEVICE);
-	sec4_sg_bytes = src_nents * sizeof(struct sec4_sg_entry);
+	src_nents = sg_nents_for_len(req->src, req->nbytes);
+	dma_map_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
+	if (src_nents > 1)
+		sec4_sg_bytes = src_nents * sizeof(struct sec4_sg_entry);
+	else
+		sec4_sg_bytes = 0;
 
 	/* allocate space for base edesc and hw desc commands, link tables */
 	edesc = kzalloc(sizeof(*edesc) + sec4_sg_bytes + DESC_JOB_IO_LEN,
@@ -1105,7 +1108,7 @@ static int ahash_digest(struct ahash_request *req)
 	desc = edesc->hw_desc;
 	init_job_desc_shared(desc, ptr, sh_len, HDR_SHARE_DEFER | HDR_REVERSE);
 
-	if (src_nents) {
+	if (src_nents > 1) {
 		sg_to_sec4_sg_last(req->src, src_nents, edesc->sec4_sg, 0);
 		edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
 						    sec4_sg_bytes, DMA_TO_DEVICE);
@@ -1232,8 +1235,8 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	to_hash = in_len - *next_buflen;
 
 	if (to_hash) {
-		src_nents = sg_nents_for_len(req->src,
-					     req->nbytes - (*next_buflen));
+		src_nents = sg_nents_for_len(req->src, req->nbytes -
+					     *next_buflen);
 		sec4_sg_bytes = (1 + src_nents) *
 				sizeof(struct sec4_sg_entry);
 
@@ -1429,9 +1432,14 @@ static int ahash_update_first(struct ahash_request *req)
 	to_hash = req->nbytes - *next_buflen;
 
 	if (to_hash) {
-		src_nents = sg_count(req->src, req->nbytes - (*next_buflen));
-		dma_map_sg(jrdev, req->src, src_nents ? : 1, DMA_TO_DEVICE);
-		sec4_sg_bytes = src_nents * sizeof(struct sec4_sg_entry);
+		src_nents = sg_nents_for_len(req->src, req->nbytes -
+					     *next_buflen);
+		dma_map_sg(jrdev, req->src, src_nents, DMA_TO_DEVICE);
+		if (src_nents > 1)
+			sec4_sg_bytes = src_nents *
+					sizeof(struct sec4_sg_entry);
+		else
+			sec4_sg_bytes = 0;
 
 		/*
 		 * allocate space for base edesc and hw desc commands,
@@ -1451,7 +1459,7 @@ static int ahash_update_first(struct ahash_request *req)
 				 DESC_JOB_IO_LEN;
 		edesc->dst_dma = 0;
 
-		if (src_nents) {
+		if (src_nents > 1) {
 			sg_to_sec4_sg_last(req->src, src_nents, edesc->sec4_sg,
 					   0);
 			edesc->sec4_sg_dma = dma_map_single(jrdev,
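
Not part of the patch - a minimal illustrative sketch of the map/unmap
pairing convention the change moves to.  The map_src()/unmap_src()
helper names and signatures are invented for this example, and the
generic dma_unmap_sg() stands in for the driver's
dma_unmap_sg_chained() helper; the point is only that src_nents always
holds the real entry count handed to dma_map_sg(), so the unmap side
can pass the same count and nothing is left mapped.

	#include <linux/device.h>
	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	static int map_src(struct device *dev, struct scatterlist *src,
			   unsigned int nbytes, int *src_nents)
	{
		/* Real entry count, not the "0 means no hw S/G" encoding. */
		int nents = sg_nents_for_len(src, nbytes);

		if (nents < 0)
			return nents;

		/* Map every counted entry, even when there is only one. */
		if (!dma_map_sg(dev, src, nents, DMA_TO_DEVICE))
			return -ENOMEM;

		*src_nents = nents;
		return 0;
	}

	static void unmap_src(struct device *dev, struct scatterlist *src,
			      int src_nents)
	{
		/* Unmap with the same count that was mapped: no leak. */
		dma_unmap_sg(dev, src, src_nents, DMA_TO_DEVICE);
	}

With that convention, whether a hardware S/G table is needed becomes a
separate src_nents > 1 decision, as in the hunks above.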