From patchwork Wed Jan 31 20:27:21 2018
X-Patchwork-Submitter: Junaid Shahid
X-Patchwork-Id: 10194777
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Junaid Shahid
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, andreslc@google.com, davem@davemloft.net,
    gthelen@google.com, ebiggers3@gmail.com, smueller@chronox.de
Subject: [PATCH v3 3/4] crypto: aesni - Directly use kmap_atomic instead of scatter_walk object in gcm(aes)
Date: Wed, 31 Jan 2018 12:27:21 -0800
Message-Id: <20180131202722.212273-4-junaids@google.com>
In-Reply-To: <20180131202722.212273-1-junaids@google.com>
References: <20180131202722.212273-1-junaids@google.com>

gcmaes_crypt uses a scatter_walk object to map and unmap the crypto
request sglists. But the only purpose that the scatter_walk appears to
serve here is to allow the D-cache to be flushed at the end for pages
that were used as output. Since that is not applicable on x86, we can
avoid using the scatter_walk object for simplicity.

Signed-off-by: Junaid Shahid
---
 arch/x86/crypto/aesni-intel_glue.c | 36 +++++++++++++++---------------------
 1 file changed, 15 insertions(+), 21 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index c11e531d21dd..9e69e02076d2 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -750,6 +750,11 @@ static bool is_mappable(struct scatterlist *sgl, unsigned long len)
 	       (!PageHighMem(sg_page(sgl)) || sgl->offset + len <= PAGE_SIZE);
 }
 
+static u8 *map_buffer(struct scatterlist *sgl)
+{
+	return kmap_atomic(sg_page(sgl)) + sgl->offset;
+}
+
 /*
  * Maps the sglist buffer and returns a pointer to the mapped buffer in
  * data_buf.
@@ -762,14 +767,12 @@ static bool is_mappable(struct scatterlist *sgl, unsigned long len)
  * the data_buf and the bounce_buf should be freed using kfree().
  */
 static int get_request_buffer(struct scatterlist *sgl,
-			      struct scatter_walk *sg_walk,
 			      unsigned long bounce_buf_size,
 			      u8 **data_buf, u8 **bounce_buf, bool *mapped)
 {
 	if (sg_is_last(sgl) && is_mappable(sgl, sgl->length)) {
 		*mapped = true;
-		scatterwalk_start(sg_walk, sgl);
-		*data_buf = scatterwalk_map(sg_walk);
+		*data_buf = map_buffer(sgl);
 		return 0;
 	}
 
@@ -785,14 +788,10 @@ static int get_request_buffer(struct scatterlist *sgl,
 	return 0;
 }
 
-static void put_request_buffer(u8 *data_buf, unsigned long len, bool mapped,
-			       struct scatter_walk *sg_walk, bool output)
+static void put_request_buffer(u8 *data_buf, bool mapped)
 {
-	if (mapped) {
-		scatterwalk_unmap(data_buf);
-		scatterwalk_advance(sg_walk, len);
-		scatterwalk_done(sg_walk, output, 0);
-	}
+	if (mapped)
+		kunmap_atomic(data_buf);
 }
 
 /*
@@ -809,16 +808,14 @@ static int gcmaes_crypt(struct aead_request *req, unsigned int assoclen,
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
 	unsigned long data_len = req->cryptlen - (decrypt ?
 					auth_tag_len : 0);
-	struct scatter_walk src_sg_walk;
-	struct scatter_walk dst_sg_walk = {};
 	int retval = 0;
 	unsigned long bounce_buf_size = data_len + auth_tag_len + req->assoclen;
 
 	if (auth_tag_len > 16)
 		return -EINVAL;
 
-	retval = get_request_buffer(req->src, &src_sg_walk, bounce_buf_size,
-				    &assoc, &bounce_buf, &src_mapped);
+	retval = get_request_buffer(req->src, bounce_buf_size, &assoc,
+				    &bounce_buf, &src_mapped);
 	if (retval)
 		goto exit;
 
@@ -828,9 +825,8 @@ static int gcmaes_crypt(struct aead_request *req, unsigned int assoclen,
 		dst = src;
 		dst_mapped = src_mapped;
 	} else {
-		retval = get_request_buffer(req->dst, &dst_sg_walk,
-					    bounce_buf_size, &dst, &bounce_buf,
-					    &dst_mapped);
+		retval = get_request_buffer(req->dst, bounce_buf_size, &dst,
+					    &bounce_buf, &dst_mapped);
 		if (retval)
 			goto exit;
 
@@ -866,11 +862,9 @@ static int gcmaes_crypt(struct aead_request *req, unsigned int assoclen,
 			  1);
 exit:
 	if (req->dst != req->src)
-		put_request_buffer(dst - req->assoclen, req->dst->length,
-				   dst_mapped, &dst_sg_walk, true);
+		put_request_buffer(dst - req->assoclen, dst_mapped);
 
-	put_request_buffer(assoc, req->src->length, src_mapped, &src_sg_walk,
-			   false);
+	put_request_buffer(assoc, src_mapped);
 
 	kfree(bounce_buf);
 	return retval;
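
For context, an illustrative sketch (not part of the patch): the scatter_walk
teardown removed here only does real work on architectures where
flush_dcache_page() is non-trivial; on x86 it is an empty stub, so a bare
kunmap_atomic() is equivalent for a single mapped sg entry. The helper names
below are hypothetical, written to contrast the two teardown paths:

#include <crypto/scatterwalk.h>
#include <linux/highmem.h>

/*
 * Illustrative sketch only, not part of the patch: the old teardown
 * path for a buffer mapped from a single scatterlist entry, spelled
 * out step by step.
 */
static void put_buf_scatterwalk(u8 *vaddr, unsigned long len,
				struct scatter_walk *walk, bool output)
{
	scatterwalk_unmap(vaddr);	   /* kunmap_atomic() internally */
	scatterwalk_advance(walk, len);	   /* position bookkeeping only */
	scatterwalk_done(walk, output, 0); /* flushes the D-cache for
					    * output pages; empty inline
					    * on x86 */
}

/* New teardown path: the unmap is all that x86 actually needs. */
static void put_buf_kmap(u8 *vaddr)
{
	kunmap_atomic(vaddr);
}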