From patchwork Tue Jun 7 12:17:06 2016
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linaro.org>
X-Patchwork-Id: 9161017
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Baolin Wang <baolin.wang@linaro.org>
To: axboe@kernel.dk, agk@redhat.com, snitzer@redhat.com, dm-devel@redhat.com,
	herbert@gondor.apana.org.au, davem@davemloft.net
Cc: ebiggers3@gmail.com, js1304@gmail.com, tadeusz.struk@intel.com,
	smueller@chronox.de, standby24x7@gmail.com, shli@kernel.org,
	dan.j.williams@intel.com, martin.petersen@oracle.com, sagig@mellanox.com,
	kent.overstreet@gmail.com, keith.busch@intel.com, tj@kernel.org,
	ming.lei@canonical.com, broonie@kernel.org, arnd@arndb.de,
	linux-crypto@vger.kernel.org, linux-block@vger.kernel.org,
	linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
	baolin.wang@linaro.org
Subject: [RFC v4 3/4] md: dm-crypt: Introduce the bulk mode method when sending request
Date: Tue, 7 Jun 2016 20:17:06 +0800
Message-Id: <993e0828459d942ece0bb87652169d3b3b98ed14.1465301616.git.baolin.wang@linaro.org>
X-Mailer: git-send-email 1.7.9.5

In the current dm-crypt code it is inefficient to map each segment of a bio
(always one sector) with only one scatterlist at a time for a hardware crypto
engine. In particular, some encryption modes (such as ecb or xts) used with a
crypto engine need only one initial IV or a null IV rather than a different IV
for each sector. In that case we can use multiple scatterlists to map the whole
bio and send all the scatterlists of one bio to the crypto engine to encrypt or
decrypt at once, which improves the hardware engine's efficiency.

With this optimization, on my test setup (beaglebone black board with the
ecb(aes) cipher and dd testing) using 64KB I/Os on an eMMC storage device, I
saw about 127% improvement in throughput for encrypted writes and about 206%
improvement for encrypted reads.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
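(A note for reviewers, not part of the patch: the sketch below illustrates the
basic idea, namely mapping the whole bio with blk_bio_map_sg() and handing the
resulting scatterlist to the engine as a single skcipher request instead of one
request per 512-byte sector. The helper name bulk_encrypt_bio() is invented for
the example, it assumes a synchronous transform and in-place encryption, and it
leaves out the scatterlist-table reuse, the fallback path and the per-mode IV
handling that the real code below has to take care of. skcipher_is_bulk_mode()
is the capability check added earlier in this series.)

#include <crypto/skcipher.h>
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/scatterlist.h>

/* Illustrative sketch only, not part of this patch. */
static int bulk_encrypt_bio(struct crypto_skcipher *tfm, struct bio *bio, u8 *iv)
{
	struct skcipher_request *req;
	struct sg_table sgt;
	int nents, ret;

	/* One scatterlist entry per bio segment instead of one per sector. */
	ret = sg_alloc_table(&sgt, bio_segments(bio), GFP_NOIO);
	if (ret)
		return ret;

	/* Map every segment of the bio into the scatterlist in one go. */
	nents = blk_bio_map_sg(bdev_get_queue(bio->bi_bdev), bio, sgt.sgl);
	if (nents <= 0) {
		sg_free_table(&sgt);
		return -EINVAL;
	}

	req = skcipher_request_alloc(tfm, GFP_NOIO);
	if (!req) {
		sg_free_table(&sgt);
		return -ENOMEM;
	}

	/* A single request covering the whole bio, encrypted in place. */
	skcipher_request_set_callback(req, 0, NULL, NULL);
	skcipher_request_set_crypt(req, sgt.sgl, sgt.sgl,
				   bio->bi_iter.bi_size, iv);
	ret = crypto_skcipher_encrypt(req);

	skcipher_request_free(req);
	sg_free_table(&sgt);
	return ret;
}

A bulk-capable driver then sees one large request per bio instead of
bi_size / 512 small ones, which is where the throughput gain quoted above
comes from.
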
 drivers/md/dm-crypt.c | 159 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 158 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 4f3cb35..0b1d452 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -33,6 +33,7 @@
 #include <linux/device-mapper.h>
 
 #define DM_MSG_PREFIX "crypt"
+#define DM_MAX_SG_LIST	512
 
 /*
  * context holding the current state of a multi-part conversion
@@ -142,6 +143,11 @@ struct crypt_config {
 	char *cipher;
 	char *cipher_string;
 
+	struct sg_table sgt_in;
+	struct sg_table sgt_out;
+	atomic_t sgt_init_done;
+	struct completion sgt_restart;
+
 	struct crypt_iv_operations *iv_gen_ops;
 	union {
 		struct iv_essiv_private essiv;
@@ -837,6 +843,141 @@ static u8 *iv_of_dmreq(struct crypt_config *cc,
 			 crypto_skcipher_alignmask(any_tfm(cc)) + 1);
 }
 
+static void crypt_init_sg_table(struct scatterlist *sgl)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		if (i < DM_MAX_SG_LIST - 1 && sg_is_last(sg))
+			sg_unmark_end(sg);
+		else if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+
+	for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+		memset(sg, 0, sizeof(struct scatterlist));
+
+		if (i == DM_MAX_SG_LIST - 1)
+			sg_mark_end(sg);
+	}
+}
+
+static void crypt_reinit_sg_table(struct crypt_config *cc)
+{
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return;
+
+	crypt_init_sg_table(cc->sgt_in.sgl);
+	crypt_init_sg_table(cc->sgt_out.sgl);
+
+	if (atomic_inc_and_test(&cc->sgt_init_done))
+		complete(&cc->sgt_restart);
+	atomic_set(&cc->sgt_init_done, 1);
+}
+
+static int crypt_alloc_sg_table(struct crypt_config *cc)
+{
+	unsigned int bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+	int ret = 0;
+
+	if (!bulk_mode)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_in, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_skip_alloc;
+
+	ret = sg_alloc_table(&cc->sgt_out, DM_MAX_SG_LIST, GFP_KERNEL);
+	if (ret)
+		goto out_free_table;
+
+	init_completion(&cc->sgt_restart);
+	atomic_set(&cc->sgt_init_done, 1);
+	return 0;
+
+out_free_table:
+	sg_free_table(&cc->sgt_in);
+out_skip_alloc:
+	cc->sgt_in.orig_nents = 0;
+	cc->sgt_out.orig_nents = 0;
+
+	return ret;
+}
+
+static int crypt_convert_bulk_block(struct crypt_config *cc,
+				    struct convert_context *ctx,
+				    struct skcipher_request *req)
+{
+	struct bio *bio_in = ctx->bio_in;
+	struct bio *bio_out = ctx->bio_out;
+	unsigned int total_bytes = bio_in->bi_iter.bi_size;
+	unsigned int total_sg_in, total_sg_out;
+	struct scatterlist *sg_in, *sg_out;
+	struct dm_crypt_request *dmreq;
+	u8 *iv;
+	int r;
+
+	if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+		return -EINVAL;
+
+	if (!atomic_dec_and_test(&cc->sgt_init_done)) {
+		wait_for_completion(&cc->sgt_restart);
+		reinit_completion(&cc->sgt_restart);
+	}
+
+	dmreq = dmreq_of_req(cc, req);
+	iv = iv_of_dmreq(cc, dmreq);
+	dmreq->iv_sector = ctx->cc_sector;
+	dmreq->ctx = ctx;
+
+	total_sg_in = blk_bio_map_sg(bdev_get_queue(bio_in->bi_bdev),
+				     bio_in, cc->sgt_in.sgl);
+	if ((total_sg_in <= 0) || (total_sg_in > DM_MAX_SG_LIST)) {
+		DMERR("%s in sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_in, cc->sgt_in.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_in.bi_size -= total_bytes;
+	sg_in = cc->sgt_in.sgl;
+	sg_out = cc->sgt_in.sgl;
+
+	if (bio_data_dir(bio_in) == READ)
+		goto set_crypt;
+
+	total_sg_out = blk_bio_map_sg(bdev_get_queue(bio_out->bi_bdev),
+				      bio_out, cc->sgt_out.sgl);
+	if ((total_sg_out <= 0) || (total_sg_out > DM_MAX_SG_LIST)) {
+		DMERR("%s out sg map error %d, sg table nents[%d]\n",
+		      __func__, total_sg_out, cc->sgt_out.orig_nents);
+		return -EINVAL;
+	}
+
+	ctx->iter_out.bi_size -= total_bytes;
+	sg_out = cc->sgt_out.sgl;
+
+set_crypt:
+	if (cc->iv_gen_ops) {
+		r = cc->iv_gen_ops->generator(cc, iv, dmreq);
+		if (r < 0)
+			return r;
+	}
+
+	atomic_set(&cc->sgt_init_done, 0);
+	skcipher_request_set_crypt(req, sg_in, sg_out, total_bytes, iv);
+
+	if (bio_data_dir(ctx->bio_in) == WRITE)
+		r = crypto_skcipher_encrypt(req);
+	else
+		r = crypto_skcipher_decrypt(req);
+
+	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
+		r = cc->iv_gen_ops->post(cc, iv, dmreq);
+
+	return r;
+}
+
 static int crypt_convert_block(struct crypt_config *cc,
 			       struct convert_context *ctx,
 			       struct skcipher_request *req)
@@ -920,6 +1061,7 @@ static void crypt_free_req(struct crypt_config *cc,
 static int crypt_convert(struct crypt_config *cc,
 			 struct convert_context *ctx)
 {
+	unsigned int bulk_mode;
 	int r;
 
 	atomic_set(&ctx->cc_pending, 1);
@@ -930,7 +1072,14 @@ static int crypt_convert(struct crypt_config *cc,
 
 		atomic_inc(&ctx->cc_pending);
 
-		r = crypt_convert_block(cc, ctx, ctx->req);
+		bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+		if (!bulk_mode) {
+			r = crypt_convert_block(cc, ctx, ctx->req);
+		} else {
+			r = crypt_convert_bulk_block(cc, ctx, ctx->req);
+			if (r == -EINVAL)
+				r = crypt_convert_block(cc, ctx, ctx->req);
+		}
 
 		switch (r) {
 		/*
@@ -1081,6 +1230,7 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
 	if (io->ctx.req)
 		crypt_free_req(cc, io->ctx.req, base_bio);
 
+	crypt_reinit_sg_table(cc);
 	base_bio->bi_error = error;
 	bio_endio(base_bio);
 }
@@ -1563,6 +1713,9 @@ static void crypt_dtr(struct dm_target *ti)
 	kzfree(cc->cipher);
 	kzfree(cc->cipher_string);
 
+	sg_free_table(&cc->sgt_in);
+	sg_free_table(&cc->sgt_out);
+
 	/* Must zero key material before freeing */
 	kzfree(cc);
 }
@@ -1718,6 +1871,10 @@ static int crypt_ctr_cipher(struct dm_target *ti,
 		}
 	}
 
+	ret = crypt_alloc_sg_table(cc);
+	if (ret)
+		DMWARN("Allocate sg table for bulk mode failed");
+
 	ret = 0;
 bad:
 	kfree(cipher_api);