From patchwork Wed Sep 16 18:40:05 2020
X-Patchwork-Submitter: Sudhakar Panneerselvam
X-Patchwork-Id: 11781207
X-Patchwork-Delegate: snitzer@redhat.com
From: Sudhakar Panneerselvam
To: agk@redhat.com, snitzer@redhat.com, dm-devel@redhat.com
Cc: shirley.ma@oracle.com, ssudhakarp@gmail.com, martin.petersen@oracle.com
Date: Wed, 16 Sep 2020 18:40:05 +0000
Message-Id: <1600281606-1446-2-git-send-email-sudhakar.panneerselvam@oracle.com>
In-Reply-To: <1600281606-1446-1-git-send-email-sudhakar.panneerselvam@oracle.com>
References: <1600281606-1446-1-git-send-email-sudhakar.panneerselvam@oracle.com>
Subject: [dm-devel] [RFC PATCH 1/2] dm crypt: Allow unaligned bio buffer lengths for skcipher devices

crypt_convert_block_skcipher() rejects an I/O if its buffer length is not a
multiple of the sector size. This assumption holds as long as the I/Os
originate within Linux. But in a QEMU environment with Windows guests using
the vhost-scsi interface, the block layer was observed to receive bio
requests whose individual buffer lengths are not always aligned to the
sector size. As a result, the Windows guest fails to boot, or fails to
format block devices that are backed by dm-crypt.

Not all low-level block drivers require the DMA alignment to be a multiple
of 512 bytes; iSCSI, NVMe, SBP, MegaRaid and qla2xxx are among the LLDs
with more relaxed buffer-alignment constraints. Hence, rejecting I/Os on
the assumption that buffer lengths are aligned to the sector size is not
correct.

crypt_map() already ensures that every I/O is a multiple of the sector
size. So if, by the time an I/O reaches crypt_convert_block_skcipher(),
the data for a sector is not fully contained in one bio vector, the sector
data is simply scattered across two bio vectors. This patch therefore
removes the buffer length check and adds code that prepares the
scatterlist appropriately when sector data is split between two bio
vectors.
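For readers following along outside the kernel tree, the two-entry
scatterlist idea behind the patch's build_split_sg() can be sketched as a
small userspace C model. The struct names and the map_sector() helper below
are hypothetical simplifications, not the kernel's struct bio_vec /
scatterlist API:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model (not kernel code): a "vec" is one bio vector's byte
 * run; an "sg" entry names a slice of some run. */
struct vec { const unsigned char *base; size_t len; };
struct sg  { const unsigned char *addr; size_t len; };

/* Describe one sector as sg entries. When the current vector holds
 * fewer bytes than a full sector, use two entries: the tail of this
 * vector plus the head of the next one. Returns entries used (1 or 2). */
static int map_sector(struct sg sgl[2], const struct vec *v,
                      size_t *vi, size_t *voff, size_t sector_size)
{
    size_t avail = v[*vi].len - *voff;

    if (avail >= sector_size) {
        sgl[0].addr = v[*vi].base + *voff;
        sgl[0].len = sector_size;
        *voff += sector_size;
        return 1;
    }
    sgl[0].addr = v[*vi].base + *voff;   /* first part of the sector */
    sgl[0].len = avail;
    (*vi)++;                             /* advance to the next vector */
    sgl[1].addr = v[*vi].base;           /* remainder of the sector */
    sgl[1].len = sector_size - avail;
    *voff = sgl[1].len;
    return 2;
}
```

The return value plays the same role as bytes_first_page in the patch: the
caller knows how much of the sector was already consumed from the first
vector, so it can adjust how far to advance the iterator afterwards.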
With this change, Windows was able to boot successfully from a block device
backed by a dm-crypt device in a QEMU environment.

Signed-off-by: Sudhakar Panneerselvam
---
 drivers/md/dm-crypt.c | 50 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 40 insertions(+), 10 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 380386c36921..9c26ad08732f 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1275,6 +1275,23 @@ static void *iv_tag_from_dmreq(struct crypt_config *cc,
 	return tag_from_dmreq(cc, dmreq) + cc->integrity_tag_size;
 }
 
+static int build_split_sg(struct scatterlist *sg, struct bio_vec *bvec,
+			  struct bio *bio, struct bvec_iter *iter,
+			  unsigned short int sector_size)
+{
+	int bytes_first_page;
+
+	bytes_first_page = bvec->bv_len;
+	sg_set_page(sg, bvec->bv_page, bvec->bv_len, bvec->bv_offset);
+	bio_advance_iter(bio, iter, bvec->bv_len);
+	*bvec = bio_iter_iovec(bio, *iter);
+	sg++;
+	sg_set_page(sg, bvec->bv_page, sector_size - bytes_first_page,
+		    bvec->bv_offset);
+
+	return bytes_first_page;
+}
+
 static int crypt_convert_block_aead(struct crypt_config *cc,
 				    struct convert_context *ctx,
 				    struct aead_request *req,
@@ -1379,15 +1396,12 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
 	struct bio_vec bv_in = bio_iter_iovec(ctx->bio_in, ctx->iter_in);
 	struct bio_vec bv_out = bio_iter_iovec(ctx->bio_out, ctx->iter_out);
 	struct scatterlist *sg_in, *sg_out;
+	int src_split = 0, dst_split = 0;
 	struct dm_crypt_request *dmreq;
 	u8 *iv, *org_iv, *tag_iv;
 	__le64 *sector;
 	int r = 0;
 
-	/* Reject unexpected unaligned bio. */
-	if (unlikely(bv_in.bv_len & (cc->sector_size - 1)))
-		return -EIO;
-
 	dmreq = dmreq_of_req(cc, req);
 	dmreq->iv_sector = ctx->cc_sector;
 	if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
@@ -1407,11 +1421,25 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
 	sg_in = &dmreq->sg_in[0];
 	sg_out = &dmreq->sg_out[0];
 
-	sg_init_table(sg_in, 1);
-	sg_set_page(sg_in, bv_in.bv_page, cc->sector_size, bv_in.bv_offset);
+	if (unlikely(bv_in.bv_len < cc->sector_size)) {
+		sg_init_table(sg_in, 2);
+		src_split = build_split_sg(sg_in, &bv_in, ctx->bio_in,
+					   &ctx->iter_in, cc->sector_size);
+	} else {
+		sg_init_table(sg_in, 1);
+		sg_set_page(sg_in, bv_in.bv_page, cc->sector_size,
+			    bv_in.bv_offset);
+	}
 
-	sg_init_table(sg_out, 1);
-	sg_set_page(sg_out, bv_out.bv_page, cc->sector_size, bv_out.bv_offset);
+	if (unlikely(bv_out.bv_len < cc->sector_size)) {
+		sg_init_table(sg_out, 2);
+		dst_split = build_split_sg(sg_out, &bv_out, ctx->bio_out,
+					   &ctx->iter_out, cc->sector_size);
+	} else {
+		sg_init_table(sg_out, 1);
+		sg_set_page(sg_out, bv_out.bv_page, cc->sector_size,
+			    bv_out.bv_offset);
+	}
 
 	if (cc->iv_gen_ops) {
 		/* For READs use IV stored in integrity metadata */
@@ -1442,8 +1470,10 @@ static int crypt_convert_block_skcipher(struct crypt_config *cc,
 	if (!r && cc->iv_gen_ops && cc->iv_gen_ops->post)
 		r = cc->iv_gen_ops->post(cc, org_iv, dmreq);
 
-	bio_advance_iter(ctx->bio_in, &ctx->iter_in, cc->sector_size);
-	bio_advance_iter(ctx->bio_out, &ctx->iter_out, cc->sector_size);
+	bio_advance_iter(ctx->bio_in, &ctx->iter_in,
+			 cc->sector_size - src_split);
+	bio_advance_iter(ctx->bio_out, &ctx->iter_out,
+			 cc->sector_size - dst_split);
 
 	return r;
 }

From patchwork Wed Sep 16 18:40:06 2020
X-Patchwork-Submitter: Sudhakar Panneerselvam
X-Patchwork-Id: 11781053
X-Patchwork-Delegate: snitzer@redhat.com
From: Sudhakar Panneerselvam
To: agk@redhat.com, snitzer@redhat.com, dm-devel@redhat.com
Cc: shirley.ma@oracle.com, ssudhakarp@gmail.com, martin.petersen@oracle.com
Date: Wed, 16 Sep 2020 18:40:06 +0000
Message-Id: <1600281606-1446-3-git-send-email-sudhakar.panneerselvam@oracle.com>
In-Reply-To: <1600281606-1446-1-git-send-email-sudhakar.panneerselvam@oracle.com>
References: <1600281606-1446-1-git-send-email-sudhakar.panneerselvam@oracle.com>
Subject: [dm-devel] [RFC PATCH 2/2] dm crypt: Handle unaligned bio buffer lengths for lmk and tcw
Use the sg_miter_* APIs to process unaligned buffer lengths while handling
bio buffers for the lmk and tcw IV generation algorithms.

Signed-off-by: Sudhakar Panneerselvam
---
 drivers/md/dm-crypt.c | 104 +++++++++++++++++++++++++++++++++-----------------
 1 file changed, 68 insertions(+), 36 deletions(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 9c26ad08732f..c40ada41d8ef 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -471,11 +471,13 @@ static int crypt_iv_lmk_wipe(struct crypt_config *cc)
 
 static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
 			    struct dm_crypt_request *dmreq,
-			    u8 *data)
+			    struct scatterlist *sg)
 {
 	struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk;
 	SHASH_DESC_ON_STACK(desc, lmk->hash_tfm);
+	struct sg_mapping_iter miter;
 	struct md5_state md5state;
+	size_t len = 16 * 31;
 	__le32 buf[4];
 	int i, r;
 
@@ -492,7 +494,19 @@ static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
 	}
 
 	/* Sector is always 512B, block size 16, add data of blocks 1-31 */
-	r = crypto_shash_update(desc, data + 16, 16 * 31);
+	sg_miter_start(&miter, sg, sg_nents(sg),
+		       SG_MITER_ATOMIC | SG_MITER_FROM_SG);
+	sg_miter_skip(&miter, 16);
+	while (sg_miter_next(&miter) && len > 0) {
+		size_t hash_len = min_t(size_t, miter.length, len);
+
+		r = crypto_shash_update(desc, miter.addr, hash_len);
+		if (r)
+			break;
+		len -= hash_len;
+	}
+	sg_miter_stop(&miter);
+
 	if (r)
 		return r;
 
@@ -520,15 +534,11 @@ static int crypt_iv_lmk_one(struct crypt_config *cc, u8 *iv,
 static int crypt_iv_lmk_gen(struct crypt_config *cc, u8 *iv,
 			    struct dm_crypt_request *dmreq)
 {
-	struct scatterlist *sg;
-	u8 *src;
 	int r = 0;
 
 	if (bio_data_dir(dmreq->ctx->bio_in) == WRITE) {
-		sg = crypt_get_sg_data(cc, dmreq->sg_in);
-		src = kmap_atomic(sg_page(sg));
-		r = crypt_iv_lmk_one(cc, iv, dmreq, src + sg->offset);
-		kunmap_atomic(src);
+		r = crypt_iv_lmk_one(cc, iv, dmreq,
+				     crypt_get_sg_data(cc, dmreq->sg_in));
 	} else
 		memset(iv, 0, cc->iv_size);
 
@@ -538,22 +548,32 @@ static int crypt_iv_lmk_gen(struct crypt_config *cc, u8 *iv,
 static int crypt_iv_lmk_post(struct crypt_config *cc, u8 *iv,
 			     struct dm_crypt_request *dmreq)
 {
+	struct sg_mapping_iter miter;
 	struct scatterlist *sg;
-	u8 *dst;
-	int r;
+	int r, offset = 0;
+	size_t len;
 
 	if (bio_data_dir(dmreq->ctx->bio_in) == WRITE)
 		return 0;
 
 	sg = crypt_get_sg_data(cc, dmreq->sg_out);
-	dst = kmap_atomic(sg_page(sg));
-	r = crypt_iv_lmk_one(cc, iv, dmreq, dst + sg->offset);
+	r = crypt_iv_lmk_one(cc, iv, dmreq, sg);
+	if (r)
+		return r;
 
 	/* Tweak the first block of plaintext sector */
-	if (!r)
-		crypto_xor(dst + sg->offset, iv, cc->iv_size);
+	len = cc->iv_size;
+	sg_miter_start(&miter, sg, sg_nents(sg),
+		       SG_MITER_ATOMIC | SG_MITER_TO_SG);
+	while (sg_miter_next(&miter) && len > 0) {
+		size_t xor_len = min_t(size_t, miter.length, len);
+
+		crypto_xor(miter.addr, iv + offset, xor_len);
+		len -= xor_len;
+		offset += xor_len;
+	}
+	sg_miter_stop(&miter);
 
-	kunmap_atomic(dst);
 	return r;
 }
 
@@ -627,12 +647,14 @@ static int crypt_iv_tcw_wipe(struct crypt_config *cc)
 
 static int crypt_iv_tcw_whitening(struct crypt_config *cc,
 				  struct dm_crypt_request *dmreq,
-				  u8 *data)
+				  struct scatterlist *sg)
 {
 	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
 	__le64 sector = cpu_to_le64(dmreq->iv_sector);
+	struct sg_mapping_iter miter;
 	u8 buf[TCW_WHITENING_SIZE];
 	SHASH_DESC_ON_STACK(desc, tcw->crc32_tfm);
+	size_t remain, sgoffset = 0;
 	int i, r;
 
 	/* xor whitening with sector number */
@@ -656,8 +678,31 @@ static int crypt_iv_tcw_whitening(struct crypt_config *cc,
 	crypto_xor(&buf[4], &buf[8], 4);
 
 	/* apply whitening (8 bytes) to whole sector */
-	for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++)
-		crypto_xor(data + i * 8, buf, 8);
+	sg_miter_start(&miter, sg, sg_nents(sg),
+		       SG_MITER_ATOMIC | SG_MITER_TO_SG);
+	sg_miter_next(&miter);
+	remain = miter.length;
+	for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++) {
+		size_t len = 8, offset = 0;
+
+		while (len > 0) {
+			size_t xor_len = min_t(size_t, remain, len);
+
+			crypto_xor(miter.addr + sgoffset, buf + offset,
+				   xor_len);
+			len -= xor_len;
+			remain -= xor_len;
+			offset += xor_len;
+			sgoffset += xor_len;
+			if (remain == 0) {
+				sg_miter_next(&miter);
+				sgoffset = 0;
+				remain = miter.length;
+			}
+		}
+	}
+	sg_miter_stop(&miter);
+
 out:
 	memzero_explicit(buf, sizeof(buf));
 	return r;
@@ -666,19 +711,14 @@ static int crypt_iv_tcw_whitening(struct crypt_config *cc,
 static int crypt_iv_tcw_gen(struct crypt_config *cc, u8 *iv,
 			    struct dm_crypt_request *dmreq)
 {
-	struct scatterlist *sg;
 	struct iv_tcw_private *tcw = &cc->iv_gen_private.tcw;
 	__le64 sector = cpu_to_le64(dmreq->iv_sector);
-	u8 *src;
 	int r = 0;
 
 	/* Remove whitening from ciphertext */
-	if (bio_data_dir(dmreq->ctx->bio_in) != WRITE) {
-		sg = crypt_get_sg_data(cc, dmreq->sg_in);
-		src = kmap_atomic(sg_page(sg));
-		r = crypt_iv_tcw_whitening(cc, dmreq, src + sg->offset);
-		kunmap_atomic(src);
-	}
+	if (bio_data_dir(dmreq->ctx->bio_in) != WRITE)
+		r = crypt_iv_tcw_whitening(cc, dmreq,
+					   crypt_get_sg_data(cc, dmreq->sg_in));
 
 	/* Calculate IV */
 	crypto_xor_cpy(iv, tcw->iv_seed, (u8 *)&sector, 8);
@@ -692,20 +732,12 @@ static int crypt_iv_tcw_gen(struct crypt_config *cc, u8 *iv,
 static int crypt_iv_tcw_post(struct crypt_config *cc, u8 *iv,
 			     struct dm_crypt_request *dmreq)
 {
-	struct scatterlist *sg;
-	u8 *dst;
-	int r;
-
 	if (bio_data_dir(dmreq->ctx->bio_in) != WRITE)
 		return 0;
 
 	/* Apply whitening on ciphertext */
-	sg = crypt_get_sg_data(cc, dmreq->sg_out);
-	dst = kmap_atomic(sg_page(sg));
-	r = crypt_iv_tcw_whitening(cc, dmreq, dst + sg->offset);
-	kunmap_atomic(dst);
-
-	return r;
+	return crypt_iv_tcw_whitening(cc, dmreq,
+				      crypt_get_sg_data(cc, dmreq->sg_out));
 }
 
 static int crypt_iv_random_gen(struct crypt_config *cc, u8 *iv,
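As a userspace illustration (not kernel code) of why the sg_miter-style
walk in the whitening path is safe: XOR whitening applied across
arbitrarily split segments produces the same bytes as whitening one
contiguous sector. The sketch below uses hypothetical types and works
byte-at-a-time for clarity, whereas the patch XORs in chunks:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* One mapped segment of a sector, as an sg_miter step would expose it. */
struct seg { unsigned char *addr; size_t len; };

/* Apply an 8-byte XOR whitening pattern across a sector that may be
 * split into several segments, without assuming any segment boundary
 * falls on an 8-byte multiple. */
static void xor_whiten_segmented(struct seg *segs,
                                 const unsigned char pat[8],
                                 size_t sector_bytes)
{
    size_t si = 0, soff = 0, poff = 0;

    for (size_t done = 0; done < sector_bytes; done++) {
        segs[si].addr[soff] ^= pat[poff];
        if (++soff == segs[si].len) {   /* hop to the next segment */
            si++;
            soff = 0;
        }
        if (++poff == 8)                /* pattern repeats every 8 bytes */
            poff = 0;
    }
}
```

The same reasoning applies to the lmk hash update: feeding the hash each
segment in order is equivalent to hashing the contiguous sector, which is
why only the buffer walk, not the algorithm, has to change.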