From patchwork Thu Dec 17 07:32:48 2015
X-Patchwork-Submitter: "(Exiting) Baolin Wang"
X-Patchwork-Id: 7869991
From: Baolin Wang
To: axboe@kernel.dk, agk@redhat.com, snitzer@redhat.com, dm-devel@redhat.com
Cc: neilb@suse.com, dan.j.williams@intel.com, martin.petersen@oracle.com,
 sagig@mellanox.com, kent.overstreet@gmail.com, keith.busch@intel.com,
 tj@kernel.org, broonie@kernel.org, arnd@arndb.de, linux-block@vger.kernel.org,
 linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org, baolin.wang@linaro.org
Subject: [PATCH v3 1/2] block: Introduce blk_bio_map_sg() to map one bio
Date: Thu, 17 Dec 2015 15:32:48 +0800
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-block@vger.kernel.org

In dm-crypt, one bio needs to be mapped to a scatterlist to improve
encryption efficiency. This patch therefore introduces the blk_bio_map_sg()
function to map a single bio to a scatterlist.

Signed-off-by: Baolin Wang
---
 block/blk-merge.c      | 45 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/blkdev.h |  3 +++
 2 files changed, 48 insertions(+)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index de5716d8..281b9e5 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -374,6 +374,51 @@ single_segment:
 }
 
 /*
+ * map a bio to scatterlist, return number of sg entries setup.
+ */
+int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+		   struct scatterlist *sglist,
+		   struct scatterlist **sg)
+{
+	struct bio_vec bvec, bvprv = { NULL };
+	struct bvec_iter iter;
+	int nsegs, cluster;
+
+	nsegs = 0;
+	cluster = blk_queue_cluster(q);
+
+	if (bio->bi_rw & REQ_DISCARD) {
+		/*
+		 * This is a hack - drivers should be neither modifying the
+		 * biovec, nor relying on bi_vcnt - but because of
+		 * blk_add_request_payload(), a discard bio may or may not have
+		 * a payload we need to set up here (thank you Christoph) and
+		 * bi_vcnt is really the only way of telling if we need to.
+		 */
+
+		if (bio->bi_vcnt)
+			goto single_segment;
+
+		return 0;
+	}
+
+	if (bio->bi_rw & REQ_WRITE_SAME) {
+single_segment:
+		*sg = sglist;
+		bvec = bio_iovec(bio);
+		sg_set_page(*sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
+		return 1;
+	}
+
+	bio_for_each_segment(bvec, bio, iter)
+		__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
+				     &nsegs, &cluster);
+
+	return nsegs;
+}
+EXPORT_SYMBOL(blk_bio_map_sg);
+
+/*
  * map a request to scatterlist, return number of sg entries setup. Caller
  * must make sure sg can hold rq->nr_phys_segments entries
  */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 3fe27f8..3ca90ac 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1004,6 +1004,9 @@ extern void blk_queue_flush_queueable(struct request_queue *q, bool queueable);
 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev);
 
 extern int blk_rq_map_sg(struct request_queue *, struct request *, struct scatterlist *);
+extern int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+			  struct scatterlist *sglist,
+			  struct scatterlist **sg);
 extern void blk_dump_rq_flags(struct request *, char *);
 extern long nr_blockdev_pages(void);
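
For illustration only, not part of the patch: a minimal sketch of how a caller
such as dm-crypt might use the new helper. The function name example_map_bio
and the nents bound are assumptions made up for this note; the only interface
taken from the patch is blk_bio_map_sg() itself, which returns the number of
sg entries set up and hands back the last entry through **sg without
terminating the list.

    #include <linux/blkdev.h>
    #include <linux/bio.h>
    #include <linux/scatterlist.h>
    #include <linux/errno.h>

    /*
     * Hypothetical caller-side helper: map @bio into @sgl (at most @nents
     * entries) and terminate the list so it can be handed to a DMA or
     * crypto consumer.
     */
    static int example_map_bio(struct request_queue *q, struct bio *bio,
                               struct scatterlist *sgl, unsigned int nents)
    {
            struct scatterlist *sg = NULL;
            int nsegs;

            if (bio_segments(bio) > nents)
                    return -EINVAL;         /* sgl too small for this bio */

            sg_init_table(sgl, nents);
            nsegs = blk_bio_map_sg(q, bio, sgl, &sg);
            if (nsegs)
                    sg_mark_end(sg);        /* blk_bio_map_sg() leaves the list unterminated */

            return nsegs;
    }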