From patchwork Wed May 6 06:08:20 2015
X-Patchwork-Submitter: Ming Lin
X-Patchwork-Id: 6346701
X-Patchwork-Delegate: snitzer@redhat.com
Message-ID: <5549AFD4.5020803@kernel.org>
Date: Tue, 05 May 2015 23:08:20 -0700
From: Ming Lin
To: Christoph Hellwig
Cc: Mike Snitzer, Ming Lei, Keith Busch, dm-devel@redhat.com,
    Joshua Morris, Alasdair Kergon, Lars Ellenberg, Philip Kelleher,
    Dongsu Park, Christoph Hellwig, Kent Overstreet, Nitin Gupta,
    Oleg Drokin, Al Viro, Jens Axboe, Andreas Dilger, Geoff Levand,
    Jiri Kosina, linux-kernel@vger.kernel.org, Jim Paris, Minchan Kim,
    drbd-user@lists.linbit.com
In-Reply-To: <20150428114320.GA9790@lst.de>
References: <1430203717-13307-1-git-send-email-mlin@kernel.org>
    <1430203717-13307-2-git-send-email-mlin@kernel.org>
    <20150428114320.GA9790@lst.de>
Subject: Re: [dm-devel] [PATCH 01/10] block: make generic_make_request
    handle arbitrarily sized bios

On 04/28/2015 04:43 AM, Christoph Hellwig wrote:
> This seems to lack support for QUEUE_FLAG_SG_GAPS to work around
> the retarded PRP format in the NVMe driver.  I'm actually not
> sure we have a way to reproduce them for BLOCK_PC requests, but I think
> we should make sure to properly handle them.

How about the incremental patch below? It drops the SG-gap check from
bio_add_pc_page() and applies it in blk_bio_segment_split() instead, so
a bio with a gap gets split at submission time rather than rejected
while it is being built.

 block/bio.c       | 8 --------
 block/blk-merge.c | 8 ++++++++
 2 files changed, 8 insertions(+), 8 deletions(-)
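For reference, the gap test the patch moves around is the existing
bvec_gap_to_prev() helper. Quoting it from memory of the current tree
(so treat this as a sketch and double-check include/linux/blkdev.h),
a vector pair has a gap unless the previous vector ends exactly on a
page boundary and the next one starts at offset 0:

static inline bool bvec_gap_to_prev(struct bio_vec *bprv, unsigned int offset)
{
        /*
         * QUEUE_FLAG_SG_GAPS hardware (e.g. NVMe with its PRP lists)
         * cannot describe an SG element that starts mid-page, nor a
         * predecessor that ends mid-page.
         */
        return offset || ((bprv->bv_offset + bprv->bv_len) & (PAGE_SIZE - 1));
}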
diff --git a/block/bio.c b/block/bio.c
index ae31cdb..3f6bd9a 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -743,14 +743,6 @@ int bio_add_pc_page(struct request_queue *q, struct bio *bio, struct page
 			bio->bi_iter.bi_size += len;
 			goto done;
 		}
-
-		/*
-		 * If the queue doesn't support SG gaps and adding this
-		 * offset would create a gap, disallow it.
-		 */
-		if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS) &&
-		    bvec_gap_to_prev(prev, offset))
-			return 0;
 	}
 
 	if (bio->bi_vcnt >= bio->bi_max_vecs)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 9d565a0..32f6d6c 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -78,6 +78,14 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
 		if (sectors > queue_max_sectors(q))
 			goto split;
 
+		/*
+		 * If the queue doesn't support SG gaps and adding this
+		 * offset would create a gap, disallow it.
+		 */
+		if (q->queue_flags & (1 << QUEUE_FLAG_SG_GAPS) &&
+		    prev && bvec_gap_to_prev(&bvprv, bv.bv_offset))
+			goto split;
+
 		if (prev && blk_queue_cluster(q)) {
 			if (seg_size + bv.bv_len > queue_max_segment_size(q))
 				goto new_segment;
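To make the new check concrete, a rough illustration with made-up
vectors (not part of the patch; assumes the helper sketched above and
the usual <linux/bio.h> definitions) of when bvec_gap_to_prev() fires
and the bio would now be split for a QUEUE_FLAG_SG_GAPS queue:

        struct bio_vec prev_full  = { .bv_offset = 0, .bv_len = PAGE_SIZE };
        struct bio_vec prev_short = { .bv_offset = 0, .bv_len = 512 };

        bvec_gap_to_prev(&prev_full, 0);     /* false: prev ends on a page
                                                boundary, next starts at 0 */
        bvec_gap_to_prev(&prev_full, 512);   /* true: next starts mid-page */
        bvec_gap_to_prev(&prev_short, 0);    /* true: prev ends mid-page */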