From patchwork Thu Aug 27 15:37:45 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11740941
From: Christoph Hellwig
To: axboe@kernel.dk
Cc: linux-block@vger.kernel.org
Subject: [PATCH 1/4] block: remove the BIO_NULL_MAPPED flag
Date: Thu, 27 Aug 2020 17:37:45 +0200
Message-Id: <20200827153748.378424-2-hch@lst.de>
In-Reply-To: <20200827153748.378424-1-hch@lst.de>
References: <20200827153748.378424-1-hch@lst.de>

We can simply use a boolean flag in the bio_map_data data structure
instead.
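To illustrate the pattern outside the kernel: single-bit bool bitfields
keep the state inside the structure that owns it, so no bit in the
shared bio flags word is consumed.  A minimal standalone sketch
(hypothetical struct and variable names, not the kernel code):

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-in for bio_map_data: two 1-bit flags are
	 * packed into the structure itself, so no bits in a separate,
	 * shared flags word are needed. */
	struct map_data_example {
		bool is_our_pages : 1;
		bool is_null_mapped : 1;
	};

	int main(void)
	{
		struct map_data_example bmd = { 0 };
		void *map_data = NULL;	/* pretend no caller-provided pages */

		/* Equivalent of "bmd->is_our_pages = !map_data": a plain
		 * bool assignment replaces the old "map_data ? 0 : 1". */
		bmd.is_our_pages = !map_data;
		bmd.is_null_mapped = true;

		printf("is_our_pages=%d is_null_mapped=%d\n",
		       bmd.is_our_pages, bmd.is_null_mapped);
		return 0;
	}

Because bio_copy_user_iov() already stores the bio_map_data pointer in
bio->bi_private, the flag travels with the mapping state at no extra
cost.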
Signed-off-by: Christoph Hellwig
---
 block/blk-map.c           | 9 +++++----
 include/linux/blk_types.h | 1 -
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 6e804892d5ec6a..51e6195f878d3c 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -12,7 +12,8 @@
 #include "blk.h"
 
 struct bio_map_data {
-	int is_our_pages;
+	bool is_our_pages : 1;
+	bool is_null_mapped : 1;
 	struct iov_iter iter;
 	struct iovec iov[];
 };
@@ -108,7 +109,7 @@ static int bio_uncopy_user(struct bio *bio)
 	struct bio_map_data *bmd = bio->bi_private;
 	int ret = 0;
 
-	if (!bio_flagged(bio, BIO_NULL_MAPPED)) {
+	if (!bmd || !bmd->is_null_mapped) {
 		/*
 		 * if we're in a workqueue, the request is orphaned, so
 		 * don't copy into a random user address space, just free
@@ -158,7 +159,7 @@ static struct bio *bio_copy_user_iov(struct request_queue *q,
 	 * The caller provided iov might point to an on-stack or otherwise
 	 * shortlived one.
 	 */
-	bmd->is_our_pages = map_data ? 0 : 1;
+	bmd->is_our_pages = !map_data;
 
 	nr_pages = DIV_ROUND_UP(offset + len, PAGE_SIZE);
 	if (nr_pages > BIO_MAX_PAGES)
@@ -234,7 +235,7 @@ static struct bio *bio_copy_user_iov(struct request_queue *q,
 	bio->bi_private = bmd;
 	if (map_data && map_data->null_mapped)
-		bio_set_flag(bio, BIO_NULL_MAPPED);
+		bmd->is_null_mapped = true;
 	return bio;
 
 cleanup:
 	if (!map_data)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 4ecf4fed171f0d..3d1bd8dad69baf 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -256,7 +256,6 @@ enum {
 	BIO_CLONED,		/* doesn't own data */
 	BIO_BOUNCED,		/* bio is a bounce bio */
 	BIO_USER_MAPPED,	/* contains user pages */
-	BIO_NULL_MAPPED,	/* contains invalid user pages */
 	BIO_WORKINGSET,		/* contains userspace workingset pages */
 	BIO_QUIET,		/* Make BIO Quiet */
 	BIO_CHAIN,		/* chained bio, ->bi_remaining in effect */

From patchwork Thu Aug 27 15:37:46 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11740943
From: Christoph Hellwig
To: axboe@kernel.dk
Cc: linux-block@vger.kernel.org
Subject: [PATCH 2/4] block: remove __blk_rq_unmap_user
Date: Thu, 27 Aug 2020 17:37:46 +0200
Message-Id: <20200827153748.378424-3-hch@lst.de>
In-Reply-To: <20200827153748.378424-1-hch@lst.de>
References: <20200827153748.378424-1-hch@lst.de>

Open code __blk_rq_unmap_user in its two callers.  Neither ever passes
a NULL bio, and one of them can use an existing local variable instead
of the bio flag.

Signed-off-by: Christoph Hellwig
---
 block/blk-map.c | 29 +++++++++++------------------
 1 file changed, 11 insertions(+), 18 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 51e6195f878d3c..10de4809edf9a7 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -558,20 +558,6 @@ int blk_rq_append_bio(struct request *rq, struct bio **bio)
 }
 EXPORT_SYMBOL(blk_rq_append_bio);
 
-static int __blk_rq_unmap_user(struct bio *bio)
-{
-	int ret = 0;
-
-	if (bio) {
-		if (bio_flagged(bio, BIO_USER_MAPPED))
-			bio_unmap_user(bio);
-		else
-			ret = bio_uncopy_user(bio);
-	}
-
-	return ret;
-}
-
 static int __blk_rq_map_user_iov(struct request *rq,
 		struct rq_map_data *map_data, struct iov_iter *iter,
 		gfp_t gfp_mask, bool copy)
@@ -599,7 +585,10 @@ static int __blk_rq_map_user_iov(struct request *rq,
 	 */
 	ret = blk_rq_append_bio(rq, &bio);
 	if (ret) {
-		__blk_rq_unmap_user(orig_bio);
+		if (copy)
+			bio_uncopy_user(orig_bio);
+		else
+			bio_unmap_user(orig_bio);
 		return ret;
 	}
 	bio_get(bio);
@@ -701,9 +690,13 @@ int blk_rq_unmap_user(struct bio *bio)
 		if (unlikely(bio_flagged(bio, BIO_BOUNCED)))
 			mapped_bio = bio->bi_private;
 
-		ret2 = __blk_rq_unmap_user(mapped_bio);
-		if (ret2 && !ret)
-			ret = ret2;
+		if (bio_flagged(mapped_bio, BIO_USER_MAPPED)) {
+			bio_unmap_user(mapped_bio);
+		} else {
+			ret2 = bio_uncopy_user(mapped_bio);
+			if (ret2 && !ret)
+				ret = ret2;
+		}
 
 		mapped_bio = bio;
 		bio = bio->bi_next;
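For illustration, a minimal standalone sketch of the caller-side
dispatch this patch introduces (hypothetical function names, not kernel
code): each caller already knows whether it took the copy path, so it
calls the matching cleanup routine directly instead of funnelling
through a NULL-tolerant helper that re-derives the answer from a bio
flag.

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-ins for bio_unmap_user() and
	 * bio_uncopy_user(). */
	static void unmap_direct(void) { puts("release pinned user pages"); }
	static int uncopy_bounce(void) { puts("copy bounce data back"); return 0; }

	/* Before: one shared helper tested a per-bio flag to pick a path.
	 * After open-coding, a caller that already tracks "copy" in a
	 * local variable dispatches directly; the flag test disappears. */
	static int cleanup(bool copy)
	{
		if (copy)
			return uncopy_bounce();
		unmap_direct();
		return 0;
	}

	int main(void)
	{
		return cleanup(true) | cleanup(false);
	}

Open-coding pays off here because the copy/map decision is already a
local boolean at one call site, making the per-bio flag lookup
redundant there.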
From patchwork Thu Aug 27 15:37:47 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11740945
From: Christoph Hellwig
To: axboe@kernel.dk
Cc: linux-block@vger.kernel.org
Subject: [PATCH 3/4] block: remove __blk_rq_map_user_iov
Date: Thu, 27 Aug 2020 17:37:47 +0200
Message-Id: <20200827153748.378424-4-hch@lst.de>
In-Reply-To: <20200827153748.378424-1-hch@lst.de>
References: <20200827153748.378424-1-hch@lst.de>

Just duplicate a small amount of code in the low-level map-into-the-bio
and copy-into-the-bio routines, leading to code that is much easier to
follow and maintain, with better shared error handling.

Signed-off-by: Christoph Hellwig
---
 block/blk-map.c | 144 ++++++++++++++++++------------------------------
 1 file changed, 54 insertions(+), 90 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 10de4809edf9a7..427962ac2f675f 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -127,24 +127,12 @@ static int bio_uncopy_user(struct bio *bio)
 	return ret;
 }
 
-/**
- * bio_copy_user_iov - copy user data to bio
- * @q: destination block queue
- * @map_data: pointer to the rq_map_data holding pages (if necessary)
- * @iter: iovec iterator
- * @gfp_mask: memory allocation flags
- *
- * Prepares and returns a bio for indirect user io, bouncing data
- * to/from kernel pages as necessary. Must be paired with
- * call bio_uncopy_user() on io completion.
- */
-static struct bio *bio_copy_user_iov(struct request_queue *q,
-		struct rq_map_data *map_data, struct iov_iter *iter,
-		gfp_t gfp_mask)
+static int bio_copy_user_iov(struct request *rq, struct rq_map_data *map_data,
+		struct iov_iter *iter, gfp_t gfp_mask)
 {
 	struct bio_map_data *bmd;
 	struct page *page;
-	struct bio *bio;
+	struct bio *bio, *bounce_bio;
 	int i = 0, ret;
 	int nr_pages;
 	unsigned int len = iter->count;
@@ -152,7 +140,7 @@ static struct bio *bio_copy_user_iov(struct request_queue *q,
 
 	bmd = bio_alloc_map_data(iter, gfp_mask);
 	if (!bmd)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
 
 	/*
 	 * We need to do a deep copy of the iov_iter including the iovecs.
@@ -169,8 +157,7 @@ static struct bio *bio_copy_user_iov(struct request_queue *q,
 	bio = bio_kmalloc(gfp_mask, nr_pages);
 	if (!bio)
 		goto out_bmd;
-
-	ret = 0;
+	bio->bi_opf |= req_op(rq);
 
 	if (map_data) {
 		nr_pages = 1 << map_data->page_order;
@@ -187,7 +174,7 @@ static struct bio *bio_copy_user_iov(struct request_queue *q,
 		if (map_data) {
 			if (i == map_data->nr_entries * nr_pages) {
 				ret = -ENOMEM;
-				break;
+				goto cleanup;
 			}
 
 			page = map_data->pages[i / nr_pages];
@@ -195,14 +182,14 @@ static struct bio *bio_copy_user_iov(struct request_queue *q,
 
 			i++;
 		} else {
-			page = alloc_page(q->bounce_gfp | gfp_mask);
+			page = alloc_page(rq->q->bounce_gfp | gfp_mask);
 			if (!page) {
 				ret = -ENOMEM;
-				break;
+				goto cleanup;
 			}
 		}
 
-		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes) {
+		if (bio_add_pc_page(rq->q, bio, page, bytes, offset) < bytes) {
 			if (!map_data)
 				__free_page(page);
 			break;
@@ -212,9 +199,6 @@ static struct bio *bio_copy_user_iov(struct request_queue *q,
 		offset = 0;
 	}
 
-	if (ret)
-		goto cleanup;
-
 	if (map_data)
 		map_data->offset += bio->bi_iter.bi_size;
 
@@ -236,39 +220,42 @@ static struct bio *bio_copy_user_iov(struct request_queue *q,
 	bio->bi_private = bmd;
 	if (map_data && map_data->null_mapped)
 		bmd->is_null_mapped = true;
-	return bio;
+
+	bounce_bio = bio;
+	ret = blk_rq_append_bio(rq, &bounce_bio);
+	if (ret)
+		goto cleanup;
+
+	/*
+	 * We link the bounce buffer in and could have to traverse it later, so
+	 * we have to get a ref to prevent it from being freed
+	 */
+	bio_get(bounce_bio);
+	return 0;
 cleanup:
 	if (!map_data)
 		bio_free_pages(bio);
 	bio_put(bio);
 out_bmd:
 	kfree(bmd);
-	return ERR_PTR(ret);
+	return ret;
 }
 
-/**
- * bio_map_user_iov - map user iovec into bio
- * @q: the struct request_queue for the bio
- * @iter: iovec iterator
- * @gfp_mask: memory allocation flags
- *
- * Map the user space address into a bio suitable for io to a block
- * device. Returns an error pointer in case of error.
- */
-static struct bio *bio_map_user_iov(struct request_queue *q,
-		struct iov_iter *iter, gfp_t gfp_mask)
+static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
+		gfp_t gfp_mask)
 {
-	unsigned int max_sectors = queue_max_hw_sectors(q);
-	int j;
-	struct bio *bio;
+	unsigned int max_sectors = queue_max_hw_sectors(rq->q);
+	struct bio *bio, *bounce_bio;
 	int ret;
+	int j;
 
 	if (!iov_iter_count(iter))
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
 	bio = bio_kmalloc(gfp_mask, iov_iter_npages(iter, BIO_MAX_PAGES));
 	if (!bio)
-		return ERR_PTR(-ENOMEM);
+		return -ENOMEM;
+	bio->bi_opf |= req_op(rq);
 
 	while (iov_iter_count(iter)) {
 		struct page **pages;
@@ -284,7 +271,7 @@ static struct bio *bio_map_user_iov(struct request_queue *q,
 
 		npages = DIV_ROUND_UP(offs + bytes, PAGE_SIZE);
 
-		if (unlikely(offs & queue_dma_alignment(q))) {
+		if (unlikely(offs & queue_dma_alignment(rq->q))) {
 			ret = -EINVAL;
 			j = 0;
 		} else {
@@ -296,7 +283,7 @@ static struct bio *bio_map_user_iov(struct request_queue *q,
 				if (n > bytes)
 					n = bytes;
 
-				if (!bio_add_hw_page(q, bio, page, n, offs,
+				if (!bio_add_hw_page(rq->q, bio, page, n, offs,
 						max_sectors, &same_page)) {
 					if (same_page)
 						put_page(page);
@@ -323,18 +310,30 @@ static struct bio *bio_map_user_iov(struct request_queue *q,
 	bio_set_flag(bio, BIO_USER_MAPPED);
 
 	/*
-	 * subtle -- if bio_map_user_iov() ended up bouncing a bio,
-	 * it would normally disappear when its bi_end_io is run.
-	 * however, we need it for the unmap, so grab an extra
-	 * reference to it
+	 * Subtle: if we end up needing to bounce a bio, it would normally
+	 * disappear when its bi_end_io is run.  However, we need the original
+	 * bio for the unmap, so grab an extra reference to it
 	 */
 	bio_get(bio);
-	return bio;
 
+	bounce_bio = bio;
+	ret = blk_rq_append_bio(rq, &bounce_bio);
+	if (ret)
+		goto out_put_orig;
+
+	/*
+	 * We link the bounce buffer in and could have to traverse it
+	 * later, so we have to get a ref to prevent it from being freed
+	 */
+	bio_get(bounce_bio);
+	return 0;
+
+out_put_orig:
+	bio_put(bio);
 out_unmap:
 	bio_release_pages(bio, false);
 	bio_put(bio);
-	return ERR_PTR(ret);
+	return ret;
 }
 
 /**
@@ -558,44 +557,6 @@ int blk_rq_append_bio(struct request *rq, struct bio **bio)
 }
 EXPORT_SYMBOL(blk_rq_append_bio);
 
-static int __blk_rq_map_user_iov(struct request *rq,
-		struct rq_map_data *map_data, struct iov_iter *iter,
-		gfp_t gfp_mask, bool copy)
-{
-	struct request_queue *q = rq->q;
-	struct bio *bio, *orig_bio;
-	int ret;
-
-	if (copy)
-		bio = bio_copy_user_iov(q, map_data, iter, gfp_mask);
-	else
-		bio = bio_map_user_iov(q, iter, gfp_mask);
-
-	if (IS_ERR(bio))
-		return PTR_ERR(bio);
-
-	bio->bi_opf &= ~REQ_OP_MASK;
-	bio->bi_opf |= req_op(rq);
-
-	orig_bio = bio;
-
-	/*
-	 * We link the bounce buffer in and could have to traverse it
-	 * later so we have to get a ref to prevent it from being freed
-	 */
-	ret = blk_rq_append_bio(rq, &bio);
-	if (ret) {
-		if (copy)
-			bio_uncopy_user(orig_bio);
-		else
-			bio_unmap_user(orig_bio);
-		return ret;
-	}
-	bio_get(bio);
-
-	return 0;
-}
-
 /**
  * blk_rq_map_user_iov - map user data to a request, for passthrough requests
  * @q:		request queue where request should be inserted
@@ -639,7 +600,10 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
 
 	i = *iter;
 	do {
-		ret =__blk_rq_map_user_iov(rq, map_data, &i, gfp_mask, copy);
+		if (copy)
+			ret = bio_copy_user_iov(rq, map_data, &i, gfp_mask);
+		else
+			ret = bio_map_user_iov(rq, &i, gfp_mask);
 		if (ret)
 			goto unmap_rq;
 		if (!bio)
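A minimal standalone sketch of the shared tail that both mapping
routines now duplicate (hypothetical ex_* types, not kernel code):
append the bio to the request, which may substitute a bounce bio, then
take an extra reference so the unmap path can still traverse it.

	#include <stdio.h>

	struct ex_bio { int refs; };
	struct ex_request { struct ex_bio *bio; };

	static void ex_bio_get(struct ex_bio *bio) { bio->refs++; }

	static int ex_append_bio(struct ex_request *rq, struct ex_bio **bio)
	{
		/* The real blk_rq_append_bio() may replace *bio with a
		 * bounce bio; this toy version just links it in. */
		rq->bio = *bio;
		return 0;
	}

	/* The tail bio_copy_user_iov() and bio_map_user_iov() now both
	 * end with, instead of funnelling through __blk_rq_map_user_iov(). */
	static int attach_and_ref(struct ex_request *rq, struct ex_bio *bio)
	{
		struct ex_bio *bounce_bio = bio;
		int ret = ex_append_bio(rq, &bounce_bio);

		if (ret)
			return ret;	/* caller unwinds its own state */
		ex_bio_get(bounce_bio);	/* keep it alive for the unmap */
		return 0;
	}

	int main(void)
	{
		struct ex_bio bio = { .refs = 1 };
		struct ex_request rq = { 0 };

		if (attach_and_ref(&rq, &bio))
			return 1;
		printf("refs after attach: %d\n", bio.refs);
		return 0;
	}

Duplicating this short tail lets each routine keep its own error
unwinding local, which is what makes the shared-helper version
unnecessary.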
From patchwork Thu Aug 27 15:37:48 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11740947
From: Christoph Hellwig
To: axboe@kernel.dk
Cc: linux-block@vger.kernel.org
Subject: [PATCH 4/4] block: remove the BIO_USER_MAPPED flag
Date: Thu, 27 Aug 2020 17:37:48 +0200
Message-Id: <20200827153748.378424-5-hch@lst.de>
In-Reply-To: <20200827153748.378424-1-hch@lst.de>
References: <20200827153748.378424-1-hch@lst.de>

Just check if there is private data, in which case the bio must have
originated from bio_copy_user_iov.

Signed-off-by: Christoph Hellwig
---
 block/blk-map.c           | 10 ++++------
 include/linux/blk_types.h |  1 -
 2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 427962ac2f675f..be118926ccf4e3 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -109,7 +109,7 @@ static int bio_uncopy_user(struct bio *bio)
 	struct bio_map_data *bmd = bio->bi_private;
 	int ret = 0;
 
-	if (!bmd || !bmd->is_null_mapped) {
+	if (!bmd->is_null_mapped) {
 		/*
 		 * if we're in a workqueue, the request is orphaned, so
 		 * don't copy into a random user address space, just free
@@ -307,8 +307,6 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 			break;
 	}
 
-	bio_set_flag(bio, BIO_USER_MAPPED);
-
 	/*
 	 * Subtle: if we end up needing to bounce a bio, it would normally
 	 * disappear when its bi_end_io is run.  However, we need the original
@@ -654,12 +652,12 @@ int blk_rq_unmap_user(struct bio *bio)
 		if (unlikely(bio_flagged(bio, BIO_BOUNCED)))
 			mapped_bio = bio->bi_private;
 
-		if (bio_flagged(mapped_bio, BIO_USER_MAPPED)) {
-			bio_unmap_user(mapped_bio);
-		} else {
+		if (bio->bi_private) {
 			ret2 = bio_uncopy_user(mapped_bio);
 			if (ret2 && !ret)
 				ret = ret2;
+		} else {
+			bio_unmap_user(mapped_bio);
 		}
 
 		mapped_bio = bio;
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 3d1bd8dad69baf..39b1ba6da9ef71 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -255,7 +255,6 @@ enum {
 	BIO_NO_PAGE_REF,	/* don't put release vec pages */
 	BIO_CLONED,		/* doesn't own data */
 	BIO_BOUNCED,		/* bio is a bounce bio */
-	BIO_USER_MAPPED,	/* contains user pages */
 	BIO_WORKINGSET,		/* contains userspace workingset pages */
 	BIO_QUIET,		/* Make BIO Quiet */
 	BIO_CHAIN,		/* chained bio, ->bi_remaining in effect */
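A minimal standalone sketch of the discrimination this patch relies on
(hypothetical ex_* types, not kernel code): only the copy path stores a
bio_map_data pointer in bi_private, so a NULL test on that pointer
replaces the dedicated BIO_USER_MAPPED flag bit.

	#include <stdio.h>

	struct ex_bmd { int dummy; };		/* stand-in for bio_map_data */
	struct ex_bio { struct ex_bmd *bi_private; };

	static void unmap(struct ex_bio *bio)
	{
		/* The pointer itself is the discriminator: non-NULL means
		 * the bio came from the copy (bounce) path. */
		if (bio->bi_private)
			puts("private data present: uncopy path");
		else
			puts("no private data: direct unmap path");
	}

	int main(void)
	{
		struct ex_bmd bmd = { 0 };
		struct ex_bio copied = { .bi_private = &bmd };
		struct ex_bio mapped = { .bi_private = NULL };

		unmap(&copied);
		unmap(&mapped);
		return 0;
	}

Reusing a pointer that is non-NULL on exactly one path as the
discriminator frees a flag bit without widening any structure.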