From patchwork Fri May 12 13:38:54 2023
X-Patchwork-Submitter: "hch@lst.de"
X-Patchwork-Id: 13239315
X-Mailing-List: linux-block@vger.kernel.org
From: Christoph Hellwig
To: Jens Axboe
Cc: Jinyoung Choi, linux-block@vger.kernel.org
Subject: [PATCH 1/8] block: tidy up the bio full checks in bio_add_hw_page
Date: Fri, 12 May 2023 06:38:54 -0700
Message-Id: <20230512133901.1053543-2-hch@lst.de>
In-Reply-To: <20230512133901.1053543-1-hch@lst.de>
References: <20230512133901.1053543-1-hch@lst.de>

bio_add_hw_page already checks that the number of bytes being added fits
into the max_hw_sectors limit of the queue.  Remove the call to bio_full
and instead check against the smaller of the number of segments in the
bio and the queue max_segments limit, and do this cheap check before the
more expensive gap to previous check.

Signed-off-by: Christoph Hellwig
---
 block/bio.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 043944fd46ebbc..1528ca0f3df6dc 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1014,6 +1014,10 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		if (bio_try_merge_hw_seg(q, bio, page, len, offset, same_page))
 			return len;
 
+		if (bio->bi_vcnt >=
+		    min(bio->bi_max_vecs, queue_max_segments(q)))
+			return 0;
+
 		/*
 		 * If the queue doesn't support SG gaps and adding this segment
 		 * would create a gap, disallow it.
@@ -1023,12 +1027,6 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 			return 0;
 	}
 
-	if (bio_full(bio, len))
-		return 0;
-
-	if (bio->bi_vcnt >= queue_max_segments(q))
-		return 0;
-
 	bvec_set_page(&bio->bi_io_vec[bio->bi_vcnt], page, len, offset);
 	bio->bi_vcnt++;
 	bio->bi_iter.bi_size += len;

From patchwork Fri May 12 13:38:55 2023
X-Patchwork-Submitter: "hch@lst.de"
X-Patchwork-Id: 13239316
From: Christoph Hellwig
To: Jens Axboe
Cc: Jinyoung Choi, linux-block@vger.kernel.org
Subject: [PATCH 2/8] block: use SECTOR_SHIFT in bio_add_hw_page
Date: Fri, 12 May 2023 06:38:55 -0700
Message-Id: <20230512133901.1053543-3-hch@lst.de>
In-Reply-To: <20230512133901.1053543-1-hch@lst.de>
References: <20230512133901.1053543-1-hch@lst.de>

Use the SECTOR_SHIFT constant instead of the magic number 9.

Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Reviewed-by: Jinyoung Choi
---
 block/bio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 1528ca0f3df6dc..d020065e613cc8 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1007,7 +1007,7 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
 		return 0;
 
-	if (((bio->bi_iter.bi_size + len) >> 9) > max_sectors)
+	if (((bio->bi_iter.bi_size + len) >> SECTOR_SHIFT) > max_sectors)
 		return 0;
 
 	if (bio->bi_vcnt > 0) {

From patchwork Fri May 12 13:38:56 2023
X-Patchwork-Submitter: "hch@lst.de"
X-Patchwork-Id: 13239317
From: Christoph Hellwig
To: Jens Axboe
Cc: Jinyoung Choi, linux-block@vger.kernel.org
Subject: [PATCH 3/8] block: move the BIO_CLONED checks out of __bio_try_merge_page
Date: Fri, 12 May 2023 06:38:56 -0700
Message-Id: <20230512133901.1053543-4-hch@lst.de>
In-Reply-To: <20230512133901.1053543-1-hch@lst.de>
References: <20230512133901.1053543-1-hch@lst.de>

__bio_try_merge_page is a way too low-level helper to assert that the
bio is not cloned.
Move the check into bio_add_page and bio_iov_iter_get_pages instead,
which are the high-level entry points that should enforce this
invariant.  bio_add_hw_page already has this check, covering the third
(indirect) caller of __bio_try_merge_page.

Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Reviewed-by: Jinyoung Choi
---
 block/bio.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index d020065e613cc8..c7bf20a779ebed 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -945,9 +945,6 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
 static bool __bio_try_merge_page(struct bio *bio, struct page *page,
 		unsigned int len, unsigned int off, bool *same_page)
 {
-	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
-		return false;
-
 	if (bio->bi_vcnt > 0) {
 		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
 
@@ -1127,6 +1124,9 @@ int bio_add_page(struct bio *bio, struct page *page,
 {
 	bool same_page = false;
 
+	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
+		return 0;
+
 	if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
 		if (bio_full(bio, len))
 			return 0;
@@ -1328,6 +1328,9 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
 	int ret = 0;
 
+	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
+		return -EIO;
+
 	if (iov_iter_is_bvec(iter)) {
 		bio_iov_bvec_set(bio, iter);
 		iov_iter_advance(iter, bio->bi_iter.bi_size);

From patchwork Fri May 12 13:38:57 2023
X-Patchwork-Submitter: "hch@lst.de"
X-Patchwork-Id: 13239321
From: Christoph Hellwig
To: Jens Axboe
Cc: Jinyoung Choi, linux-block@vger.kernel.org
Subject: [PATCH 4/8] block: move the bi_vcnt check out of __bio_try_merge_page
Date: Fri, 12 May 2023 06:38:57 -0700
Message-Id: <20230512133901.1053543-5-hch@lst.de>
In-Reply-To: <20230512133901.1053543-1-hch@lst.de>
References: <20230512133901.1053543-1-hch@lst.de>
Move the bi_vcnt check out of __bio_try_merge_page and into the two
callers that don't already have it, in preparation for additional
changes to __bio_try_merge_page.

Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Reviewed-by: Jinyoung Choi
---
 block/bio.c | 44 ++++++++++++++++++++++----------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index c7bf20a779ebed..5d2c95e05b1a52 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -945,20 +945,17 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
 static bool __bio_try_merge_page(struct bio *bio, struct page *page,
 		unsigned int len, unsigned int off, bool *same_page)
 {
-	if (bio->bi_vcnt > 0) {
-		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
-
-		if (page_is_mergeable(bv, page, len, off, same_page)) {
-			if (bio->bi_iter.bi_size > UINT_MAX - len) {
-				*same_page = false;
-				return false;
-			}
-			bv->bv_len += len;
-			bio->bi_iter.bi_size += len;
-			return true;
-		}
+	struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+
+	if (!page_is_mergeable(bv, page, len, off, same_page))
+		return false;
+	if (bio->bi_iter.bi_size > UINT_MAX - len) {
+		*same_page = false;
+		return false;
 	}
-	return false;
+	bv->bv_len += len;
+	bio->bi_iter.bi_size += len;
+	return true;
 }
 
 /*
@@ -1127,11 +1124,13 @@ int bio_add_page(struct bio *bio, struct page *page,
 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
 		return 0;
 
-	if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
-		if (bio_full(bio, len))
-			return 0;
-		__bio_add_page(bio, page, len, offset);
-	}
+	if (bio->bi_vcnt > 0 &&
+	    __bio_try_merge_page(bio, page, len, offset, &same_page))
+		return len;
+
+	if (bio_full(bio, len))
+		return 0;
+	__bio_add_page(bio, page, len, offset);
 	return len;
 }
 EXPORT_SYMBOL(bio_add_page);
@@ -1198,13 +1197,14 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
 {
 	bool same_page = false;
 
-	if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
-		__bio_add_page(bio, page, len, offset);
+	if (bio->bi_vcnt > 0 &&
+	    __bio_try_merge_page(bio, page, len, offset, &same_page)) {
+		if (same_page)
+			put_page(page);
 		return 0;
 	}
 
-	if (same_page)
-		put_page(page);
+	__bio_add_page(bio, page, len, offset);
 	return 0;
 }

From patchwork Fri May 12 13:38:58 2023
X-Patchwork-Submitter: "hch@lst.de"
X-Patchwork-Id: 13239319
From: Christoph Hellwig
To: Jens Axboe
Cc: Jinyoung Choi, linux-block@vger.kernel.org
Subject: [PATCH 5/8] block: move the bi_size overflow check in __bio_try_merge_page
Date: Fri, 12 May 2023 06:38:58 -0700
Message-Id: <20230512133901.1053543-6-hch@lst.de>
In-Reply-To: <20230512133901.1053543-1-hch@lst.de>
References: <20230512133901.1053543-1-hch@lst.de>

Checking for availability in bi_size in a function that attempts to
merge into an existing segment is a bit odd, as the limit also applies
when adding a new segment.  This code works fine as we always call
__bio_try_merge_page, but contributes to sub-optimal calling conventions
and doesn't lead to clear code.

Move it to two of the callers instead; the third one already has a more
strict check that includes max_hw_segments anyway.
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Reviewed-by: Jinyoung Choi
---
 block/bio.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 5d2c95e05b1a52..93e6bca3c2239f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -949,10 +949,6 @@ static bool __bio_try_merge_page(struct bio *bio, struct page *page,
 	if (!page_is_mergeable(bv, page, len, off, same_page))
 		return false;
-	if (bio->bi_iter.bi_size > UINT_MAX - len) {
-		*same_page = false;
-		return false;
-	}
 	bv->bv_len += len;
 	bio->bi_iter.bi_size += len;
 	return true;
@@ -1123,6 +1119,8 @@ int bio_add_page(struct bio *bio, struct page *page,
 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
 		return 0;
+	if (bio->bi_iter.bi_size > UINT_MAX - len)
+		return 0;
 
 	if (bio->bi_vcnt > 0 &&
 	    __bio_try_merge_page(bio, page, len, offset, &same_page))
@@ -1197,6 +1195,9 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
 {
 	bool same_page = false;
 
+	if (WARN_ON_ONCE(bio->bi_iter.bi_size > UINT_MAX - len))
+		return -EIO;
+
 	if (bio->bi_vcnt > 0 &&
 	    __bio_try_merge_page(bio, page, len, offset, &same_page)) {
 		if (same_page)

From patchwork Fri May 12 13:38:59 2023
X-Patchwork-Submitter: "hch@lst.de"
X-Patchwork-Id: 13239318
From: Christoph Hellwig
To: Jens Axboe
Cc: Jinyoung Choi, linux-block@vger.kernel.org
Subject: [PATCH 6/8] block: downgrade a bio_full call in bio_add_page
Date: Fri, 12 May 2023 06:38:59 -0700
Message-Id: <20230512133901.1053543-7-hch@lst.de>
In-Reply-To: <20230512133901.1053543-1-hch@lst.de>
References: <20230512133901.1053543-1-hch@lst.de>

bio_add_page already checks that there is space in bi_size a little
earlier.  So after we failed to add to an existing segment, just check
that there is another one available instead of duplicating the bi_size
check.
Signed-off-by: Christoph Hellwig
Reviewed-by: Jinyoung Choi
---
 block/bio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 93e6bca3c2239f..89b1475de0c370 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1126,7 +1126,7 @@ int bio_add_page(struct bio *bio, struct page *page,
 	    __bio_try_merge_page(bio, page, len, offset, &same_page))
 		return len;
 
-	if (bio_full(bio, len))
+	if (bio->bi_vcnt >= bio->bi_max_vecs)
 		return 0;
 	__bio_add_page(bio, page, len, offset);
 	return len;

From patchwork Fri May 12 13:39:00 2023
X-Patchwork-Submitter: "hch@lst.de"
X-Patchwork-Id: 13239322
From: Christoph Hellwig
To: Jens Axboe
Cc: Jinyoung Choi, linux-block@vger.kernel.org
Subject: [PATCH 7/8] block: move the bi_size update out of __bio_try_merge_page
Date: Fri, 12 May 2023 06:39:00 -0700
Message-Id: <20230512133901.1053543-8-hch@lst.de>
In-Reply-To: <20230512133901.1053543-1-hch@lst.de>
References: <20230512133901.1053543-1-hch@lst.de>

The update of bi_size is the only thing in __bio_try_merge_page that
needs a bio.  Move it to the callers, and merge __bio_try_merge_page
and page_is_mergeable into a single bvec_try_merge_page that only takes
the current bvec instead of a full bio.  This will allow reusing this
function for supporting multi-page integrity payload bvecs.
Signed-off-by: Christoph Hellwig
Reviewed-by: Jinyoung Choi
---
 block/bio.c | 57 +++++++++++++++++++----------------------------------
 1 file changed, 20 insertions(+), 37 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 89b1475de0c370..106009707ca1c5 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -903,9 +903,8 @@ static inline bool bio_full(struct bio *bio, unsigned len)
 	return false;
 }
 
-static inline bool page_is_mergeable(const struct bio_vec *bv,
-		struct page *page, unsigned int len, unsigned int off,
-		bool *same_page)
+static bool bvec_try_merge_page(struct bio_vec *bv, struct page *page,
+		unsigned int len, unsigned int off, bool *same_page)
 {
 	size_t bv_end = bv->bv_offset + bv->bv_len;
 	phys_addr_t vec_end_addr = page_to_phys(bv->bv_page) + bv_end - 1;
@@ -919,38 +918,14 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
 		return false;
 
 	*same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
-	if (*same_page)
-		return true;
-	else if (IS_ENABLED(CONFIG_KMSAN))
-		return false;
-	return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE);
-}
-
-/**
- * __bio_try_merge_page - try appending data to an existing bvec.
- * @bio: destination bio
- * @page: start page to add
- * @len: length of the data to add
- * @off: offset of the data relative to @page
- * @same_page: return if the segment has been merged inside the same page
- *
- * Try to add the data at @page + @off to the last bvec of @bio.  This is a
- * useful optimisation for file systems with a block size smaller than the
- * page size.
- *
- * Warn if (@len, @off) crosses pages in case that @same_page is true.
- *
- * Return %true on success or %false on failure.
- */
-static bool __bio_try_merge_page(struct bio *bio, struct page *page,
-		unsigned int len, unsigned int off, bool *same_page)
-{
-	struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+	if (!*same_page) {
+		if (IS_ENABLED(CONFIG_KMSAN))
+			return false;
+		if (bv->bv_page + bv_end / PAGE_SIZE != page + off / PAGE_SIZE)
+			return false;
+	}
 
-	if (!page_is_mergeable(bv, page, len, off, same_page))
-		return false;
 	bv->bv_len += len;
-	bio->bi_iter.bi_size += len;
 	return true;
 }
 
@@ -972,7 +947,7 @@ static bool bio_try_merge_hw_seg(struct request_queue *q, struct bio *bio,
 		return false;
 	if (bv->bv_len + len > queue_max_segment_size(q))
 		return false;
-	return __bio_try_merge_page(bio, page, len, offset, same_page);
+	return bvec_try_merge_page(bv, page, len, offset, same_page);
 }
 
 /**
@@ -1001,8 +976,11 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		return 0;
 
 	if (bio->bi_vcnt > 0) {
-		if (bio_try_merge_hw_seg(q, bio, page, len, offset, same_page))
+		if (bio_try_merge_hw_seg(q, bio, page, len, offset,
+				same_page)) {
+			bio->bi_iter.bi_size += len;
 			return len;
+		}
 
 		if (bio->bi_vcnt >=
 		    min(bio->bi_max_vecs, queue_max_segments(q)))
@@ -1123,8 +1101,11 @@ int bio_add_page(struct bio *bio, struct page *page,
 		return 0;
 
 	if (bio->bi_vcnt > 0 &&
-	    __bio_try_merge_page(bio, page, len, offset, &same_page))
+	    bvec_try_merge_page(&bio->bi_io_vec[bio->bi_vcnt - 1],
+			page, len, offset, &same_page)) {
+		bio->bi_iter.bi_size += len;
 		return len;
+	}
 
 	if (bio->bi_vcnt >= bio->bi_max_vecs)
 		return 0;
@@ -1199,7 +1180,9 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
 		return -EIO;
 
 	if (bio->bi_vcnt > 0 &&
-	    __bio_try_merge_page(bio, page, len, offset, &same_page)) {
+	    bvec_try_merge_page(&bio->bi_io_vec[bio->bi_vcnt - 1],
+			page, len, offset, &same_page)) {
+		bio->bi_iter.bi_size += len;
 		if (same_page)
 			put_page(page);
 		return 0;

From patchwork Fri May 12 13:39:01 2023
X-Patchwork-Submitter: "hch@lst.de"
X-Patchwork-Id: 13239320
From: Christoph Hellwig
To: Jens Axboe
Cc: Jinyoung Choi, linux-block@vger.kernel.org
Subject: [PATCH 8/8] block: don't pass a bio to bio_try_merge_hw_seg
Date: Fri, 12 May 2023 06:39:01 -0700
Message-Id: <20230512133901.1053543-9-hch@lst.de>
In-Reply-To: <20230512133901.1053543-1-hch@lst.de>
References: <20230512133901.1053543-1-hch@lst.de>

There is no good reason to pass the bio to bio_try_merge_hw_seg.  Just
pass the current bvec and rename the function to bvec_try_merge_hw_page.
This will allow reusing this function for supporting multi-page
integrity payload bvecs.

Signed-off-by: Christoph Hellwig
Reviewed-by: Jinyoung Choi
---
 block/bio.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 106009707ca1c5..79e8aa600ddbe2 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -934,11 +934,10 @@ static bool bvec_try_merge_page(struct bio_vec *bv, struct page *page,
  * size limit. This is not for normal read/write bios, but for passthrough
  * or Zone Append operations that we can't split.
  */
-static bool bio_try_merge_hw_seg(struct request_queue *q, struct bio *bio,
-		struct page *page, unsigned len,
-		unsigned offset, bool *same_page)
+static bool bvec_try_merge_hw_page(struct request_queue *q, struct bio_vec *bv,
+		struct page *page, unsigned len, unsigned offset,
+		bool *same_page)
 {
-	struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
 	unsigned long mask = queue_segment_boundary(q);
 	phys_addr_t addr1 = page_to_phys(bv->bv_page) + bv->bv_offset;
 	phys_addr_t addr2 = page_to_phys(page) + offset + len - 1;
@@ -967,8 +966,6 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		unsigned int max_sectors, bool *same_page)
 {
-	struct bio_vec *bvec;
-
 	if (WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)))
 		return 0;
@@ -976,7 +973,9 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		return 0;
 
 	if (bio->bi_vcnt > 0) {
-		if (bio_try_merge_hw_seg(q, bio, page, len, offset,
+		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
+
+		if (bvec_try_merge_hw_page(q, bv, page, len, offset,
 				same_page)) {
 			bio->bi_iter.bi_size += len;
 			return len;
@@ -990,8 +989,7 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		 * If the queue doesn't support SG gaps and adding this segment
 		 * would create a gap, disallow it.
 		 */
-		bvec = &bio->bi_io_vec[bio->bi_vcnt - 1];
-		if (bvec_gap_to_prev(&q->limits, bvec, offset))
+		if (bvec_gap_to_prev(&q->limits, bv, offset))
 			return 0;
 	}