From patchwork Thu May 26 01:06:11 2022
X-Patchwork-Submitter: Keith Busch
X-Patchwork-Id: 12861930
From: Keith Busch
Subject: [PATCHv4 7/9] block/bounce: count bytes instead of sectors
Date: Wed, 25 May 2022 18:06:11 -0700
Message-ID: <20220526010613.4016118-8-kbusch@fb.com>
In-Reply-To: <20220526010613.4016118-1-kbusch@fb.com>
References: <20220526010613.4016118-1-kbusch@fb.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Keith Busch

Individual bv_len's may not be a sector size.
Signed-off-by: Keith Busch
Reviewed-by: Damien Le Moal
Reviewed-by: Pankaj Raghav
---
v3->v4:
  Use sector shift
  Add comment explaining the ALIGN_DOWN
  Use unsigned int type for counting bytes

 block/bounce.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/block/bounce.c b/block/bounce.c
index 8f7b6fe3b4db..f6ae21ec2a70 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -205,19 +205,25 @@ void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig)
 	int rw = bio_data_dir(*bio_orig);
 	struct bio_vec *to, from;
 	struct bvec_iter iter;
-	unsigned i = 0;
+	unsigned i = 0, bytes = 0;
 	bool bounce = false;
-	int sectors = 0;
+	int sectors;
 
 	bio_for_each_segment(from, *bio_orig, iter) {
 		if (i++ < BIO_MAX_VECS)
-			sectors += from.bv_len >> 9;
+			bytes += from.bv_len;
 		if (PageHighMem(from.bv_page))
 			bounce = true;
 	}
 	if (!bounce)
 		return;
 
+	/*
+	 * If the original has more than BIO_MAX_VECS biovecs, the total bytes
+	 * may not be block size aligned. Align down to ensure both sides of
+	 * the split bio are appropriately sized.
+	 */
+	sectors = ALIGN_DOWN(bytes, queue_logical_block_size(q)) >>
+			SECTOR_SHIFT;
 	if (sectors < bio_sectors(*bio_orig)) {
 		bio = bio_split(*bio_orig, sectors, GFP_NOIO, &bounce_bio_split);
 		bio_chain(bio, *bio_orig);