From patchwork Thu Nov 15 08:53:00 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10683835
From: Ming Lei
To: Jens Axboe
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Ming Lei, Dave Chinner, Kent Overstreet, Mike Snitzer, dm-devel@redhat.com,
 Alexander Viro, linux-fsdevel@vger.kernel.org, Shaohua Li, linux-raid@vger.kernel.org,
 linux-erofs@lists.ozlabs.org, David Sterba, linux-btrfs@vger.kernel.org,
 "Darrick J. Wong", linux-xfs@vger.kernel.org, Gao Xiang, Christoph Hellwig,
 Theodore Ts'o, linux-ext4@vger.kernel.org, Coly Li, linux-bcache@vger.kernel.org,
 Boaz Harrosh, Bob Peterson, cluster-devel@redhat.com
Subject: [PATCH V10 13/19] iomap & xfs: only account for new added page
Date: Thu, 15 Nov 2018 16:53:00 +0800
Message-Id: <20181115085306.9910-14-ming.lei@redhat.com>
In-Reply-To: <20181115085306.9910-1-ming.lei@redhat.com>
References: <20181115085306.9910-1-ming.lei@redhat.com>

After multi-page bvec is enabled, a newly added page may be merged into an
existing segment of the current bio instead of starting a new segment. This
patch deals with the issue by checking after the merge attempt, so that iomap
& xfs only account for a page that is freshly added to the bio.
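The accounting rule that both call sites follow after this change can be
sketched as below. This is an illustration only: iomap_account_read_page() is
a made-up name, and the real logic stays inline in iomap_readpage_actor()
(with a write_count counterpart in xfs_add_to_ioend()).

/*
 * Illustrative sketch (not part of the patch): account a page exactly once,
 * no matter whether it started a new segment or was merged into the last
 * bvec of the bio.
 */
static void iomap_account_read_page(struct iomap_page *iop, struct bio *bio,
                struct page *page, unsigned int len, unsigned int poff,
                bool merged)
{
        bool need_account;

        if (merged)
                /* merged case: count it only if it is the freshly added page */
                need_account = iop && bio_is_last_segment(bio, page, len, poff);
        else
                /* a brand new segment always carries a newly added page */
                need_account = true;

        if (iop && need_account)
                atomic_inc(&iop->read_count);
}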
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-devel@redhat.com
Cc: Alexander Viro
Cc: linux-fsdevel@vger.kernel.org
Cc: Shaohua Li
Cc: linux-raid@vger.kernel.org
Cc: linux-erofs@lists.ozlabs.org
Cc: David Sterba
Cc: linux-btrfs@vger.kernel.org
Cc: Darrick J. Wong
Cc: linux-xfs@vger.kernel.org
Cc: Gao Xiang
Cc: Christoph Hellwig
Cc: Theodore Ts'o
Cc: linux-ext4@vger.kernel.org
Cc: Coly Li
Cc: linux-bcache@vger.kernel.org
Cc: Boaz Harrosh
Cc: Bob Peterson
Cc: cluster-devel@redhat.com
Signed-off-by: Ming Lei
---
 fs/iomap.c          | 22 ++++++++++++++--------
 fs/xfs/xfs_aops.c   | 10 ++++++++--
 include/linux/bio.h | 11 +++++++++++
 3 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/fs/iomap.c b/fs/iomap.c
index df0212560b36..a1b97a5c726a 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -288,6 +288,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	loff_t orig_pos = pos;
 	unsigned poff, plen;
 	sector_t sector;
+	bool need_account = false;
 
 	if (iomap->type == IOMAP_INLINE) {
 		WARN_ON_ONCE(pos);
@@ -313,18 +314,15 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	 */
 	sector = iomap_sector(iomap, pos);
 	if (ctx->bio && bio_end_sector(ctx->bio) == sector) {
-		if (__bio_try_merge_page(ctx->bio, page, plen, poff))
+		if (__bio_try_merge_page(ctx->bio, page, plen, poff)) {
+			need_account = iop && bio_is_last_segment(ctx->bio,
+					page, plen, poff);
 			goto done;
+		}
 		is_contig = true;
 	}
 
-	/*
-	 * If we start a new segment we need to increase the read count, and we
-	 * need to do so before submitting any previous full bio to make sure
-	 * that we don't prematurely unlock the page.
-	 */
-	if (iop)
-		atomic_inc(&iop->read_count);
+	need_account = true;
 
 	if (!ctx->bio || !is_contig || bio_full(ctx->bio)) {
 		gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
@@ -347,6 +345,14 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	__bio_add_page(ctx->bio, page, plen, poff);
 done:
 	/*
+	 * If we add a new page we need to increase the read count, and we
+	 * need to do so before submitting any previous full bio to make sure
+	 * that we don't prematurely unlock the page.
+	 */
+	if (iop && need_account)
+		atomic_inc(&iop->read_count);
+
+	/*
 	 * Move the caller beyond our range so that it keeps making progress.
 	 * For that we have to include any leading non-uptodate ranges, but
 	 * we can skip trailing ones as they will be handled in the next
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 1f1829e506e8..d8e9cc9f751a 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -603,6 +603,7 @@ xfs_add_to_ioend(
 	unsigned		len = i_blocksize(inode);
 	unsigned		poff = offset & (PAGE_SIZE - 1);
 	sector_t		sector;
+	bool			need_account;
 
 	sector = xfs_fsb_to_db(ip, wpc->imap.br_startblock) +
 		((offset - XFS_FSB_TO_B(mp, wpc->imap.br_startoff)) >> 9);
@@ -617,13 +618,18 @@ xfs_add_to_ioend(
 	}
 
 	if (!__bio_try_merge_page(wpc->ioend->io_bio, page, len, poff)) {
-		if (iop)
-			atomic_inc(&iop->write_count);
+		need_account = true;
 		if (bio_full(wpc->ioend->io_bio))
 			xfs_chain_bio(wpc->ioend, wbc, bdev, sector);
 		__bio_add_page(wpc->ioend->io_bio, page, len, poff);
+	} else {
+		need_account = iop && bio_is_last_segment(wpc->ioend->io_bio,
+				page, len, poff);
 	}
 
+	if (iop && need_account)
+		atomic_inc(&iop->write_count);
+
 	wpc->ioend->io_size += len;
 }
 
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 1a2430a8b89d..5040e9a2eb09 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -341,6 +341,17 @@ static inline struct bio_vec *bio_last_bvec_all(struct bio *bio)
 	return &bio->bi_io_vec[bio->bi_vcnt - 1];
 }
 
+/* iomap needs this helper to deal with sub-pagesize bvec */
+static inline bool bio_is_last_segment(struct bio *bio, struct page *page,
+		unsigned int len, unsigned int off)
+{
+	struct bio_vec bv;
+
+	bvec_last_segment(bio_last_bvec_all(bio), &bv);
+
+	return bv.bv_page == page && bv.bv_len == len && bv.bv_offset == off;
+}
+
 enum bip_flags {
 	BIP_BLOCK_INTEGRITY	= 1 << 0, /* block layer owns integrity data */
 	BIP_MAPPED_INTEGRITY	= 1 << 1, /* ref tag has been remapped */
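For reference, the new bio_is_last_segment() helper relies on
bvec_last_segment(), which is introduced earlier in this series and reduces a
(possibly multi-page) bvec to the single-page segment covering its last byte.
The sketch below shows what such a helper has to compute; it is an assumption
made for illustration, not code copied from the corresponding patch in the
series.

/*
 * Sketch only: map a multi-page bvec to the single-page segment that
 * contains its final byte.
 */
static inline void bvec_last_segment(const struct bio_vec *bvec,
                struct bio_vec *seg)
{
        unsigned int total = bvec->bv_offset + bvec->bv_len;
        unsigned int last_page = (total - 1) / PAGE_SIZE;

        seg->bv_page = nth_page(bvec->bv_page, last_page);

        if (bvec->bv_offset >= last_page * PAGE_SIZE) {
                /* the whole bvec lives inside its last page */
                seg->bv_offset = bvec->bv_offset % PAGE_SIZE;
                seg->bv_len = bvec->bv_len;
        } else {
                /* the last page is covered from its start up to 'total' */
                seg->bv_offset = 0;
                seg->bv_len = total - last_page * PAGE_SIZE;
        }
}

With that, the post-merge check in iomap and xfs reduces to an equality test
between the (page, len, offset) tuple that was just merged and the bio's last
single-page segment.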