From patchwork Wed May 23 14:43:52 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10421655
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 29/34] xfs: don't look at buffer heads in xfs_add_to_ioend
Date: Wed, 23 May 2018 16:43:52 +0200
Message-Id: <20180523144357.18985-30-hch@lst.de>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180523144357.18985-1-hch@lst.de>
References: <20180523144357.18985-1-hch@lst.de>

Calculate all information for the bio based on the passed in
information without requiring a buffer_head structure.
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_aops.c | 68 ++++++++++++++++++++++------------------------
 1 file changed, 32 insertions(+), 36 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index f01c1dd737ec..592b33b35a30 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -44,7 +44,6 @@ struct xfs_writepage_ctx {
 	struct xfs_bmbt_irec	imap;
 	unsigned int		io_type;
 	struct xfs_ioend	*ioend;
-	sector_t		last_block;
 };
 
 void
@@ -545,11 +544,6 @@ xfs_start_page_writeback(
 	unlock_page(page);
 }
 
-static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
-{
-	return bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
-}
-
 /*
  * Submit the bio for an ioend. We are passed an ioend with a bio attached to
  * it, and we submit that bio. The ioend may be used for multiple bio
@@ -604,27 +598,20 @@ xfs_submit_ioend(
 	return 0;
 }
 
-static void
-xfs_init_bio_from_bh(
-	struct bio		*bio,
-	struct buffer_head	*bh)
-{
-	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
-	bio_set_dev(bio, bh->b_bdev);
-}
-
 static struct xfs_ioend *
 xfs_alloc_ioend(
 	struct inode		*inode,
 	unsigned int		type,
 	xfs_off_t		offset,
-	struct buffer_head	*bh)
+	struct block_device	*bdev,
+	sector_t		sector)
 {
 	struct xfs_ioend	*ioend;
 	struct bio		*bio;
 
 	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, xfs_ioend_bioset);
-	xfs_init_bio_from_bh(bio, bh);
+	bio_set_dev(bio, bdev);
+	bio->bi_iter.bi_sector = sector;
 
 	ioend = container_of(bio, struct xfs_ioend, io_inline_bio);
 	INIT_LIST_HEAD(&ioend->io_list);
@@ -649,13 +636,14 @@ static void
 xfs_chain_bio(
 	struct xfs_ioend	*ioend,
 	struct writeback_control *wbc,
-	struct buffer_head	*bh)
+	struct block_device	*bdev,
+	sector_t		sector)
 {
 	struct bio *new;
 
 	new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
-	xfs_init_bio_from_bh(new, bh);
-
+	bio_set_dev(new, bdev);
+	new->bi_iter.bi_sector = sector;
 	bio_chain(ioend->io_bio, new);
 	bio_get(ioend->io_bio);		/* for xfs_destroy_ioend */
 	ioend->io_bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
@@ -665,39 +653,45 @@ xfs_chain_bio(
 }
 
 /*
- * Test to see if we've been building up a completion structure for
- * earlier buffers -- if so, we try to append to this ioend if we
- * can, otherwise we finish off any current ioend and start another.
- * Return the ioend we finished off so that the caller can submit it
- * once it has finished processing the dirty page.
+ * Test to see if we have an existing ioend structure that we could append to
+ * first, otherwise finish off the current ioend and start another.
  */
 STATIC void
 xfs_add_to_ioend(
 	struct inode		*inode,
-	struct buffer_head	*bh,
 	xfs_off_t		offset,
+	struct page		*page,
 	struct xfs_writepage_ctx *wpc,
 	struct writeback_control *wbc,
 	struct list_head	*iolist)
 {
+	struct xfs_inode	*ip = XFS_I(inode);
+	struct xfs_mount	*mp = ip->i_mount;
+	struct block_device	*bdev = xfs_find_bdev_for_inode(inode);
+	unsigned		len = i_blocksize(inode);
+	unsigned		poff = offset & (PAGE_SIZE - 1);
+	sector_t		sector;
+
+	sector = xfs_fsb_to_db(ip, wpc->imap.br_startblock) +
+		((offset - XFS_FSB_TO_B(mp, wpc->imap.br_startoff)) >> 9);
+
 	if (!wpc->ioend || wpc->io_type != wpc->ioend->io_type ||
-	    bh->b_blocknr != wpc->last_block + 1 ||
+	    sector != bio_end_sector(wpc->ioend->io_bio) ||
 	    offset != wpc->ioend->io_offset + wpc->ioend->io_size) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type, offset, bh);
+		wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type, offset,
+				bdev, sector);
 	}
 
 	/*
-	 * If the buffer doesn't fit into the bio we need to allocate a new
-	 * one. This shouldn't happen more than once for a given buffer.
+	 * If the block doesn't fit into the bio we need to allocate a new
+	 * one. This shouldn't happen more than once for a given block.
 	 */
-	while (xfs_bio_add_buffer(wpc->ioend->io_bio, bh) != bh->b_size)
-		xfs_chain_bio(wpc->ioend, wbc, bh);
+	while (bio_add_page(wpc->ioend->io_bio, page, len, poff) != len)
+		xfs_chain_bio(wpc->ioend, wbc, bdev, sector);
 
-	wpc->ioend->io_size += bh->b_size;
-	wpc->last_block = bh->b_blocknr;
-	xfs_start_buffer_writeback(bh);
+	wpc->ioend->io_size += len;
 }
 
 STATIC void
@@ -893,7 +887,9 @@ xfs_writepage_map(
 			lock_buffer(bh);
 			xfs_map_at_offset(inode, bh, &wpc->imap, file_offset);
-			xfs_add_to_ioend(inode, bh, file_offset, wpc, wbc, &submit_list);
+			xfs_add_to_ioend(inode, file_offset, page, wpc, wbc,
+					&submit_list);
+			xfs_start_buffer_writeback(bh);
 			count++;
 		}