From patchwork Wed May 30 10:00:08 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10438289
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 13/18] xfs: don't look at buffer heads in xfs_add_to_ioend
Date: Wed, 30 May 2018 12:00:08 +0200
Message-Id: <20180530100013.31358-14-hch@lst.de>
In-Reply-To: <20180530100013.31358-1-hch@lst.de>
References: <20180530100013.31358-1-hch@lst.de>

Calculate all information for the bio based on the passed in parameters,
without requiring a buffer_head structure.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
Reviewed-by: Brian Foster
---
 fs/xfs/xfs_aops.c | 68 ++++++++++++++++++++++-------------------------
 1 file changed, 32 insertions(+), 36 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 910b410e5a90..7d02d04d5a5b 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -44,7 +44,6 @@ struct xfs_writepage_ctx {
 	struct xfs_bmbt_irec	imap;
 	unsigned int		io_type;
 	struct xfs_ioend	*ioend;
-	sector_t		last_block;
 };
 
 void
@@ -535,11 +534,6 @@ xfs_start_page_writeback(
 	unlock_page(page);
 }
 
-static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
-{
-	return bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
-}
-
 /*
  * Submit the bio for an ioend. We are passed an ioend with a bio attached to
  * it, and we submit that bio. The ioend may be used for multiple bio
@@ -594,27 +588,20 @@ xfs_submit_ioend(
 	return 0;
 }
 
-static void
-xfs_init_bio_from_bh(
-	struct bio		*bio,
-	struct buffer_head	*bh)
-{
-	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
-	bio_set_dev(bio, bh->b_bdev);
-}
-
 static struct xfs_ioend *
 xfs_alloc_ioend(
 	struct inode		*inode,
 	unsigned int		type,
 	xfs_off_t		offset,
-	struct buffer_head	*bh)
+	struct block_device	*bdev,
+	sector_t		sector)
 {
 	struct xfs_ioend	*ioend;
 	struct bio		*bio;
 
 	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, xfs_ioend_bioset);
-	xfs_init_bio_from_bh(bio, bh);
+	bio_set_dev(bio, bdev);
+	bio->bi_iter.bi_sector = sector;
 
 	ioend = container_of(bio, struct xfs_ioend, io_inline_bio);
 	INIT_LIST_HEAD(&ioend->io_list);
@@ -639,13 +626,14 @@ static void
 xfs_chain_bio(
 	struct xfs_ioend	*ioend,
 	struct writeback_control *wbc,
-	struct buffer_head	*bh)
+	struct block_device	*bdev,
+	sector_t		sector)
 {
 	struct bio *new;
 
 	new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
-	xfs_init_bio_from_bh(new, bh);
-
+	bio_set_dev(new, bdev);
+	new->bi_iter.bi_sector = sector;
 	bio_chain(ioend->io_bio, new);
 	bio_get(ioend->io_bio);		/* for xfs_destroy_ioend */
 	ioend->io_bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
@@ -655,39 +643,45 @@ xfs_chain_bio(
 }
 
 /*
- * Test to see if we've been building up a completion structure for
- * earlier buffers -- if so, we try to append to this ioend if we
- * can, otherwise we finish off any current ioend and start another.
- * Return the ioend we finished off so that the caller can submit it
- * once it has finished processing the dirty page.
+ * Test to see if we have an existing ioend structure that we could append to
+ * first, otherwise finish off the current ioend and start another.
  */
 STATIC void
 xfs_add_to_ioend(
 	struct inode		*inode,
-	struct buffer_head	*bh,
 	xfs_off_t		offset,
+	struct page		*page,
 	struct xfs_writepage_ctx *wpc,
 	struct writeback_control *wbc,
 	struct list_head	*iolist)
 {
+	struct xfs_inode	*ip = XFS_I(inode);
+	struct xfs_mount	*mp = ip->i_mount;
+	struct block_device	*bdev = xfs_find_bdev_for_inode(inode);
+	unsigned		len = i_blocksize(inode);
+	unsigned		poff = offset & (PAGE_SIZE - 1);
+	sector_t		sector;
+
+	sector = xfs_fsb_to_db(ip, wpc->imap.br_startblock) +
+		((offset - XFS_FSB_TO_B(mp, wpc->imap.br_startoff)) >> 9);
+
 	if (!wpc->ioend || wpc->io_type != wpc->ioend->io_type ||
-	    bh->b_blocknr != wpc->last_block + 1 ||
+	    sector != bio_end_sector(wpc->ioend->io_bio) ||
 	    offset != wpc->ioend->io_offset + wpc->ioend->io_size) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type, offset, bh);
+		wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type, offset,
+				bdev, sector);
 	}
 
 	/*
-	 * If the buffer doesn't fit into the bio we need to allocate a new
-	 * one. This shouldn't happen more than once for a given buffer.
+	 * If the block doesn't fit into the bio we need to allocate a new
+	 * one. This shouldn't happen more than once for a given block.
 	 */
-	while (xfs_bio_add_buffer(wpc->ioend->io_bio, bh) != bh->b_size)
-		xfs_chain_bio(wpc->ioend, wbc, bh);
+	while (bio_add_page(wpc->ioend->io_bio, page, len, poff) != len)
+		xfs_chain_bio(wpc->ioend, wbc, bdev, sector);
 
-	wpc->ioend->io_size += bh->b_size;
-	wpc->last_block = bh->b_blocknr;
-	xfs_start_buffer_writeback(bh);
+	wpc->ioend->io_size += len;
 }
 
 STATIC void
@@ -883,7 +877,9 @@ xfs_writepage_map(
 
 			lock_buffer(bh);
 			xfs_map_at_offset(inode, bh, &wpc->imap, file_offset);
-			xfs_add_to_ioend(inode, bh, file_offset, wpc, wbc, &submit_list);
+			xfs_add_to_ioend(inode, file_offset, page, wpc, wbc,
+					&submit_list);
+			xfs_start_buffer_writeback(bh);
 			count++;
 		}