From patchwork Wed May 9 07:48:21 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10388469
From: Christoph Hellwig
To: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 24/33] xfs: don't look at buffer heads in xfs_add_to_ioend
Date: Wed, 9 May 2018 09:48:21 +0200
Message-Id: <20180509074830.16196-25-hch@lst.de>
In-Reply-To: <20180509074830.16196-1-hch@lst.de>
References: <20180509074830.16196-1-hch@lst.de>

Calculate all of the information for the bio from the arguments passed
in, without requiring a buffer_head structure.
Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_aops.c | 68 ++++++++++++++++++++++-------------------------
 1 file changed, 32 insertions(+), 36 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 7ebd686cb723..f6d28e6aa911 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -44,7 +44,6 @@ struct xfs_writepage_ctx {
 	struct xfs_bmbt_irec	imap;
 	unsigned int		io_type;
 	struct xfs_ioend	*ioend;
-	sector_t		last_block;
 };
 
 void
@@ -545,11 +544,6 @@ xfs_start_page_writeback(
 	unlock_page(page);
 }
 
-static inline int xfs_bio_add_buffer(struct bio *bio, struct buffer_head *bh)
-{
-	return bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
-}
-
 /*
  * Submit the bio for an ioend. We are passed an ioend with a bio attached to
  * it, and we submit that bio. The ioend may be used for multiple bio
@@ -604,27 +598,20 @@ xfs_submit_ioend(
 	return 0;
 }
 
-static void
-xfs_init_bio_from_bh(
-	struct bio		*bio,
-	struct buffer_head	*bh)
-{
-	bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
-	bio_set_dev(bio, bh->b_bdev);
-}
-
 static struct xfs_ioend *
 xfs_alloc_ioend(
 	struct inode		*inode,
 	unsigned int		type,
 	xfs_off_t		offset,
-	struct buffer_head	*bh)
+	struct block_device	*bdev,
+	sector_t		sector)
 {
 	struct xfs_ioend	*ioend;
 	struct bio		*bio;
 
 	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, xfs_ioend_bioset);
-	xfs_init_bio_from_bh(bio, bh);
+	bio_set_dev(bio, bdev);
+	bio->bi_iter.bi_sector = sector;
 
 	ioend = container_of(bio, struct xfs_ioend, io_inline_bio);
 	INIT_LIST_HEAD(&ioend->io_list);
@@ -649,13 +636,14 @@ static void
 xfs_chain_bio(
 	struct xfs_ioend	*ioend,
 	struct writeback_control *wbc,
-	struct buffer_head	*bh)
+	struct block_device	*bdev,
+	sector_t		sector)
 {
 	struct bio *new;
 
 	new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
-	xfs_init_bio_from_bh(new, bh);
-
+	bio_set_dev(new, bdev);
+	new->bi_iter.bi_sector = sector;
 	bio_chain(ioend->io_bio, new);
 	bio_get(ioend->io_bio);		/* for xfs_destroy_ioend */
 	ioend->io_bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
@@ -665,39 +653,45 @@ xfs_chain_bio(
 }
 
 /*
- * Test to see if we've been building up a completion structure for
- * earlier buffers -- if so, we try to append to this ioend if we
- * can, otherwise we finish off any current ioend and start another.
- * Return the ioend we finished off so that the caller can submit it
- * once it has finished processing the dirty page.
+ * Test to see if we have an existing ioend structure that we could append to
+ * first, otherwise finish off the current ioend and start another.
  */
 STATIC void
 xfs_add_to_ioend(
 	struct inode		*inode,
-	struct buffer_head	*bh,
 	xfs_off_t		offset,
+	struct page		*page,
 	struct xfs_writepage_ctx *wpc,
 	struct writeback_control *wbc,
 	struct list_head	*iolist)
 {
+	struct xfs_inode	*ip = XFS_I(inode);
+	struct xfs_mount	*mp = ip->i_mount;
+	struct block_device	*bdev = xfs_find_bdev_for_inode(inode);
+	unsigned		len = i_blocksize(inode);
+	unsigned		poff = offset & (PAGE_SIZE - 1);
+	sector_t		sector;
+
+	sector = xfs_fsb_to_db(ip, wpc->imap.br_startblock) +
+		((offset - XFS_FSB_TO_B(mp, wpc->imap.br_startoff)) >> 9);
+
 	if (!wpc->ioend || wpc->io_type != wpc->ioend->io_type ||
-	    bh->b_blocknr != wpc->last_block + 1 ||
+	    sector != bio_end_sector(wpc->ioend->io_bio) ||
 	    offset != wpc->ioend->io_offset + wpc->ioend->io_size) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type, offset, bh);
+		wpc->ioend = xfs_alloc_ioend(inode, wpc->io_type, offset,
+				bdev, sector);
 	}
 
 	/*
-	 * If the buffer doesn't fit into the bio we need to allocate a new
-	 * one.  This shouldn't happen more than once for a given buffer.
+	 * If the block doesn't fit into the bio we need to allocate a new
+	 * one.  This shouldn't happen more than once for a given block.
 	 */
-	while (xfs_bio_add_buffer(wpc->ioend->io_bio, bh) != bh->b_size)
-		xfs_chain_bio(wpc->ioend, wbc, bh);
+	while (bio_add_page(wpc->ioend->io_bio, page, len, poff) != len)
+		xfs_chain_bio(wpc->ioend, wbc, bdev, sector);
 
-	wpc->ioend->io_size += bh->b_size;
-	wpc->last_block = bh->b_blocknr;
-	xfs_start_buffer_writeback(bh);
+	wpc->ioend->io_size += len;
 }
 
 STATIC void
@@ -893,7 +887,9 @@ xfs_writepage_map(
 		lock_buffer(bh);
 		xfs_map_at_offset(inode, bh, &wpc->imap, file_offset);
-		xfs_add_to_ioend(inode, bh, file_offset, wpc, wbc, &submit_list);
+		xfs_add_to_ioend(inode, file_offset, page, wpc, wbc,
+				&submit_list);
+		xfs_start_buffer_writeback(bh);
 		count++;
 	}
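
[Editor's note: the heart of this patch is the arithmetic that replaces
bh->b_blocknr: the target sector is derived from the extent mapping and
the file offset, and the page offset passed to bio_add_page() is derived
from the file offset alone. The userspace sketch below models only that
unit math; BLKSIZE/PGSIZE/FSB_TO_B/FSB_TO_DB are made-up stand-ins for
the real xfs_fsb_to_db()/XFS_FSB_TO_B()/PAGE_SIZE, assuming 4k blocks
and pages and 512-byte sectors, not real XFS block mapping.]

#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the conversions used in the patch:
 * 4096-byte filesystem blocks and pages, 512-byte sectors.  Real XFS
 * derives these values from the superblock geometry. */
#define BLKSIZE		4096u
#define PGSIZE		4096u
#define FSB_TO_B(fsb)	((uint64_t)(fsb) * BLKSIZE)
#define FSB_TO_DB(fsb)	((uint64_t)(fsb) * (BLKSIZE >> 9))

/* sector: disk block of the extent start, plus the byte distance from
 * the extent's file offset to this block's file offset, in 512-byte
 * units -- the same shape as the sector computation in the patch. */
static uint64_t map_sector(uint64_t br_startblock, uint64_t br_startoff,
			   uint64_t offset)
{
	return FSB_TO_DB(br_startblock) +
		((offset - FSB_TO_B(br_startoff)) >> 9);
}

/* poff: offset of the block within its page, as handed to bio_add_page(). */
static unsigned page_off(uint64_t offset)
{
	return offset & (PGSIZE - 1);
}

int main(void)
{
	/* Extent: file block 10 onward is mapped to disk block 100 onward. */
	assert(map_sector(100, 10, FSB_TO_B(10)) == 800);	/* 100 * 8 */

	/* One 4k block further into the file: 8 sectors further on disk. */
	assert(map_sector(100, 10, FSB_TO_B(10) + BLKSIZE) == 808);

	/* With block size == page size, every block is page aligned. */
	assert(page_off(FSB_TO_B(10)) == 0);
	return 0;
}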