From patchwork Mon Jun 24 05:52:42 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012323
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/12] list.h: add a list_pop helper
Date: Mon, 24 Jun 2019 07:52:42 +0200
Message-Id: <20190624055253.31183-2-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

We have a very common pattern where we want to delete the first entry
from a list and return it as the properly typed container structure.
Add a list_pop helper to implement this behavior.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 include/linux/list.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/include/linux/list.h b/include/linux/list.h
index e951228db4b2..e07a5f54cc9d 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -500,6 +500,28 @@ static inline void list_splice_tail_init(struct list_head *list,
 	pos__ != head__ ? list_entry(pos__, type, member) : NULL; \
 })
 
+/**
+ * list_pop - delete the first entry from a list and return it
+ * @list:	the list to take the element from.
+ * @type:	the type of the struct this is embedded in.
+ * @member:	the name of the list_head within the struct.
+ *
+ * Note that if the list is empty, it returns NULL.
+ */
+#define list_pop(list, type, member)				\
+({								\
+	struct list_head *head__ = (list);			\
+	struct list_head *pos__ = READ_ONCE(head__->next);	\
+	type *entry__ = NULL;					\
+								\
+	if (pos__ != head__) {					\
+		entry__ = list_entry(pos__, type, member);	\
+		list_del(pos__);				\
+	}							\
+								\
+	entry__;						\
+})
+
 /**
  * list_next_entry - get the next element in list
  * @pos:	the type * to cursor
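[For illustration only, not part of the patch: list_pop replaces the
open-coded delete-first-and-return sequence named in the commit
message. A minimal sketch, assuming a hypothetical struct foo that
embeds a list_head:]

	struct foo {
		int			val;
		struct list_head	entry;
	};

	/* Before: open-coded pop of the first entry. */
	static struct foo *foo_pop_open_coded(struct list_head *head)
	{
		struct foo *f = NULL;

		if (!list_empty(head)) {
			f = list_first_entry(head, struct foo, entry);
			list_del(&f->entry);
		}
		return f;
	}

	/* After: the same operation via the new helper. */
	static struct foo *foo_pop(struct list_head *head)
	{
		return list_pop(head, struct foo, entry);
	}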
From patchwork Mon Jun 24 05:52:43 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012321
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 02/12] xfs: simplify xfs_chain_bio
Date: Mon, 24 Jun 2019 07:52:43 +0200
Message-Id: <20190624055253.31183-3-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

Move setting up the operation and write hint to xfs_alloc_ioend, and
then just copy over all needed information from the previous bio in
xfs_chain_bio and stop passing various parameters to it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_aops.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index a6f0f4761a37..9cceb90e77c5 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -665,7 +665,6 @@ xfs_submit_ioend(
 
 	ioend->io_bio->bi_private = ioend;
 	ioend->io_bio->bi_end_io = xfs_end_bio;
-	ioend->io_bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
 
 	/*
 	 * If we are failing the IO now, just mark the ioend with an
@@ -679,7 +678,6 @@ xfs_submit_ioend(
 		return status;
 	}
 
-	ioend->io_bio->bi_write_hint = ioend->io_inode->i_write_hint;
 	submit_bio(ioend->io_bio);
 	return 0;
 }
@@ -691,7 +689,8 @@ xfs_alloc_ioend(
 	xfs_exntst_t		state,
 	xfs_off_t		offset,
 	struct block_device	*bdev,
-	sector_t		sector)
+	sector_t		sector,
+	struct writeback_control *wbc)
 {
 	struct xfs_ioend	*ioend;
 	struct bio		*bio;
@@ -699,6 +698,8 @@ xfs_alloc_ioend(
 	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, &xfs_ioend_bioset);
 	bio_set_dev(bio, bdev);
 	bio->bi_iter.bi_sector = sector;
+	bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
+	bio->bi_write_hint = inode->i_write_hint;
 
 	ioend = container_of(bio, struct xfs_ioend, io_inline_bio);
 	INIT_LIST_HEAD(&ioend->io_list);
@@ -719,24 +720,22 @@ xfs_alloc_ioend(
  * so that the bi_private linkage is set up in the right direction for the
  * traversal in xfs_destroy_ioend().
  */
-static void
+static struct bio *
 xfs_chain_bio(
-	struct xfs_ioend	*ioend,
-	struct writeback_control *wbc,
-	struct block_device	*bdev,
-	sector_t		sector)
+	struct bio		*prev)
 {
 	struct bio *new;
 
 	new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);
-	bio_set_dev(new, bdev);
-	new->bi_iter.bi_sector = sector;
-	bio_chain(ioend->io_bio, new);
-	bio_get(ioend->io_bio);		/* for xfs_destroy_ioend */
-	ioend->io_bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
-	ioend->io_bio->bi_write_hint = ioend->io_inode->i_write_hint;
-	submit_bio(ioend->io_bio);
-	ioend->io_bio = new;
+	bio_copy_dev(new, prev);
+	new->bi_iter.bi_sector = bio_end_sector(prev);
+	new->bi_opf = prev->bi_opf;
+	new->bi_write_hint = prev->bi_write_hint;
+
+	bio_chain(prev, new);
+	bio_get(prev);		/* for xfs_destroy_ioend */
+	submit_bio(prev);
+	return new;
 }
 
 /*
@@ -771,14 +770,14 @@ xfs_add_to_ioend(
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
 		wpc->ioend = xfs_alloc_ioend(inode, wpc->fork,
-				wpc->imap.br_state, offset, bdev, sector);
+				wpc->imap.br_state, offset, bdev, sector, wbc);
 	}
 
 	if (!__bio_try_merge_page(wpc->ioend->io_bio, page, len, poff, true)) {
 		if (iop)
 			atomic_inc(&iop->write_count);
 		if (bio_full(wpc->ioend->io_bio))
-			xfs_chain_bio(wpc->ioend, wbc, bdev, sector);
+			wpc->ioend->io_bio = xfs_chain_bio(wpc->ioend->io_bio);
 		bio_add_page(wpc->ioend->io_bio, page, len, poff);
 	}
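[For context, not part of the patch: bio_chain(prev, new) defers
prev's completion until new has also completed, so the ioend's end_io
handler still fires exactly once for the whole chain. A hedged sketch
of the pattern the patch converges on, with an illustrative name and
the ioend-specific reference counting omitted:]

	/* Continue a full bio into a fresh one. The continuation inherits
	 * device, op flags and write hint, and starts at the sector where
	 * the full bio ends, so callers need not pass any of them in.
	 */
	static struct bio *continue_bio(struct bio *prev)
	{
		struct bio *new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES);

		bio_copy_dev(new, prev);
		new->bi_iter.bi_sector = bio_end_sector(prev);
		new->bi_opf = prev->bi_opf;
		new->bi_write_hint = prev->bi_write_hint;

		bio_chain(prev, new);	/* prev completes only after new does */
		submit_bio(prev);
		return new;
	}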
From patchwork Mon Jun 24 05:52:44 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012317
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/12] xfs: fix a comment typo in xfs_submit_ioend
Date: Mon, 24 Jun 2019 07:52:44 +0200
Message-Id: <20190624055253.31183-4-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

The fail argument is long gone; update the comment.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_aops.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 9cceb90e77c5..dc60aec0c5a7 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -626,7 +626,7 @@ xfs_map_blocks(
  * reference to the ioend to ensure that the ioend completion is only done once
  * all bios have been submitted and the ioend is really done.
  *
- * If @fail is non-zero, it means that we have a situation where some part of
+ * If @status is non-zero, it means that we have a situation where some part of
  * the submission process has failed after we have marked paged for writeback
  * and unlocked them. In this situation, we need to fail the bio and ioend
  * rather than submit it to IO. This typically only happens on a filesystem
From patchwork Mon Jun 24 05:52:45 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012281
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 04/12] xfs: initialize iomap->flags in xfs_bmbt_to_iomap
Date: Mon, 24 Jun 2019 07:52:45 +0200
Message-Id: <20190624055253.31183-5-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

Currently we don't overwrite the flags field in the iomap in
xfs_bmbt_to_iomap. This works fine with 0-initialized iomaps on the
stack, but is harmful once we want to be able to reuse an iomap in the
writeback code.
Replace the shared parameter with a set of initial flags and thus
ensure the flags field is always reinitialized.

Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_iomap.c | 28 +++++++++++++++++-----------
 fs/xfs/xfs_iomap.h |  2 +-
 fs/xfs/xfs_pnfs.c  |  2 +-
 3 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 63d323916bba..6b29452bfba0 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -57,7 +57,7 @@ xfs_bmbt_to_iomap(
 	struct xfs_inode	*ip,
 	struct iomap		*iomap,
 	struct xfs_bmbt_irec	*imap,
-	bool			shared)
+	u16			flags)
 {
 	struct xfs_mount	*mp = ip->i_mount;
 
@@ -82,12 +82,11 @@ xfs_bmbt_to_iomap(
 	iomap->length = XFS_FSB_TO_B(mp, imap->br_blockcount);
 	iomap->bdev = xfs_find_bdev_for_inode(VFS_I(ip));
 	iomap->dax_dev = xfs_find_daxdev_for_inode(VFS_I(ip));
+	iomap->flags = flags;
 
 	if (xfs_ipincount(ip) &&
 	    (ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
 		iomap->flags |= IOMAP_F_DIRTY;
-	if (shared)
-		iomap->flags |= IOMAP_F_SHARED;
 	return 0;
 }
 
@@ -543,6 +542,7 @@ xfs_file_iomap_begin_delay(
 	struct xfs_iext_cursor	icur, ccur;
 	xfs_fsblock_t		prealloc_blocks = 0;
 	bool			eof = false, cow_eof = false, shared = false;
+	u16			iomap_flags = 0;
 	int			whichfork = XFS_DATA_FORK;
 	int			error = 0;
 
@@ -710,7 +710,7 @@ xfs_file_iomap_begin_delay(
 	 * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch
 	 * them out if the write happens to fail.
	 */
-	iomap->flags |= IOMAP_F_NEW;
+	iomap_flags |= IOMAP_F_NEW;
 	trace_xfs_iomap_alloc(ip, offset, count, whichfork,
 			whichfork == XFS_DATA_FORK ? &imap : &cmap);
 done:
@@ -718,14 +718,17 @@ xfs_file_iomap_begin_delay(
 		if (imap.br_startoff > offset_fsb) {
 			xfs_trim_extent(&cmap, offset_fsb,
 					imap.br_startoff - offset_fsb);
-			error = xfs_bmbt_to_iomap(ip, iomap, &cmap, true);
+			error = xfs_bmbt_to_iomap(ip, iomap, &cmap,
+					IOMAP_F_SHARED);
 			goto out_unlock;
 		}
 		/* ensure we only report blocks we have a reservation for */
 		xfs_trim_extent(&imap, cmap.br_startoff, cmap.br_blockcount);
 		shared = true;
 	}
-	error = xfs_bmbt_to_iomap(ip, iomap, &imap, shared);
+	if (shared)
+		iomap_flags |= IOMAP_F_SHARED;
+	error = xfs_bmbt_to_iomap(ip, iomap, &imap, iomap_flags);
 out_unlock:
 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
 	return error;
@@ -933,6 +936,7 @@ xfs_file_iomap_begin(
 	xfs_fileoff_t		offset_fsb, end_fsb;
 	int			nimaps = 1, error = 0;
 	bool			shared = false;
+	u16			iomap_flags = 0;
 	unsigned		lockmode;
 
 	if (XFS_FORCED_SHUTDOWN(mp))
@@ -1048,11 +1052,13 @@ xfs_file_iomap_begin(
 	if (error)
 		return error;
 
-	iomap->flags |= IOMAP_F_NEW;
+	iomap_flags |= IOMAP_F_NEW;
 	trace_xfs_iomap_alloc(ip, offset, length, XFS_DATA_FORK, &imap);
 
 out_finish:
-	return xfs_bmbt_to_iomap(ip, iomap, &imap, shared);
+	if (shared)
+		iomap_flags |= IOMAP_F_SHARED;
+	return xfs_bmbt_to_iomap(ip, iomap, &imap, iomap_flags);
 
 out_found:
 	ASSERT(nimaps);
@@ -1196,7 +1202,7 @@ xfs_seek_iomap_begin(
 		if (data_fsb < cow_fsb + cmap.br_blockcount)
 			end_fsb = min(end_fsb, data_fsb);
 		xfs_trim_extent(&cmap, offset_fsb, end_fsb);
-		error = xfs_bmbt_to_iomap(ip, iomap, &cmap, true);
+		error = xfs_bmbt_to_iomap(ip, iomap, &cmap, IOMAP_F_SHARED);
 		/*
 		 * This is a COW extent, so we must probe the page cache
 		 * because there could be dirty page cache being backed
@@ -1218,7 +1224,7 @@ xfs_seek_iomap_begin(
 	imap.br_state = XFS_EXT_NORM;
 done:
 	xfs_trim_extent(&imap, offset_fsb, end_fsb);
-	error = xfs_bmbt_to_iomap(ip, iomap, &imap, false);
+	error = xfs_bmbt_to_iomap(ip, iomap, &imap, 0);
 out_unlock:
 	xfs_iunlock(ip, lockmode);
 	return error;
@@ -1264,7 +1270,7 @@ xfs_xattr_iomap_begin(
 	if (error)
 		return error;
 
 	ASSERT(nimaps);
-	return xfs_bmbt_to_iomap(ip, iomap, &imap, false);
+	return xfs_bmbt_to_iomap(ip, iomap, &imap, 0);
 }
 
 const struct iomap_ops xfs_xattr_iomap_ops = {
diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
index 5c2f6aa6d78f..71d0ae460c44 100644
--- a/fs/xfs/xfs_iomap.h
+++ b/fs/xfs/xfs_iomap.h
@@ -16,7 +16,7 @@ int xfs_iomap_write_direct(struct xfs_inode *, xfs_off_t, size_t,
 int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool);
 
 int xfs_bmbt_to_iomap(struct xfs_inode *, struct iomap *,
-		struct xfs_bmbt_irec *, bool shared);
+		struct xfs_bmbt_irec *, u16);
 xfs_extlen_t xfs_eof_alignment(struct xfs_inode *ip, xfs_extlen_t extsize);
 
 static inline xfs_filblks_t
diff --git a/fs/xfs/xfs_pnfs.c b/fs/xfs/xfs_pnfs.c
index bde2c9f56a46..12f664785248 100644
--- a/fs/xfs/xfs_pnfs.c
+++ b/fs/xfs/xfs_pnfs.c
@@ -185,7 +185,7 @@ xfs_fs_map_blocks(
 	}
 	xfs_iunlock(ip, XFS_IOLOCK_EXCL);
 
-	error = xfs_bmbt_to_iomap(ip, iomap, &imap, false);
+	error = xfs_bmbt_to_iomap(ip, iomap, &imap, 0);
 	*device_generation = mp->m_generation;
 	return error;
 out_unlock:
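[For illustration only, not part of the patch: the reuse hazard the
commit message describes is easiest to see in a reduced, runnable
form. The helper names below are hypothetical and the struct is a
stand-in for the real struct iomap:]

	#include <stdio.h>

	#define IOMAP_F_SHARED 0x04

	struct iomap { unsigned int flags; };

	/* Old style: only ORs bits in; correct only for zeroed iomaps. */
	static void to_iomap_old(struct iomap *m, int shared)
	{
		if (shared)
			m->flags |= IOMAP_F_SHARED;
	}

	/* New style: the flags field is reinitialized on every call. */
	static void to_iomap_new(struct iomap *m, unsigned int flags)
	{
		m->flags = flags;
	}

	int main(void)
	{
		struct iomap m = { 0 };

		to_iomap_old(&m, 1);		/* shared mapping */
		to_iomap_old(&m, 0);		/* reuse: stale bit survives */
		printf("old: %#x\n", m.flags);	/* prints 0x4 - wrong */

		to_iomap_new(&m, 0);		/* reuse is now safe */
		printf("new: %#x\n", m.flags);	/* prints 0 */
		return 0;
	}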
From patchwork Mon Jun 24 05:52:46 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012283
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 05/12] xfs: use a struct iomap in xfs_writepage_ctx
Date: Mon, 24 Jun 2019 07:52:46 +0200
Message-Id: <20190624055253.31183-6-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

In preparation for moving the XFS writeback code to fs/iomap.c, switch
it to use struct iomap instead of the XFS-specific struct
xfs_bmbt_irec.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/xfs/libxfs/xfs_bmap.c | 14 +++++--
 fs/xfs/libxfs/xfs_bmap.h |  3 +-
 fs/xfs/xfs_aops.c        | 80 +++++++++++++++++++---------------------
 fs/xfs/xfs_aops.h        |  2 +-
 4 files changed, 50 insertions(+), 49 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
index 4133bc461e3e..de35a0376156 100644
--- a/fs/xfs/libxfs/xfs_bmap.c
+++ b/fs/xfs/libxfs/xfs_bmap.c
@@ -39,6 +39,7 @@
 #include "xfs_ag_resv.h"
 #include "xfs_refcount.h"
 #include "xfs_icache.h"
+#include "xfs_iomap.h"
 
 kmem_zone_t		*xfs_bmap_free_item_zone;
 
@@ -4457,16 +4458,21 @@ int
 xfs_bmapi_convert_delalloc(
 	struct xfs_inode	*ip,
 	int			whichfork,
-	xfs_fileoff_t		offset_fsb,
-	struct xfs_bmbt_irec	*imap,
+	xfs_off_t		offset,
+	struct iomap		*iomap,
 	unsigned int		*seq)
 {
 	struct xfs_ifork	*ifp = XFS_IFORK_PTR(ip, whichfork);
 	struct xfs_mount	*mp = ip->i_mount;
+	xfs_fileoff_t		offset_fsb = XFS_B_TO_FSBT(mp, offset);
 	struct xfs_bmalloca	bma = { NULL };
+	u16			flags = 0;
 	struct xfs_trans	*tp;
 	int			error;
 
+	if (whichfork == XFS_COW_FORK)
+		flags |= IOMAP_F_SHARED;
+
 	/*
 	 * Space for the extent and indirect blocks was reserved when the
 	 * delalloc extent was created so there's no need to do so here.
@@ -4496,7 +4502,7 @@ xfs_bmapi_convert_delalloc(
 	 * the extent.  Just return the real extent at this offset.
	 */
 	if (!isnullstartblock(bma.got.br_startblock)) {
-		*imap = bma.got;
+		xfs_bmbt_to_iomap(ip, iomap, &bma.got, flags);
 		*seq = READ_ONCE(ifp->if_seq);
 		goto out_trans_cancel;
 	}
@@ -4529,7 +4535,7 @@ xfs_bmapi_convert_delalloc(
 	XFS_STATS_INC(mp, xs_xstrat_quick);
 
 	ASSERT(!isnullstartblock(bma.got.br_startblock));
-	*imap = bma.got;
+	xfs_bmbt_to_iomap(ip, iomap, &bma.got, flags);
 	*seq = READ_ONCE(ifp->if_seq);
 
 	if (whichfork == XFS_COW_FORK) {
diff --git a/fs/xfs/libxfs/xfs_bmap.h b/fs/xfs/libxfs/xfs_bmap.h
index 8f597f9abdbe..3c3470f11648 100644
--- a/fs/xfs/libxfs/xfs_bmap.h
+++ b/fs/xfs/libxfs/xfs_bmap.h
@@ -220,8 +220,7 @@ int	xfs_bmapi_reserve_delalloc(struct xfs_inode *ip, int whichfork,
 		struct xfs_bmbt_irec *got, struct xfs_iext_cursor *cur,
 		int eof);
 int	xfs_bmapi_convert_delalloc(struct xfs_inode *ip, int whichfork,
-		xfs_fileoff_t offset_fsb, struct xfs_bmbt_irec *imap,
-		unsigned int *seq);
+		xfs_off_t offset, struct iomap *iomap, unsigned int *seq);
 int	xfs_bmap_add_extent_unwritten_real(struct xfs_trans *tp,
 		struct xfs_inode *ip, int whichfork,
 		struct xfs_iext_cursor *icur, struct xfs_btree_cur **curp,
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index dc60aec0c5a7..93a760f13017 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -27,7 +27,7 @@
 * structure owned by writepages passed to individual writepage calls
 */
 struct xfs_writepage_ctx {
-	struct xfs_bmbt_irec    imap;
+	struct iomap		iomap;
 	int			fork;
 	unsigned int		data_seq;
 	unsigned int		cow_seq;
@@ -265,7 +265,7 @@ xfs_end_ioend(
	 */
 	if (ioend->io_fork == XFS_COW_FORK)
 		error = xfs_reflink_end_cow(ip, offset, size);
-	else if (ioend->io_state == XFS_EXT_UNWRITTEN)
+	else if (ioend->io_type == IOMAP_UNWRITTEN)
 		error = xfs_iomap_write_unwritten(ip, offset, size, false);
 	else
 		ASSERT(!xfs_ioend_is_append(ioend) || ioend->io_append_trans);
@@ -300,8 +300,8 @@ xfs_ioend_can_merge(
 		return false;
 	if ((ioend->io_fork == XFS_COW_FORK) ^ (next->io_fork == XFS_COW_FORK))
 		return false;
-	if ((ioend->io_state == XFS_EXT_UNWRITTEN) ^
-	    (next->io_state == XFS_EXT_UNWRITTEN))
+	if ((ioend->io_type == IOMAP_UNWRITTEN) ^
+	    (next->io_type == IOMAP_UNWRITTEN))
 		return false;
 	if (ioend->io_offset + ioend->io_size != next->io_offset)
 		return false;
@@ -395,7 +395,7 @@ xfs_end_bio(
 	unsigned long		flags;
 
 	if (ioend->io_fork == XFS_COW_FORK ||
-	    ioend->io_state == XFS_EXT_UNWRITTEN ||
+	    ioend->io_type == IOMAP_UNWRITTEN ||
 	    ioend->io_append_trans != NULL) {
 		spin_lock_irqsave(&ip->i_ioend_lock, flags);
 		if (list_empty(&ip->i_ioend_list))
@@ -415,10 +415,10 @@ static bool
 xfs_imap_valid(
 	struct xfs_writepage_ctx	*wpc,
 	struct xfs_inode		*ip,
-	xfs_fileoff_t			offset_fsb)
+	loff_t				offset)
 {
-	if (offset_fsb < wpc->imap.br_startoff ||
-	    offset_fsb >= wpc->imap.br_startoff + wpc->imap.br_blockcount)
+	if (offset < wpc->iomap.offset ||
+	    offset >= wpc->iomap.offset + wpc->iomap.length)
 		return false;
 	/*
 	 * If this is a COW mapping, it is sufficient to check that the mapping
@@ -445,7 +445,7 @@ xfs_imap_valid(
 
 /*
 * Pass in a dellalloc extent and convert it to real extents, return the real
- * extent that maps offset_fsb in wpc->imap.
+ * extent that maps offset_fsb in wpc->iomap.
 *
 * The current page is held locked so nothing could have removed the block
 * backing offset_fsb, although it could have moved from the COW to the data
@@ -455,23 +455,23 @@ static int
 xfs_convert_blocks(
 	struct xfs_writepage_ctx *wpc,
 	struct xfs_inode	*ip,
-	xfs_fileoff_t		offset_fsb)
+	loff_t			offset)
 {
 	int			error;
 
 	/*
-	 * Attempt to allocate whatever delalloc extent currently backs
-	 * offset_fsb and put the result into wpc->imap.  Allocate in a loop
-	 * because it may take several attempts to allocate real blocks for a
-	 * contiguous delalloc extent if free space is sufficiently fragmented.
+	 * Attempt to allocate whatever delalloc extent currently backs offset
+	 * and put the result into wpc->imap.  Allocate in a loop because it may
+	 * take several attempts to allocate real blocks for a contiguous
+	 * delalloc extent if free space is sufficiently fragmented.
	 */
 	do {
-		error = xfs_bmapi_convert_delalloc(ip, wpc->fork, offset_fsb,
-				&wpc->imap, wpc->fork == XFS_COW_FORK ?
+		error = xfs_bmapi_convert_delalloc(ip, wpc->fork, offset,
+				&wpc->iomap, wpc->fork == XFS_COW_FORK ?
					&wpc->cow_seq : &wpc->data_seq);
 		if (error)
 			return error;
-	} while (wpc->imap.br_startoff + wpc->imap.br_blockcount <= offset_fsb);
+	} while (wpc->iomap.offset + wpc->iomap.length <= offset);
 
 	return 0;
 }
@@ -511,7 +511,7 @@ xfs_map_blocks(
	 * against concurrent updates and provides a memory barrier on the way
	 * out that ensures that we always see the current value.
	 */
-	if (xfs_imap_valid(wpc, ip, offset_fsb))
+	if (xfs_imap_valid(wpc, ip, offset))
 		return 0;
 
 	/*
@@ -544,7 +544,7 @@ xfs_map_blocks(
	 * No COW extent overlap. Revalidate now that we may have updated
	 * ->cow_seq. If the data mapping is still valid, we're done.
	 */
-	if (xfs_imap_valid(wpc, ip, offset_fsb)) {
+	if (xfs_imap_valid(wpc, ip, offset)) {
 		xfs_iunlock(ip, XFS_ILOCK_SHARED);
 		return 0;
 	}
@@ -584,11 +584,11 @@ xfs_map_blocks(
 	    isnullstartblock(imap.br_startblock))
 		goto allocate_blocks;
 
-	wpc->imap = imap;
+	xfs_bmbt_to_iomap(ip, &wpc->iomap, &imap, 0);
 	trace_xfs_map_blocks_found(ip, offset, count, wpc->fork, &imap);
 	return 0;
 allocate_blocks:
-	error = xfs_convert_blocks(wpc, ip, offset_fsb);
+	error = xfs_convert_blocks(wpc, ip, offset);
 	if (error) {
 		/*
		 * If we failed to find the extent in the COW fork we might have
@@ -608,12 +608,15 @@ xfs_map_blocks(
	 * original delalloc one.  Trim the return extent to the next COW
	 * boundary again to force a re-lookup.
	 */
-	if (wpc->fork != XFS_COW_FORK && cow_fsb != NULLFILEOFF &&
-	    cow_fsb < wpc->imap.br_startoff + wpc->imap.br_blockcount)
-		wpc->imap.br_blockcount = cow_fsb - wpc->imap.br_startoff;
+	if (wpc->fork != XFS_COW_FORK && cow_fsb != NULLFILEOFF) {
+		loff_t		cow_offset = XFS_FSB_TO_B(mp, cow_fsb);
+
+		if (cow_offset < wpc->iomap.offset + wpc->iomap.length)
+			wpc->iomap.length = cow_offset - wpc->iomap.offset;
+	}
 
-	ASSERT(wpc->imap.br_startoff <= offset_fsb);
-	ASSERT(wpc->imap.br_startoff + wpc->imap.br_blockcount > offset_fsb);
+	ASSERT(wpc->iomap.offset <= offset);
+	ASSERT(wpc->iomap.offset + wpc->iomap.length > offset);
 	trace_xfs_map_blocks_alloc(ip, offset, count, wpc->fork, &imap);
 	return 0;
 }
@@ -658,7 +661,7 @@ xfs_submit_ioend(
 	/* Reserve log space if we might write beyond the on-disk inode size. */
 	if (!status &&
 	    (ioend->io_fork == XFS_COW_FORK ||
-	     ioend->io_state != XFS_EXT_UNWRITTEN) &&
+	     ioend->io_type != IOMAP_UNWRITTEN) &&
 	    xfs_ioend_is_append(ioend) &&
 	    !ioend->io_append_trans)
 		status = xfs_setfilesize_trans_alloc(ioend);
 
 	ioend->io_bio->bi_private = ioend;
 	ioend->io_bio->bi_end_io = xfs_end_bio;
@@ -685,10 +688,8 @@
 static struct xfs_ioend *
 xfs_alloc_ioend(
 	struct inode		*inode,
-	int			fork,
-	xfs_exntst_t		state,
+	struct xfs_writepage_ctx *wpc,
 	xfs_off_t		offset,
-	struct block_device	*bdev,
 	sector_t		sector,
 	struct writeback_control *wbc)
 {
 	struct xfs_ioend	*ioend;
 	struct bio		*bio;
 
 	bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, &xfs_ioend_bioset);
-	bio_set_dev(bio, bdev);
+	bio_set_dev(bio, wpc->iomap.bdev);
 	bio->bi_iter.bi_sector = sector;
 	bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc);
 	bio->bi_write_hint = inode->i_write_hint;
 
 	ioend = container_of(bio, struct xfs_ioend, io_inline_bio);
 	INIT_LIST_HEAD(&ioend->io_list);
-	ioend->io_fork = fork;
-	ioend->io_state = state;
+	ioend->io_fork = wpc->fork;
+	ioend->io_type = wpc->iomap.type;
 	ioend->io_inode = inode;
 	ioend->io_size = 0;
 	ioend->io_offset = offset;
@@ -752,25 +753,20 @@ xfs_add_to_ioend(
 	struct writeback_control *wbc,
 	struct list_head	*iolist)
 {
-	struct xfs_inode	*ip = XFS_I(inode);
-	struct xfs_mount	*mp = ip->i_mount;
-	struct block_device	*bdev = xfs_find_bdev_for_inode(inode);
 	unsigned		len = i_blocksize(inode);
 	unsigned		poff = offset & (PAGE_SIZE - 1);
 	sector_t		sector;
 
-	sector = xfs_fsb_to_db(ip, wpc->imap.br_startblock) +
-		((offset - XFS_FSB_TO_B(mp, wpc->imap.br_startoff)) >> 9);
+	sector = (wpc->iomap.addr + offset - wpc->iomap.offset) >> 9;
 
 	if (!wpc->ioend ||
 	    wpc->fork != wpc->ioend->io_fork ||
-	    wpc->imap.br_state != wpc->ioend->io_state ||
+	    wpc->iomap.type != wpc->ioend->io_type ||
 	    sector != bio_end_sector(wpc->ioend->io_bio) ||
 	    offset != wpc->ioend->io_offset + wpc->ioend->io_size) {
 		if (wpc->ioend)
 			list_add(&wpc->ioend->io_list, iolist);
-		wpc->ioend = xfs_alloc_ioend(inode, wpc->fork,
-				wpc->imap.br_state, offset, bdev, sector, wbc);
+		wpc->ioend = xfs_alloc_ioend(inode, wpc, offset, sector, wbc);
 	}
 
 	if (!__bio_try_merge_page(wpc->ioend->io_bio, page, len, poff, true)) {
@@ -879,7 +875,7 @@ xfs_writepage_map(
 		error = xfs_map_blocks(wpc, inode, file_offset);
 		if (error)
 			break;
-		if (wpc->imap.br_startblock == HOLESTARTBLOCK)
+		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
 		xfs_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
 				&submit_list);
diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
index f62b03186c62..72e30d1c3bdf 100644
--- a/fs/xfs/xfs_aops.h
+++ b/fs/xfs/xfs_aops.h
@@ -14,7 +14,7 @@ extern struct bio_set xfs_ioend_bioset;
 struct xfs_ioend {
 	struct list_head	io_list;	/* next ioend in chain */
 	int			io_fork;	/* inode fork written back */
-	xfs_exntst_t		io_state;	/* extent state */
+	u16			io_type;
 	struct inode		*io_inode;	/* file being written to */
 	size_t			io_size;	/* size of the extent */
 	xfs_off_t		io_offset;	/* offset in the file */
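[For context, not part of the patch: one practical effect of the
switch is that the cached-mapping check works in plain byte offsets
rather than file-system blocks. A minimal sketch of the half-open
range test that xfs_imap_valid now reduces to; the real function also
revalidates the data/COW sequence counters:]

	#include <linux/iomap.h>

	/* A struct iomap covers the half-open byte range
	 * [iomap->offset, iomap->offset + iomap->length).
	 */
	static bool iomap_covers_offset(const struct iomap *iomap, loff_t offset)
	{
		return offset >= iomap->offset &&
		       offset < iomap->offset + iomap->length;
	}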
From patchwork Mon Jun 24 05:52:47 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012285
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 06/12] xfs: remove XFS_TRANS_NOFS
Date: Mon, 24 Jun 2019 07:52:47 +0200
Message-Id: <20190624055253.31183-7-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

Instead of a magic flag for xfs_trans_alloc, just ensure all callers
that can't reclaim through the file system use memalloc_nofs_save to
set the per-task nofs flag.
Signed-off-by: Christoph Hellwig
---
 fs/xfs/libxfs/xfs_shared.h |  1 -
 fs/xfs/xfs_aops.c          | 12 +++++++++---
 fs/xfs/xfs_file.c          | 12 +++++++++---
 fs/xfs/xfs_iomap.c         |  2 +-
 fs/xfs/xfs_reflink.c       |  4 ++--
 fs/xfs/xfs_trans.c         |  4 +---
 6 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/fs/xfs/libxfs/xfs_shared.h b/fs/xfs/libxfs/xfs_shared.h
index 4e909791aeac..1f2b5a0c71b4 100644
--- a/fs/xfs/libxfs/xfs_shared.h
+++ b/fs/xfs/libxfs/xfs_shared.h
@@ -65,7 +65,6 @@ void	xfs_log_get_max_trans_res(struct xfs_mount *mp,
 #define XFS_TRANS_DQ_DIRTY	0x10	/* at least one dquot in trx dirty */
 #define XFS_TRANS_RESERVE	0x20	/* OK to use reserved data blocks */
 #define XFS_TRANS_NO_WRITECOUNT 0x40	/* do not elevate SB writecount */
-#define XFS_TRANS_NOFS		0x80	/* pass KM_NOFS to kmem_alloc */
 /*
 * LOWMODE is used by the allocator to activate the lowspace algorithm - when
 * free space is running low the extent allocator may choose to allocate an
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 93a760f13017..633baaaff7ae 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -138,8 +138,7 @@ xfs_setfilesize_trans_alloc(
 	struct xfs_trans	*tp;
 	int			error;
 
-	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0,
-				XFS_TRANS_NOFS, &tp);
+	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp);
 	if (error)
 		return error;
 
@@ -236,6 +235,7 @@ STATIC void
 xfs_end_ioend(
 	struct xfs_ioend	*ioend)
 {
+	unsigned int		nofs_flag = memalloc_nofs_save();
 	struct list_head	ioend_list;
 	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
 	xfs_off_t		offset = ioend->io_offset;
@@ -282,6 +282,8 @@ xfs_end_ioend(
 		list_del_init(&ioend->io_list);
 		xfs_destroy_ioend(ioend, error);
 	}
+
+	memalloc_nofs_restore(nofs_flag);
 }
 
 /*
@@ -663,8 +665,12 @@ xfs_submit_ioend(
 	    (ioend->io_fork == XFS_COW_FORK ||
 	     ioend->io_type != IOMAP_UNWRITTEN) &&
 	    xfs_ioend_is_append(ioend) &&
-	    !ioend->io_append_trans)
+	    !ioend->io_append_trans) {
+		unsigned nofs_flag = memalloc_nofs_save();
+
 		status = xfs_setfilesize_trans_alloc(ioend);
+		memalloc_nofs_restore(nofs_flag);
+	}
 
 	ioend->io_bio->bi_private = ioend;
 	ioend->io_bio->bi_end_io = xfs_end_bio;
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 916a35cae5e9..f2d806ef8f06 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -379,6 +379,7 @@ xfs_dio_write_end_io(
 	struct inode		*inode = file_inode(iocb->ki_filp);
 	struct xfs_inode	*ip = XFS_I(inode);
 	loff_t			offset = iocb->ki_pos;
+	unsigned int		nofs_flag;
 	int			error = 0;
 
 	trace_xfs_end_io_direct_write(ip, offset, size);
@@ -395,10 +396,11 @@ xfs_dio_write_end_io(
	 */
 	XFS_STATS_ADD(ip->i_mount, xs_write_bytes, size);
 
+	nofs_flag = memalloc_nofs_save();
 	if (flags & IOMAP_DIO_COW) {
 		error = xfs_reflink_end_cow(ip, offset, size);
 		if (error)
-			return error;
+			goto out;
 	}
 
 	/*
@@ -407,8 +409,10 @@ xfs_dio_write_end_io(
	 * earlier allows a racing dio read to find unwritten extents before
	 * they are converted.
	 */
-	if (flags & IOMAP_DIO_UNWRITTEN)
-		return xfs_iomap_write_unwritten(ip, offset, size, true);
+	if (flags & IOMAP_DIO_UNWRITTEN) {
+		error = xfs_iomap_write_unwritten(ip, offset, size, true);
+		goto out;
+	}
 
 	/*
	 * We need to update the in-core inode size here so that we don't end up
@@ -430,6 +434,8 @@ xfs_dio_write_end_io(
 		spin_unlock(&ip->i_flags_lock);
 	}
 
+out:
+	memalloc_nofs_restore(nofs_flag);
 	return error;
 }
 
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 6b29452bfba0..461ea023b910 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -782,7 +782,7 @@ xfs_iomap_write_unwritten(
	 * complete here and might deadlock on the iolock.
	 */
 	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0,
-			XFS_TRANS_RESERVE | XFS_TRANS_NOFS, &tp);
+			XFS_TRANS_RESERVE, &tp);
 	if (error)
 		return error;
 
diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
index 680ae7662a78..0b23c2b29609 100644
--- a/fs/xfs/xfs_reflink.c
+++ b/fs/xfs/xfs_reflink.c
@@ -572,7 +572,7 @@ xfs_reflink_cancel_cow_range(
 
 	/* Start a rolling transaction to remove the mappings */
 	error = xfs_trans_alloc(ip->i_mount, &M_RES(ip->i_mount)->tr_write,
-			0, 0, XFS_TRANS_NOFS, &tp);
+			0, 0, 0, &tp);
 	if (error)
 		goto out;
 
@@ -631,7 +631,7 @@ xfs_reflink_end_cow_extent(
 	resblks = XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK);
 	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0,
-			XFS_TRANS_RESERVE | XFS_TRANS_NOFS, &tp);
+			XFS_TRANS_RESERVE, &tp);
 	if (error)
 		return error;
 
diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
index 0746b329a937..21228d7455af 100644
--- a/fs/xfs/xfs_trans.c
+++ b/fs/xfs/xfs_trans.c
@@ -264,9 +264,7 @@ xfs_trans_alloc(
	 * GFP_NOFS allocation context so that we avoid lockdep false positives
	 * by doing GFP_KERNEL allocations inside sb_start_intwrite().
	 */
-	tp = kmem_zone_zalloc(xfs_trans_zone,
-		(flags & XFS_TRANS_NOFS) ? KM_NOFS : KM_SLEEP);
-
+	tp = kmem_zone_zalloc(xfs_trans_zone, KM_SLEEP);
 	if (!(flags & XFS_TRANS_NO_WRITECOUNT))
 		sb_start_intwrite(mp->m_super);
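[For context, not part of the patch: the replacement mechanism is the
scoped per-task NOFS marking from <linux/sched/mm.h>. A minimal
sketch, assuming a hypothetical caller and workload; the function
names inside the comment are the real kernel API:]

	#include <linux/sched/mm.h>

	/*
	 * Between memalloc_nofs_save() and memalloc_nofs_restore(), every
	 * allocation - even a GFP_KERNEL one made deep inside a helper
	 * such as xfs_trans_alloc() - is implicitly treated as GFP_NOFS,
	 * so direct reclaim cannot recurse into the file system.
	 */
	static void end_io_work(void)		/* hypothetical caller */
	{
		unsigned int nofs_flag = memalloc_nofs_save();

		complete_ioend();	/* hypothetical: allocates transactions */

		memalloc_nofs_restore(nofs_flag);
	}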
From patchwork Mon Jun 24 05:52:48 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012313
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 07/12] xfs: don't preallocate a transaction for file size updates
Date: Mon, 24 Jun 2019 07:52:48 +0200
Message-Id: <20190624055253.31183-8-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

We have historically decided that we want to preallocate the xfs_trans
structure at writeback time so that we don't have to allocate one in
the I/O completion handler. But we already treat unwritten extent and
COW fork conversions differently, which proves that the transaction
allocations in the end I/O handler are not a problem.

Removing the preallocation gets rid of a lot of corner case code, and
also ensures we only allocate and log a transaction when actually
required, as the ioend merging can reduce the number of actual i_size
updates significantly.

Signed-off-by: Christoph Hellwig
---
 fs/xfs/xfs_aops.c | 110 +++++-----------------------------------------
 fs/xfs/xfs_aops.h |   1 -
 2 files changed, 12 insertions(+), 99 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 633baaaff7ae..017b87b7765f 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -130,44 +130,23 @@ static inline bool xfs_ioend_is_append(struct xfs_ioend *ioend)
 		XFS_I(ioend->io_inode)->i_d.di_size;
 }
 
-STATIC int
-xfs_setfilesize_trans_alloc(
-	struct xfs_ioend	*ioend)
-{
-	struct xfs_mount	*mp = XFS_I(ioend->io_inode)->i_mount;
-	struct xfs_trans	*tp;
-	int			error;
-
-	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp);
-	if (error)
-		return error;
-
-	ioend->io_append_trans = tp;
-
-	/*
-	 * We may pass freeze protection with a transaction.  So tell lockdep
-	 * we released it.
-	 */
-	__sb_writers_release(ioend->io_inode->i_sb, SB_FREEZE_FS);
-	/*
-	 * We hand off the transaction to the completion thread now, so
-	 * clear the flag here.
-	 */
-	current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
-	return 0;
-}
-
 /*
 * Update on-disk file size now that data has been written to disk.
 */
-STATIC int
-__xfs_setfilesize(
+int
+xfs_setfilesize(
 	struct xfs_inode	*ip,
-	struct xfs_trans	*tp,
 	xfs_off_t		offset,
 	size_t			size)
 {
+	struct xfs_mount	*mp = ip->i_mount;
+	struct xfs_trans	*tp;
 	xfs_fsize_t		isize;
+	int			error;
+
+	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp);
+	if (error)
+		return error;
 
 	xfs_ilock(ip, XFS_ILOCK_EXCL);
 	isize = xfs_new_eof(ip, offset + size);
@@ -186,48 +165,6 @@ __xfs_setfilesize(
 	return xfs_trans_commit(tp);
 }
 
-int
-xfs_setfilesize(
-	struct xfs_inode	*ip,
-	xfs_off_t		offset,
-	size_t			size)
-{
-	struct xfs_mount	*mp = ip->i_mount;
-	struct xfs_trans	*tp;
-	int			error;
-
-	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp);
-	if (error)
-		return error;
-
-	return __xfs_setfilesize(ip, tp, offset, size);
-}
-
-STATIC int
-xfs_setfilesize_ioend(
-	struct xfs_ioend	*ioend,
-	int			error)
-{
-	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
-	struct xfs_trans	*tp = ioend->io_append_trans;
-
-	/*
-	 * The transaction may have been allocated in the I/O submission thread,
-	 * thus we need to mark ourselves as being in a transaction manually.
-	 * Similarly for freeze protection.
-	 */
-	current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS);
-	__sb_writers_acquired(VFS_I(ip)->i_sb, SB_FREEZE_FS);
-
-	/* we abort the update if there was an IO error */
-	if (error) {
-		xfs_trans_cancel(tp);
-		return error;
-	}
-
-	return __xfs_setfilesize(ip, tp, ioend->io_offset, ioend->io_size);
-}
-
 /*
 * IO write completion.
 */
@@ -267,12 +204,9 @@ xfs_end_ioend(
 		error = xfs_reflink_end_cow(ip, offset, size);
 	else if (ioend->io_type == IOMAP_UNWRITTEN)
 		error = xfs_iomap_write_unwritten(ip, offset, size, false);
-	else
-		ASSERT(!xfs_ioend_is_append(ioend) || ioend->io_append_trans);
-
+	if (!error && xfs_ioend_is_append(ioend))
+		error = xfs_setfilesize(ip, offset, size);
 done:
-	if (ioend->io_append_trans)
-		error = xfs_setfilesize_ioend(ioend, error);
 	list_replace_init(&ioend->io_list, &ioend_list);
 	xfs_destroy_ioend(ioend, error);
 
@@ -307,8 +241,6 @@ xfs_ioend_can_merge(
 		return false;
 	if (ioend->io_offset + ioend->io_size != next->io_offset)
 		return false;
-	if (xfs_ioend_is_append(ioend) != xfs_ioend_is_append(next))
-		return false;
 	return true;
 }
 
@@ -320,7 +252,6 @@ xfs_ioend_try_merge(
 	struct xfs_ioend	*next_ioend;
 	int			ioend_error;
-	int			error;
 
 	if (list_empty(more_ioends))
 		return;
@@ -334,10 +265,6 @@ xfs_ioend_try_merge(
 			break;
 		list_move_tail(&next_ioend->io_list, &ioend->io_list);
 		ioend->io_size += next_ioend->io_size;
-		if (ioend->io_append_trans) {
-			error = xfs_setfilesize_ioend(next_ioend, 1);
-			ASSERT(error == 1);
-		}
 	}
 }
 
@@ -398,7 +325,7 @@ xfs_end_bio(
 
 	if (ioend->io_fork == XFS_COW_FORK ||
 	    ioend->io_type == IOMAP_UNWRITTEN ||
-	    ioend->io_append_trans != NULL) {
+	    xfs_ioend_is_append(ioend)) {
 		spin_lock_irqsave(&ip->i_ioend_lock, flags);
 		if (list_empty(&ip->i_ioend_list))
 			WARN_ON_ONCE(!queue_work(mp->m_unwritten_workqueue,
@@ -660,18 +587,6 @@ xfs_submit_ioend(
 		memalloc_nofs_restore(nofs_flag);
 	}
 
-	/* Reserve log space if we might write beyond the on-disk inode size. */
-	if (!status &&
-	    (ioend->io_fork == XFS_COW_FORK ||
-	     ioend->io_type != IOMAP_UNWRITTEN) &&
-	    xfs_ioend_is_append(ioend) &&
-	    !ioend->io_append_trans) {
-		unsigned nofs_flag = memalloc_nofs_save();
-
-		status = xfs_setfilesize_trans_alloc(ioend);
-		memalloc_nofs_restore(nofs_flag);
-	}
-
 	ioend->io_bio->bi_private = ioend;
 	ioend->io_bio->bi_end_io = xfs_end_bio;
@@ -715,7 +630,6 @@ xfs_alloc_ioend(
 	ioend->io_inode = inode;
 	ioend->io_size = 0;
 	ioend->io_offset = offset;
-	ioend->io_append_trans = NULL;
 	ioend->io_bio = bio;
 	return ioend;
 }
diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
index 72e30d1c3bdf..23c087f0bcbf 100644
--- a/fs/xfs/xfs_aops.h
+++ b/fs/xfs/xfs_aops.h
@@ -18,7 +18,6 @@ struct xfs_ioend {
 	struct inode		*io_inode;	/* file being written to */
 	size_t			io_size;	/* size of the extent */
 	xfs_off_t		io_offset;	/* offset in the file */
-	struct xfs_trans	*io_append_trans;/* xact. for size update */
 	struct bio		*io_bio;	/* bio being built */
 	struct bio		io_inline_bio;	/* MUST BE LAST! */
 };
From patchwork Mon Jun 24 05:52:49 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012289
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 08/12] xfs: simplify xfs_ioend_can_merge
Date: Mon, 24 Jun 2019 07:52:49 +0200
Message-Id: <20190624055253.31183-9-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

Compare the block layer status directly instead of converting it to
an errno first.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_aops.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 017b87b7765f..acbd73976067 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -226,13 +226,9 @@ xfs_end_ioend(
 static bool
 xfs_ioend_can_merge(
 	struct xfs_ioend	*ioend,
-	int			ioend_error,
 	struct xfs_ioend	*next)
 {
-	int			next_error;
-
-	next_error = blk_status_to_errno(next->io_bio->bi_status);
-	if (ioend_error != next_error)
+	if (ioend->io_bio->bi_status != next->io_bio->bi_status)
 		return false;
 	if ((ioend->io_fork == XFS_COW_FORK) ^ (next->io_fork == XFS_COW_FORK))
 		return false;
@@ -251,17 +247,11 @@ xfs_ioend_try_merge(
 	struct list_head	*more_ioends)
 {
 	struct xfs_ioend	*next_ioend;
-	int			ioend_error;
-
-	if (list_empty(more_ioends))
-		return;
-
-	ioend_error = blk_status_to_errno(ioend->io_bio->bi_status);
 
 	while (!list_empty(more_ioends)) {
 		next_ioend = list_first_entry(more_ioends, struct xfs_ioend,
 				io_list);
-		if (!xfs_ioend_can_merge(ioend, ioend_error, next_ioend))
+		if (!xfs_ioend_can_merge(ioend, next_ioend))
 			break;
 		list_move_tail(&next_ioend->io_list, &ioend->io_list);
 		ioend->io_size += next_ioend->io_size;

From patchwork Mon Jun 24 05:52:50 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012305
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/12] xfs: refactor the ioend merging code
Date: Mon, 24 Jun 2019 07:52:50 +0200
Message-Id: <20190624055253.31183-10-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

Introduce two nicely abstracted helpers, which can be moved to the
iomap code later. Also use list_pop and list_first_entry_or_null to
simplify the code a bit.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_aops.c | 66 ++++++++++++++++++++++++++---------------------
 1 file changed, 36 insertions(+), 30 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index acbd73976067..5d302ebe2a33 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -121,6 +121,19 @@ xfs_destroy_ioend(
 	}
 }
 
+static void
+xfs_destroy_ioends(
+	struct xfs_ioend	*ioend,
+	int			error)
+{
+	struct list_head	tmp;
+
+	list_replace_init(&ioend->io_list, &tmp);
+	xfs_destroy_ioend(ioend, error);
+	while ((ioend = list_pop(&tmp, struct xfs_ioend, io_list)))
+		xfs_destroy_ioend(ioend, error);
+}
+
 /*
  * Fast and loose check if this write could update the on-disk inode size.
  */
@@ -173,7 +186,6 @@ xfs_end_ioend(
 	struct xfs_ioend	*ioend)
 {
 	unsigned int		nofs_flag = memalloc_nofs_save();
-	struct list_head	ioend_list;
 	struct xfs_inode	*ip = XFS_I(ioend->io_inode);
 	xfs_off_t		offset = ioend->io_offset;
 	size_t			size = ioend->io_size;
@@ -207,16 +219,7 @@ xfs_end_ioend(
 	if (!error && xfs_ioend_is_append(ioend))
 		error = xfs_setfilesize(ip, offset, size);
 done:
-	list_replace_init(&ioend->io_list, &ioend_list);
-	xfs_destroy_ioend(ioend, error);
-
-	while (!list_empty(&ioend_list)) {
-		ioend = list_first_entry(&ioend_list, struct xfs_ioend,
-				io_list);
-		list_del_init(&ioend->io_list);
-		xfs_destroy_ioend(ioend, error);
-	}
-
+	xfs_destroy_ioends(ioend, error);
 	memalloc_nofs_restore(nofs_flag);
 }
 
@@ -246,15 +249,16 @@ xfs_ioend_try_merge(
 	struct xfs_ioend	*ioend,
 	struct list_head	*more_ioends)
 {
-	struct xfs_ioend	*next_ioend;
+	struct xfs_ioend	*next;
 
-	while (!list_empty(more_ioends)) {
-		next_ioend = list_first_entry(more_ioends, struct xfs_ioend,
-				io_list);
-		if (!xfs_ioend_can_merge(ioend, next_ioend))
+	INIT_LIST_HEAD(&ioend->io_list);
+
+	while ((next = list_first_entry_or_null(more_ioends, struct xfs_ioend,
+			io_list))) {
+		if (!xfs_ioend_can_merge(ioend, next))
 			break;
-		list_move_tail(&next_ioend->io_list, &ioend->io_list);
-		ioend->io_size += next_ioend->io_size;
+		list_move_tail(&next->io_list, &ioend->io_list);
+		ioend->io_size += next->io_size;
 	}
 }
 
@@ -277,29 +281,31 @@ xfs_ioend_compare(
 	return 0;
 }
 
+static void
+xfs_sort_ioends(
+	struct list_head	*ioend_list)
+{
+	list_sort(NULL, ioend_list, xfs_ioend_compare);
+}
+
 /* Finish all pending io completions.
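 *
 * (Editor's example, not part of the original patch: if ioends for the
 * file ranges [0,4M), [8M,12M) and [4M,8M) complete out of order,
 * sorting by io_offset yields [0,4M), [4M,8M), [8M,12M); provided
 * their bio status, type and COW-fork state match, xfs_ioend_try_merge
 * then collapses all three into a single [0,12M) completion.)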
 */
 void
 xfs_end_io(
 	struct work_struct	*work)
 {
-	struct xfs_inode	*ip;
+	struct xfs_inode	*ip =
+		container_of(work, struct xfs_inode, i_ioend_work);
 	struct xfs_ioend	*ioend;
-	struct list_head	completion_list;
+	struct list_head	tmp;
 	unsigned long		flags;
 
-	ip = container_of(work, struct xfs_inode, i_ioend_work);
-
 	spin_lock_irqsave(&ip->i_ioend_lock, flags);
-	list_replace_init(&ip->i_ioend_list, &completion_list);
+	list_replace_init(&ip->i_ioend_list, &tmp);
 	spin_unlock_irqrestore(&ip->i_ioend_lock, flags);
 
-	list_sort(NULL, &completion_list, xfs_ioend_compare);
-
-	while (!list_empty(&completion_list)) {
-		ioend = list_first_entry(&completion_list, struct xfs_ioend,
-				io_list);
-		list_del_init(&ioend->io_list);
-		xfs_ioend_try_merge(ioend, &completion_list);
+	xfs_sort_ioends(&tmp);
+	while ((ioend = list_pop(&tmp, struct xfs_ioend, io_list))) {
+		xfs_ioend_try_merge(ioend, &tmp);
 		xfs_end_ioend(ioend);
 	}
 }

From patchwork Mon Jun 24 05:52:51 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11012293
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 10/12] xfs: remove the fork fields in the writepage_ctx and ioend
Date: Mon, 24 Jun 2019 07:52:51 +0200
Message-Id: <20190624055253.31183-11-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

In preparation for moving the writeback code to iomap.c, replace the
XFS-specific COW fork concept with the iomap IOMAP_F_SHARED flag.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/xfs/xfs_aops.c | 40 +++++++++++++++++++++-------------------
 fs/xfs/xfs_aops.h |  2 +-
 2 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 5d302ebe2a33..d9a7a9e6b912 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -28,7 +28,6 @@
  */
 struct xfs_writepage_ctx {
 	struct iomap		iomap;
-	int			fork;
 	unsigned int		data_seq;
 	unsigned int		cow_seq;
 	struct xfs_ioend	*ioend;
@@ -204,7 +203,7 @@ xfs_end_ioend(
 	 */
 	error = blk_status_to_errno(ioend->io_bio->bi_status);
 	if (unlikely(error)) {
-		if (ioend->io_fork == XFS_COW_FORK)
+		if (ioend->io_flags & IOMAP_F_SHARED)
 			xfs_reflink_cancel_cow_range(ip, offset, size, true);
 		goto done;
 	}
@@ -212,7 +211,7 @@ xfs_end_ioend(
 	/*
 	 * Success: commit the COW or unwritten blocks if needed.
 	 */
-	if (ioend->io_fork == XFS_COW_FORK)
+	if (ioend->io_flags & IOMAP_F_SHARED)
 		error = xfs_reflink_end_cow(ip, offset, size);
 	else if (ioend->io_type == IOMAP_UNWRITTEN)
 		error = xfs_iomap_write_unwritten(ip, offset, size, false);
@@ -233,7 +232,8 @@ xfs_ioend_can_merge(
 {
 	if (ioend->io_bio->bi_status != next->io_bio->bi_status)
 		return false;
-	if ((ioend->io_fork == XFS_COW_FORK) ^ (next->io_fork == XFS_COW_FORK))
+	if ((ioend->io_flags & IOMAP_F_SHARED) ^
+	    (next->io_flags & IOMAP_F_SHARED))
 		return false;
 	if ((ioend->io_type == IOMAP_UNWRITTEN) ^
 	    (next->io_type == IOMAP_UNWRITTEN))
@@ -319,7 +319,7 @@ xfs_end_bio(
 	struct xfs_mount	*mp = ip->i_mount;
 	unsigned long		flags;
 
-	if (ioend->io_fork == XFS_COW_FORK ||
+	if ((ioend->io_flags & IOMAP_F_SHARED) ||
 	    ioend->io_type == IOMAP_UNWRITTEN ||
 	    xfs_ioend_is_append(ioend)) {
 		spin_lock_irqsave(&ip->i_ioend_lock, flags);
@@ -350,7 +350,7 @@ xfs_imap_valid(
 	 * covers the offset. Be careful to check this first because the caller
 	 * can revalidate a COW mapping without updating the data seqno.
 	 */
-	if (wpc->fork == XFS_COW_FORK)
+	if (wpc->iomap.flags & IOMAP_F_SHARED)
 		return true;
 
 	/*
@@ -380,6 +380,7 @@ static int
 xfs_convert_blocks(
 	struct xfs_writepage_ctx *wpc,
 	struct xfs_inode	*ip,
+	int			whichfork,
 	loff_t			offset)
 {
 	int			error;
@@ -391,8 +392,8 @@ xfs_convert_blocks(
 	 * delalloc extent if free space is sufficiently fragmented.
 	 */
 	do {
-		error = xfs_bmapi_convert_delalloc(ip, wpc->fork, offset,
-				&wpc->iomap, wpc->fork == XFS_COW_FORK ?
+		error = xfs_bmapi_convert_delalloc(ip, whichfork, offset,
+				&wpc->iomap, whichfork == XFS_COW_FORK ?
&wpc->cow_seq : &wpc->data_seq); if (error) return error; @@ -413,6 +414,7 @@ xfs_map_blocks( xfs_fileoff_t offset_fsb = XFS_B_TO_FSBT(mp, offset); xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, offset + count); xfs_fileoff_t cow_fsb = NULLFILEOFF; + int whichfork = XFS_DATA_FORK; struct xfs_bmbt_irec imap; struct xfs_iext_cursor icur; int retries = 0; @@ -461,7 +463,7 @@ xfs_map_blocks( wpc->cow_seq = READ_ONCE(ip->i_cowfp->if_seq); xfs_iunlock(ip, XFS_ILOCK_SHARED); - wpc->fork = XFS_COW_FORK; + whichfork = XFS_COW_FORK; goto allocate_blocks; } @@ -484,8 +486,6 @@ xfs_map_blocks( wpc->data_seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, XFS_ILOCK_SHARED); - wpc->fork = XFS_DATA_FORK; - /* landed in a hole or beyond EOF? */ if (imap.br_startoff > offset_fsb) { imap.br_blockcount = imap.br_startoff - offset_fsb; @@ -510,10 +510,10 @@ xfs_map_blocks( goto allocate_blocks; xfs_bmbt_to_iomap(ip, &wpc->iomap, &imap, 0); - trace_xfs_map_blocks_found(ip, offset, count, wpc->fork, &imap); + trace_xfs_map_blocks_found(ip, offset, count, whichfork, &imap); return 0; allocate_blocks: - error = xfs_convert_blocks(wpc, ip, offset); + error = xfs_convert_blocks(wpc, ip, whichfork, offset); if (error) { /* * If we failed to find the extent in the COW fork we might have @@ -522,7 +522,8 @@ xfs_map_blocks( * the former case, but prevent additional retries to avoid * looping forever for the latter case. */ - if (error == -EAGAIN && wpc->fork == XFS_COW_FORK && !retries++) + if (error == -EAGAIN && (wpc->iomap.flags & IOMAP_F_SHARED) && + !retries++) goto retry; ASSERT(error != -EAGAIN); return error; @@ -533,7 +534,7 @@ xfs_map_blocks( * original delalloc one. Trim the return extent to the next COW * boundary again to force a re-lookup. */ - if (wpc->fork != XFS_COW_FORK && cow_fsb != NULLFILEOFF) { + if (!(wpc->iomap.flags & IOMAP_F_SHARED) && cow_fsb != NULLFILEOFF) { loff_t cow_offset = XFS_FSB_TO_B(mp, cow_fsb); if (cow_offset < wpc->iomap.offset + wpc->iomap.length) @@ -542,7 +543,7 @@ xfs_map_blocks( ASSERT(wpc->iomap.offset <= offset); ASSERT(wpc->iomap.offset + wpc->iomap.length > offset); - trace_xfs_map_blocks_alloc(ip, offset, count, wpc->fork, &imap); + trace_xfs_map_blocks_alloc(ip, offset, count, whichfork, &imap); return 0; } @@ -567,7 +568,7 @@ xfs_submit_ioend( int status) { /* Convert CoW extents to regular */ - if (!status && ioend->io_fork == XFS_COW_FORK) { + if (!status && (ioend->io_flags & IOMAP_F_SHARED)) { /* * Yuk. 
This can do memory allocation, but is not a * transactional operation so everything is done in GFP_KERNEL @@ -621,8 +622,8 @@ xfs_alloc_ioend( ioend = container_of(bio, struct xfs_ioend, io_inline_bio); INIT_LIST_HEAD(&ioend->io_list); - ioend->io_fork = wpc->fork; ioend->io_type = wpc->iomap.type; + ioend->io_flags = wpc->iomap.flags; ioend->io_inode = inode; ioend->io_size = 0; ioend->io_offset = offset; @@ -676,7 +677,8 @@ xfs_add_to_ioend( sector = (wpc->iomap.addr + offset - wpc->iomap.offset) >> 9; if (!wpc->ioend || - wpc->fork != wpc->ioend->io_fork || + (wpc->iomap.flags & IOMAP_F_SHARED) != + (wpc->ioend->io_flags & IOMAP_F_SHARED) || wpc->iomap.type != wpc->ioend->io_type || sector != bio_end_sector(wpc->ioend->io_bio) || offset != wpc->ioend->io_offset + wpc->ioend->io_size) { diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h index 23c087f0bcbf..bf95837c59af 100644 --- a/fs/xfs/xfs_aops.h +++ b/fs/xfs/xfs_aops.h @@ -13,8 +13,8 @@ extern struct bio_set xfs_ioend_bioset; */ struct xfs_ioend { struct list_head io_list; /* next ioend in chain */ - int io_fork; /* inode fork written back */ u16 io_type; + u16 io_flags; struct inode *io_inode; /* file being written to */ size_t io_size; /* size of the extent */ xfs_off_t io_offset; /* offset in the file */ From patchwork Mon Jun 24 05:52:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11012299 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C90DA112C for ; Mon, 24 Jun 2019 05:53:39 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B46E928B01 for ; Mon, 24 Jun 2019 05:53:39 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A82AC28B0B; Mon, 24 Jun 2019 05:53:39 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2C33728B2C for ; Mon, 24 Jun 2019 05:53:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727419AbfFXFxg (ORCPT ); Mon, 24 Jun 2019 01:53:36 -0400 Received: from bombadil.infradead.org ([198.137.202.133]:50770 "EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727349AbfFXFxd (ORCPT ); Mon, 24 Jun 2019 01:53:33 -0400 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=bombadil.20170209; h=Content-Transfer-Encoding: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-Type:Content-ID:Content-Description:Resent-Date:Resent-From :Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=X6Bot2OwK7jg8FMnp7PXUKWPi5pUg3+UwPwdAJsLQkM=; b=Tz5gGsWY2P8RsGEe3eQRNzUfmZ 6paoPgZvqhAmnKaU3tbj/i8UiqTf/q7Dc6iyScJD3HnYgXhbapG9whSH3RyXYdKkdrkqZowre9eTs yb3sbFu9f6RVw4GxlQl9uCzwXR9YDe3lhoq97p1TlGt3hsaIob0R6IGPSGKCokfaekzRLRXYW3q4/ PCfDXHkzQEyhnc5dxfCDrDdLT5Ur+aoFO4djnKRU5UKtaxmJIGefbd2tdrC2M1/t23Blwze/PbWL6 
From: Christoph Hellwig
To: "Darrick J. Wong"
Cc: Damien Le Moal, Andreas Gruenbacher, linux-xfs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 11/12] iomap: move the xfs writeback code to iomap.c
Date: Mon, 24 Jun 2019 07:52:52 +0200
Message-Id: <20190624055253.31183-12-hch@lst.de>
In-Reply-To: <20190624055253.31183-1-hch@lst.de>
References: <20190624055253.31183-1-hch@lst.de>

Take the xfs writeback code and move it to iomap.c. A new structure
with three methods is added as the abstraction between the generic
writeback code and the file system. These methods are used to map
blocks, submit an ioend, and cancel a page that encountered an error
before it was added to an ioend.

Note that we temporarily lose the writepage tracing, but that will be
added back soon.

Signed-off-by: Christoph Hellwig
Reviewed-by: Darrick J. Wong
---
 fs/iomap.c            | 521 ++++++++++++++++++++++++++++++++-
 fs/xfs/xfs_aops.c     | 584 ++++--------------------------------
 fs/xfs/xfs_aops.h     |  16 --
 fs/xfs/xfs_super.c    |  11 +-
 include/linux/iomap.h |  41 +++
 5 files changed, 605 insertions(+), 568 deletions(-)

diff --git a/fs/iomap.c b/fs/iomap.c
index 23ef63fd1669..72a1b622e634 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (C) 2010 Red Hat, Inc.
- * Copyright (c) 2016-2018 Christoph Hellwig.
+ * Copyright (c) 2016-2019 Christoph Hellwig.
  */
 #include
 #include
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -25,6 +26,8 @@
 #include "internal.h"
 
+static struct bio_set iomap_ioend_bioset;
+
 /*
  * Execute a iomap write on a segment of the mapping that spans a
  * contiguous range of pages that have identical block mapping state.
  */
@@ -2192,3 +2195,519 @@ iomap_bmap(struct address_space *mapping, sector_t bno,
 	return bno;
 }
 EXPORT_SYMBOL_GPL(iomap_bmap);
+
+static void
+iomap_finish_page_writeback(struct inode *inode, struct bio_vec *bvec,
+		int error)
+{
+	struct iomap_page *iop = to_iomap_page(bvec->bv_page);
+
+	if (error) {
+		SetPageError(bvec->bv_page);
+		mapping_set_error(inode->i_mapping, -EIO);
+	}
+
+	WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
+	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) <= 0);
+
+	if (!iop || atomic_dec_and_test(&iop->write_count))
+		end_page_writeback(bvec->bv_page);
+}
+
+/*
+ * We're now finished for good with this ioend structure. Update the page
+ * state, release holds on bios, and finally free up memory. Do not use the
+ * ioend after this.
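+ *
+ * (Editor's illustration, not part of the patch: a filesystem with no
+ * deferred completion work could call this straight from its bio
+ * end_io handler, e.g.
+ *
+ *	static void foo_end_bio(struct bio *bio)
+ *	{
+ *		struct iomap_ioend *ioend = bio->bi_private;
+ *
+ *		iomap_finish_ioend(ioend,
+ *				blk_status_to_errno(bio->bi_status));
+ *	}
+ *
+ * foo_end_bio is a hypothetical name; this mirrors what the XFS
+ * conversion below does for ioends that need no transactional work.)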
+ */ +void +iomap_finish_ioend(struct iomap_ioend *ioend, int error) +{ + struct inode *inode = ioend->io_inode; + struct bio *bio = &ioend->io_inline_bio; + struct bio *last = ioend->io_bio, *next; + u64 start = bio->bi_iter.bi_sector; + bool quiet = bio_flagged(bio, BIO_QUIET); + + for (bio = &ioend->io_inline_bio; bio; bio = next) { + struct bio_vec *bvec; + struct bvec_iter_all iter_all; + + /* + * For the last bio, bi_private points to the ioend, so we + * need to explicitly end the iteration here. + */ + if (bio == last) + next = NULL; + else + next = bio->bi_private; + + /* walk each page on bio, ending page IO on them */ + bio_for_each_segment_all(bvec, bio, iter_all) + iomap_finish_page_writeback(inode, bvec, error); + bio_put(bio); + } + + if (unlikely(error && !quiet)) { + printk_ratelimited(KERN_ERR + "%s: writeback error on sector %llu", + inode->i_sb->s_id, start); + } +} +EXPORT_SYMBOL_GPL(iomap_finish_ioend); + +void +iomap_finish_ioends(struct iomap_ioend *ioend, int error) +{ + struct list_head tmp; + + list_replace_init(&ioend->io_list, &tmp); + iomap_finish_ioend(ioend, error); + while ((ioend = list_pop(&tmp, struct iomap_ioend, io_list))) + iomap_finish_ioend(ioend, error); +} +EXPORT_SYMBOL_GPL(iomap_finish_ioends); + +/* + * We can merge two adjacent ioends if they have the same set of work to do. + */ +static bool +iomap_ioend_can_merge(struct iomap_ioend *ioend, struct iomap_ioend *next) +{ + if (ioend->io_bio->bi_status != next->io_bio->bi_status) + return false; + if ((ioend->io_flags & IOMAP_F_SHARED) ^ + (next->io_flags & IOMAP_F_SHARED)) + return false; + if ((ioend->io_type == IOMAP_UNWRITTEN) ^ + (next->io_type == IOMAP_UNWRITTEN)) + return false; + if (ioend->io_offset + ioend->io_size != next->io_offset) + return false; + return true; +} + +void +iomap_ioend_try_merge(struct iomap_ioend *ioend, struct list_head *more_ioends) +{ + struct iomap_ioend *next; + + INIT_LIST_HEAD(&ioend->io_list); + + while ((next = list_first_entry_or_null(more_ioends, struct iomap_ioend, + io_list))) { + if (!iomap_ioend_can_merge(ioend, next)) + break; + list_move_tail(&next->io_list, &ioend->io_list); + ioend->io_size += next->io_size; + } +} +EXPORT_SYMBOL_GPL(iomap_ioend_try_merge); + +static int +iomap_ioend_compare(void *priv, struct list_head *a, struct list_head *b) +{ + struct iomap_ioend *ia, *ib; + + ia = container_of(a, struct iomap_ioend, io_list); + ib = container_of(b, struct iomap_ioend, io_list); + if (ia->io_offset < ib->io_offset) + return -1; + else if (ia->io_offset > ib->io_offset) + return 1; + return 0; +} + +void +iomap_sort_ioends(struct list_head *ioend_list) +{ + list_sort(NULL, ioend_list, iomap_ioend_compare); +} +EXPORT_SYMBOL_GPL(iomap_sort_ioends); + +/* + * Submit the bio for an ioend. We are passed an ioend with a bio attached to + * it, and we submit that bio. The ioend may be used for multiple bio + * submissions, so we only want to allocate an append transaction for the ioend + * once. In the case of multiple bio submission, each bio will take an IO + * reference to the ioend to ensure that the ioend completion is only done once + * all bios have been submitted and the ioend is really done. + * + * If @error is non-zero, it means that we have a situation where some part of + * the submission process has failed after we have marked paged for writeback + * and unlocked them. In this situation, we need to fail the bio and ioend + * rather than submit it to IO. This typically only happens on a filesystem + * shutdown. 
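+ *
+ * (Editor's sketch, modelled on the XFS conversion later in this
+ * patch: a minimal ->submit_ioend method only has to install a bio
+ * completion handler and pass the error through, e.g.
+ *
+ *	static int foo_submit_ioend(struct iomap_ioend *ioend, int status)
+ *	{
+ *		ioend->io_bio->bi_end_io = foo_end_bio;
+ *		return status;
+ *	}
+ *
+ * foo_submit_ioend and foo_end_bio are hypothetical names.)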
+ */ +static int +iomap_submit_ioend(struct iomap_writepage_ctx *wpc, struct iomap_ioend *ioend, + int error) +{ + /* + * If we are failing the IO now, just mark the ioend with an error and + * finish it. This will run IO completion immediately as there is only + * one reference to the ioend at this point in time. + */ + ioend->io_bio->bi_private = ioend; + error = wpc->ops->submit_ioend(ioend, error); + if (error) { + ioend->io_bio->bi_status = errno_to_blk_status(error); + bio_endio(ioend->io_bio); + return error; + } + + submit_bio(ioend->io_bio); + return 0; +} + +static struct iomap_ioend * +iomap_alloc_ioend(struct inode *inode, struct iomap_writepage_ctx *wpc, + loff_t offset, sector_t sector, struct writeback_control *wbc) +{ + struct iomap_ioend *ioend; + struct bio *bio; + + bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, &iomap_ioend_bioset); + bio_set_dev(bio, wpc->iomap.bdev); + bio->bi_iter.bi_sector = sector; + bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc); + bio->bi_write_hint = inode->i_write_hint; + + ioend = container_of(bio, struct iomap_ioend, io_inline_bio); + INIT_LIST_HEAD(&ioend->io_list); + ioend->io_type = wpc->iomap.type; + ioend->io_flags = wpc->iomap.flags; + ioend->io_inode = inode; + ioend->io_size = 0; + ioend->io_offset = offset; + ioend->io_bio = bio; + return ioend; +} + +/* + * Allocate a new bio, and chain the old bio to the new one. + * + * Note that we have to do perform the chaining in this unintuitive order + * so that the bi_private linkage is set up in the right direction for the + * traversal in iomap_finish_ioend(). + */ +static struct bio * +iomap_chain_bio(struct bio *prev) +{ + struct bio *new; + + new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES); + bio_copy_dev(new, prev); + new->bi_iter.bi_sector = bio_end_sector(prev); + new->bi_opf = prev->bi_opf; + new->bi_write_hint = prev->bi_write_hint; + + bio_chain(prev, new); + bio_get(prev); /* for iomap_finish_ioend */ + submit_bio(prev); + return new; +} + +/* + * Test to see if we have an existing ioend structure that we could append to + * first, otherwise finish off the current ioend and start another. + */ +static void +iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page, + struct iomap_page *iop, struct iomap_writepage_ctx *wpc, + struct writeback_control *wbc, struct list_head *iolist) +{ + unsigned len = i_blocksize(inode); + unsigned poff = offset & (PAGE_SIZE - 1); + sector_t sector = iomap_sector(&wpc->iomap, offset); + + if (!wpc->ioend || + (wpc->iomap.flags & IOMAP_F_SHARED) != + (wpc->ioend->io_flags & IOMAP_F_SHARED) || + wpc->iomap.type != wpc->ioend->io_type || + sector != bio_end_sector(wpc->ioend->io_bio) || + offset != wpc->ioend->io_offset + wpc->ioend->io_size) { + if (wpc->ioend) + list_add(&wpc->ioend->io_list, iolist); + wpc->ioend = iomap_alloc_ioend(inode, wpc, offset, sector, wbc); + } + + if (!__bio_try_merge_page(wpc->ioend->io_bio, page, len, poff, true)) { + if (iop) + atomic_inc(&iop->write_count); + if (bio_full(wpc->ioend->io_bio)) { + wpc->ioend->io_bio = + iomap_chain_bio(wpc->ioend->io_bio); + } + bio_add_page(wpc->ioend->io_bio, page, len, poff); + } + + wpc->ioend->io_size += len; +} + +/* + * We implement an immediate ioend submission policy here to avoid needing to + * chain multiple ioends and hence nest mempool allocations which can violate + * forward progress guarantees we need to provide. 
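+ * (Editor's note: each chained ioend would pin a mempool-backed bio,
+ * and a mempool only guarantees forward progress when every allocation
+ * is freed independently of further allocations from the same pool, so
+ * nesting them can deadlock under memory pressure.)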
The current ioend we are + * adding blocks to is cached on the writepage context, and if the new block + * does not append to the cached ioend it will create a new ioend and cache that + * instead. + * + * If a new ioend is created and cached, the old ioend is returned and queued + * locally for submission once the entire page is processed or an error has been + * detected. While ioends are submitted immediately after they are completed, + * batching optimisations are provided by higher level block plugging. + * + * At the end of a writeback pass, there will be a cached ioend remaining on the + * writepage context that the caller will need to submit. + */ +static int +iomap_writepage_map(struct iomap_writepage_ctx *wpc, + struct writeback_control *wbc, struct inode *inode, + struct page *page, u64 end_offset) +{ + struct iomap_page *iop = to_iomap_page(page); + struct iomap_ioend *ioend, *next; + unsigned len = i_blocksize(inode); + u64 file_offset; /* file offset of page */ + int error = 0, count = 0, i; + LIST_HEAD(submit_list); + + WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop); + WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0); + + /* + * Walk through the page to find areas to write back. If we run off the + * end of the current map or find the current map invalid, grab a new + * one. + */ + for (i = 0, file_offset = page_offset(page); + i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset; + i++, file_offset += len) { + if (iop && !test_bit(i, iop->uptodate)) + continue; + + error = wpc->ops->map_blocks(wpc, inode, file_offset); + if (error) + break; + if (wpc->iomap.type == IOMAP_HOLE) + continue; + iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc, + &submit_list); + count++; + } + + WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list)); + WARN_ON_ONCE(!PageLocked(page)); + WARN_ON_ONCE(PageWriteback(page)); + + /* + * On error, we have to fail the ioend here because we may have set + * pages under writeback, we have to make sure we run IO completion to + * mark the error state of the IO appropriately, so we can't cancel the + * ioend directly here. That means we have to mark this page as under + * writeback if we included any blocks from it in the ioend chain so + * that completion treats it correctly. + * + * If we didn't include the page in the ioend, the on error we can + * simply discard and unlock it as there are no other users of the page + * now. The caller will still need to trigger submission of outstanding + * ioends on the writepage context so they are treated correctly on + * error. + */ + if (unlikely(error)) { + if (!count) { + wpc->ops->discard_page(page); + ClearPageUptodate(page); + unlock_page(page); + goto done; + } + + /* + * If the page was not fully cleaned, we need to ensure that the + * higher layers come back to it correctly. That means we need + * to keep the page dirty, and for WB_SYNC_ALL writeback we need + * to ensure the PAGECACHE_TAG_TOWRITE index mark is not removed + * so another attempt to write this page in this writeback sweep + * will be made. + */ + set_page_writeback_keepwrite(page); + } else { + clear_page_dirty_for_io(page); + set_page_writeback(page); + } + + unlock_page(page); + + /* + * Preserve the original error if there was one, otherwise catch + * submission errors here and propagate into subsequent ioend + * submissions. 
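+ *
+ * (Editor's note: even on error, the ioends gathered on submit_list
+ * are still pushed through iomap_submit_ioend so their pages complete
+ * via bio_endio and receive proper writeback error handling instead of
+ * being dropped silently.)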
+ */ + list_for_each_entry_safe(ioend, next, &submit_list, io_list) { + int error2; + + list_del_init(&ioend->io_list); + error2 = iomap_submit_ioend(wpc, ioend, error); + if (error2 && !error) + error = error2; + } + + /* + * We can end up here with no error and nothing to write only if we race + * with a partial page truncate on a sub-page block sized filesystem. + */ + if (!count) + end_page_writeback(page); +done: + mapping_set_error(page->mapping, error); + return error; +} + +/* + * Write out a dirty page. + * + * For delalloc space on the page we need to allocate space and flush it. + * For unwritten space on the page we need to start the conversion to + * regular allocated space. + */ +static int +iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) +{ + struct iomap_writepage_ctx *wpc = data; + struct inode *inode = page->mapping->host; + pgoff_t end_index; + u64 end_offset; + loff_t offset; + + /* + * Refuse to write the page out if we are called from reclaim context. + * + * This avoids stack overflows when called from deeply used stacks in + * random callers for direct reclaim or memcg reclaim. We explicitly + * allow reclaim from kswapd as the stack usage there is relatively low. + * + * This should never happen except in the case of a VM regression so + * warn about it. + */ + if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) == + PF_MEMALLOC)) + goto redirty; + + /* + * Given that we do not allow direct reclaim to call us, we should + * never be called while in a filesystem transaction. + */ + if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS)) + goto redirty; + + /* + * Is this page beyond the end of the file? + * + * The page index is less than the end_index, adjust the end_offset + * to the highest offset that this page should represent. + * ----------------------------------------------------- + * | file mapping | | + * ----------------------------------------------------- + * | Page ... | Page N-2 | Page N-1 | Page N | | + * ^--------------------------------^----------|-------- + * | desired writeback range | see else | + * ---------------------------------^------------------| + */ + offset = i_size_read(inode); + end_index = offset >> PAGE_SHIFT; + if (page->index < end_index) + end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT; + else { + /* + * Check whether the page to write out is beyond or straddles + * i_size or not. + * ------------------------------------------------------- + * | file mapping | | + * ------------------------------------------------------- + * | Page ... | Page N-2 | Page N-1 | Page N | Beyond | + * ^--------------------------------^-----------|--------- + * | | Straddles | + * ---------------------------------^-----------|--------| + */ + unsigned offset_into_page = offset & (PAGE_SIZE - 1); + + /* + * Skip the page if it is fully outside i_size, e.g. due to a + * truncate operation that is in progress. We must redirty the + * page so that reclaim stops reclaiming it. Otherwise + * iomap_vm_releasepage() is called on it and gets confused. + * + * Note that the end_index is unsigned long, it would overflow + * if the given offset is greater than 16TB on 32-bit system + * and if we do check the page is fully outside i_size or not + * via "if (page->index >= end_index + 1)" as "end_index + 1" + * will be evaluated to 0. Hence this page will be redirtied + * and be written out repeatedly which would result in an + * infinite loop, the user program that perform this operation + * will hang. 
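+ * (Concretely, an editor's example: with 4kB pages and an i_size just
+ * below 16TB, "offset >> PAGE_SHIFT" is 0xffffffff in a 32-bit
+ * unsigned long, so "end_index + 1" wraps to 0 and a check written as
+ * "page->index >= end_index + 1" could never be true.)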
Instead, we can verify this situation by checking + * if the page to write is totally beyond the i_size or if it's + * offset is just equal to the EOF. + */ + if (page->index > end_index || + (page->index == end_index && offset_into_page == 0)) + goto redirty; + + /* + * The page straddles i_size. It must be zeroed out on each + * and every writepage invocation because it may be mmapped. + * "A file is mapped in multiples of the page size. For a file + * that is not a multiple of the page size, the remaining + * memory is zeroed when mapped, and writes to that region are + * not written out to the file." + */ + zero_user_segment(page, offset_into_page, PAGE_SIZE); + + /* Adjust the end_offset to the end of file */ + end_offset = offset; + } + + return iomap_writepage_map(wpc, wbc, inode, page, end_offset); + +redirty: + redirty_page_for_writepage(wbc, page); + unlock_page(page); + return 0; +} + +int +iomap_writepage(struct page *page, struct writeback_control *wbc, + struct iomap_writepage_ctx *wpc, + const struct iomap_writeback_ops *ops) +{ + int ret; + + wpc->ops = ops; + ret = iomap_do_writepage(page, wbc, wpc); + if (!wpc->ioend) + return ret; + return iomap_submit_ioend(wpc, wpc->ioend, ret); +} +EXPORT_SYMBOL_GPL(iomap_writepage); + +int +iomap_writepages(struct address_space *mapping, struct writeback_control *wbc, + struct iomap_writepage_ctx *wpc, + const struct iomap_writeback_ops *ops) +{ + int ret; + + wpc->ops = ops; + ret = write_cache_pages(mapping, wbc, iomap_do_writepage, wpc); + if (!wpc->ioend) + return ret; + return iomap_submit_ioend(wpc, wpc->ioend, ret); +} +EXPORT_SYMBOL_GPL(iomap_writepages); + +static int __init iomap_init(void) +{ + return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE), + offsetof(struct iomap_ioend, io_inline_bio), + BIOSET_NEED_BVECS); +} +fs_initcall(iomap_init); diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index d9a7a9e6b912..26b838aea2db 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -23,16 +23,18 @@ #include "xfs_reflink.h" #include -/* - * structure owned by writepages passed to individual writepage calls - */ struct xfs_writepage_ctx { - struct iomap iomap; + struct iomap_writepage_ctx ctx; unsigned int data_seq; unsigned int cow_seq; - struct xfs_ioend *ioend; }; +static inline struct xfs_writepage_ctx * +XFS_WPC(struct iomap_writepage_ctx *ctx) +{ + return container_of(ctx, struct xfs_writepage_ctx, ctx); +} + struct block_device * xfs_find_bdev_for_inode( struct inode *inode) @@ -59,84 +61,10 @@ xfs_find_daxdev_for_inode( return mp->m_ddev_targp->bt_daxdev; } -static void -xfs_finish_page_writeback( - struct inode *inode, - struct bio_vec *bvec, - int error) -{ - struct iomap_page *iop = to_iomap_page(bvec->bv_page); - - if (error) { - SetPageError(bvec->bv_page); - mapping_set_error(inode->i_mapping, -EIO); - } - - ASSERT(iop || i_blocksize(inode) == PAGE_SIZE); - ASSERT(!iop || atomic_read(&iop->write_count) > 0); - - if (!iop || atomic_dec_and_test(&iop->write_count)) - end_page_writeback(bvec->bv_page); -} - -/* - * We're now finished for good with this ioend structure. Update the page - * state, release holds on bios, and finally free up memory. Do not use the - * ioend after this. 
- */ -STATIC void -xfs_destroy_ioend( - struct xfs_ioend *ioend, - int error) -{ - struct inode *inode = ioend->io_inode; - struct bio *bio = &ioend->io_inline_bio; - struct bio *last = ioend->io_bio, *next; - u64 start = bio->bi_iter.bi_sector; - bool quiet = bio_flagged(bio, BIO_QUIET); - - for (bio = &ioend->io_inline_bio; bio; bio = next) { - struct bio_vec *bvec; - struct bvec_iter_all iter_all; - - /* - * For the last bio, bi_private points to the ioend, so we - * need to explicitly end the iteration here. - */ - if (bio == last) - next = NULL; - else - next = bio->bi_private; - - /* walk each page on bio, ending page IO on them */ - bio_for_each_segment_all(bvec, bio, iter_all) - xfs_finish_page_writeback(inode, bvec, error); - bio_put(bio); - } - - if (unlikely(error && !quiet)) { - xfs_err_ratelimited(XFS_I(inode)->i_mount, - "writeback error on sector %llu", start); - } -} - -static void -xfs_destroy_ioends( - struct xfs_ioend *ioend, - int error) -{ - struct list_head tmp; - - list_replace_init(&ioend->io_list, &tmp); - xfs_destroy_ioend(ioend, error); - while ((ioend = list_pop(&tmp, struct xfs_ioend, io_list))) - xfs_destroy_ioend(ioend, error); -} - /* * Fast and loose check if this write could update the on-disk inode size. */ -static inline bool xfs_ioend_is_append(struct xfs_ioend *ioend) +static inline bool xfs_ioend_is_append(struct iomap_ioend *ioend) { return ioend->io_offset + ioend->io_size > XFS_I(ioend->io_inode)->i_d.di_size; @@ -182,7 +110,7 @@ xfs_setfilesize( */ STATIC void xfs_end_ioend( - struct xfs_ioend *ioend) + struct iomap_ioend *ioend) { unsigned int nofs_flag = memalloc_nofs_save(); struct xfs_inode *ip = XFS_I(ioend->io_inode); @@ -218,76 +146,10 @@ xfs_end_ioend( if (!error && xfs_ioend_is_append(ioend)) error = xfs_setfilesize(ip, offset, size); done: - xfs_destroy_ioends(ioend, error); + iomap_finish_ioends(ioend, error); memalloc_nofs_restore(nofs_flag); } -/* - * We can merge two adjacent ioends if they have the same set of work to do. - */ -static bool -xfs_ioend_can_merge( - struct xfs_ioend *ioend, - struct xfs_ioend *next) -{ - if (ioend->io_bio->bi_status != next->io_bio->bi_status) - return false; - if ((ioend->io_flags & IOMAP_F_SHARED) ^ - (next->io_flags & IOMAP_F_SHARED)) - return false; - if ((ioend->io_type == IOMAP_UNWRITTEN) ^ - (next->io_type == IOMAP_UNWRITTEN)) - return false; - if (ioend->io_offset + ioend->io_size != next->io_offset) - return false; - return true; -} - -/* Try to merge adjacent completions. */ -STATIC void -xfs_ioend_try_merge( - struct xfs_ioend *ioend, - struct list_head *more_ioends) -{ - struct xfs_ioend *next; - - INIT_LIST_HEAD(&ioend->io_list); - - while ((next = list_first_entry_or_null(more_ioends, struct xfs_ioend, - io_list))) { - if (!xfs_ioend_can_merge(ioend, next)) - break; - list_move_tail(&next->io_list, &ioend->io_list); - ioend->io_size += next->io_size; - } -} - -/* list_sort compare function for ioends */ -static int -xfs_ioend_compare( - void *priv, - struct list_head *a, - struct list_head *b) -{ - struct xfs_ioend *ia; - struct xfs_ioend *ib; - - ia = container_of(a, struct xfs_ioend, io_list); - ib = container_of(b, struct xfs_ioend, io_list); - if (ia->io_offset < ib->io_offset) - return -1; - else if (ia->io_offset > ib->io_offset) - return 1; - return 0; -} - -static void -xfs_sort_ioends( - struct list_head *ioend_list) -{ - list_sort(NULL, ioend_list, xfs_ioend_compare); -} - /* Finish all pending io completions. 
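 *
 * (Editor's note, not part of the patch: after this conversion the
 * completion loop below drives the generic helpers, a pattern any
 * iomap-based filesystem can reuse:
 *
 *	iomap_sort_ioends(&tmp);
 *	while ((ioend = list_pop(&tmp, struct iomap_ioend, io_list))) {
 *		iomap_ioend_try_merge(ioend, &tmp);
 *		fs_end_ioend(ioend);
 *	}
 *
 * where fs_end_ioend stands in for the filesystem's own completion
 * handler and is a hypothetical name.)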
*/ void xfs_end_io( @@ -295,7 +157,7 @@ xfs_end_io( { struct xfs_inode *ip = container_of(work, struct xfs_inode, i_ioend_work); - struct xfs_ioend *ioend; + struct iomap_ioend *ioend; struct list_head tmp; unsigned long flags; @@ -303,9 +165,9 @@ xfs_end_io( list_replace_init(&ip->i_ioend_list, &tmp); spin_unlock_irqrestore(&ip->i_ioend_lock, flags); - xfs_sort_ioends(&tmp); - while ((ioend = list_pop(&tmp, struct xfs_ioend, io_list))) { - xfs_ioend_try_merge(ioend, &tmp); + iomap_sort_ioends(&tmp); + while ((ioend = list_pop(&tmp, struct iomap_ioend, io_list))) { + iomap_ioend_try_merge(ioend, &tmp); xfs_end_ioend(ioend); } } @@ -314,7 +176,7 @@ STATIC void xfs_end_bio( struct bio *bio) { - struct xfs_ioend *ioend = bio->bi_private; + struct iomap_ioend *ioend = bio->bi_private; struct xfs_inode *ip = XFS_I(ioend->io_inode); struct xfs_mount *mp = ip->i_mount; unsigned long flags; @@ -329,7 +191,7 @@ xfs_end_bio( list_add_tail(&ioend->io_list, &ip->i_ioend_list); spin_unlock_irqrestore(&ip->i_ioend_lock, flags); } else - xfs_destroy_ioend(ioend, blk_status_to_errno(bio->bi_status)); + iomap_finish_ioend(ioend, blk_status_to_errno(bio->bi_status)); } /* @@ -338,7 +200,7 @@ xfs_end_bio( */ static bool xfs_imap_valid( - struct xfs_writepage_ctx *wpc, + struct iomap_writepage_ctx *wpc, struct xfs_inode *ip, loff_t offset) { @@ -360,10 +222,10 @@ xfs_imap_valid( * checked (and found nothing at this offset) could have added * overlapping blocks. */ - if (wpc->data_seq != READ_ONCE(ip->i_df.if_seq)) + if (XFS_WPC(wpc)->data_seq != READ_ONCE(ip->i_df.if_seq)) return false; if (xfs_inode_has_cow_data(ip) && - wpc->cow_seq != READ_ONCE(ip->i_cowfp->if_seq)) + XFS_WPC(wpc)->cow_seq != READ_ONCE(ip->i_cowfp->if_seq)) return false; return true; } @@ -378,12 +240,18 @@ xfs_imap_valid( */ static int xfs_convert_blocks( - struct xfs_writepage_ctx *wpc, + struct iomap_writepage_ctx *wpc, struct xfs_inode *ip, int whichfork, loff_t offset) { int error; + unsigned *seq; + + if (whichfork == XFS_COW_FORK) + seq = &XFS_WPC(wpc)->cow_seq; + else + seq = &XFS_WPC(wpc)->data_seq; /* * Attempt to allocate whatever delalloc extent currently backs offset @@ -393,8 +261,7 @@ xfs_convert_blocks( */ do { error = xfs_bmapi_convert_delalloc(ip, whichfork, offset, - &wpc->iomap, whichfork == XFS_COW_FORK ? - &wpc->cow_seq : &wpc->data_seq); + &wpc->iomap, seq); if (error) return error; } while (wpc->iomap.offset + wpc->iomap.length <= offset); @@ -402,9 +269,9 @@ xfs_convert_blocks( return 0; } -STATIC int +static int xfs_map_blocks( - struct xfs_writepage_ctx *wpc, + struct iomap_writepage_ctx *wpc, struct inode *inode, loff_t offset) { @@ -460,7 +327,7 @@ xfs_map_blocks( xfs_iext_lookup_extent(ip, ip->i_cowfp, offset_fsb, &icur, &imap)) cow_fsb = imap.br_startoff; if (cow_fsb != NULLFILEOFF && cow_fsb <= offset_fsb) { - wpc->cow_seq = READ_ONCE(ip->i_cowfp->if_seq); + XFS_WPC(wpc)->cow_seq = READ_ONCE(ip->i_cowfp->if_seq); xfs_iunlock(ip, XFS_ILOCK_SHARED); whichfork = XFS_COW_FORK; @@ -483,7 +350,7 @@ xfs_map_blocks( */ if (!xfs_iext_lookup_extent(ip, &ip->i_df, offset_fsb, &icur, &imap)) imap.br_startoff = end_fsb; /* fake a hole past EOF */ - wpc->data_seq = READ_ONCE(ip->i_df.if_seq); + XFS_WPC(wpc)->data_seq = READ_ONCE(ip->i_df.if_seq); xfs_iunlock(ip, XFS_ILOCK_SHARED); /* landed in a hole or beyond EOF? */ @@ -547,24 +414,9 @@ xfs_map_blocks( return 0; } -/* - * Submit the bio for an ioend. We are passed an ioend with a bio attached to - * it, and we submit that bio. 
The ioend may be used for multiple bio - * submissions, so we only want to allocate an append transaction for the ioend - * once. In the case of multiple bio submission, each bio will take an IO - * reference to the ioend to ensure that the ioend completion is only done once - * all bios have been submitted and the ioend is really done. - * - * If @status is non-zero, it means that we have a situation where some part of - * the submission process has failed after we have marked paged for writeback - * and unlocked them. In this situation, we need to fail the bio and ioend - * rather than submit it to IO. This typically only happens on a filesystem - * shutdown. - */ -STATIC int +static int xfs_submit_ioend( - struct writeback_control *wbc, - struct xfs_ioend *ioend, + struct iomap_ioend *ioend, int status) { /* Convert CoW extents to regular */ @@ -584,118 +436,8 @@ xfs_submit_ioend( memalloc_nofs_restore(nofs_flag); } - ioend->io_bio->bi_private = ioend; ioend->io_bio->bi_end_io = xfs_end_bio; - - /* - * If we are failing the IO now, just mark the ioend with an - * error and finish it. This will run IO completion immediately - * as there is only one reference to the ioend at this point in - * time. - */ - if (status) { - ioend->io_bio->bi_status = errno_to_blk_status(status); - bio_endio(ioend->io_bio); - return status; - } - - submit_bio(ioend->io_bio); - return 0; -} - -static struct xfs_ioend * -xfs_alloc_ioend( - struct inode *inode, - struct xfs_writepage_ctx *wpc, - xfs_off_t offset, - sector_t sector, - struct writeback_control *wbc) -{ - struct xfs_ioend *ioend; - struct bio *bio; - - bio = bio_alloc_bioset(GFP_NOFS, BIO_MAX_PAGES, &xfs_ioend_bioset); - bio_set_dev(bio, wpc->iomap.bdev); - bio->bi_iter.bi_sector = sector; - bio->bi_opf = REQ_OP_WRITE | wbc_to_write_flags(wbc); - bio->bi_write_hint = inode->i_write_hint; - - ioend = container_of(bio, struct xfs_ioend, io_inline_bio); - INIT_LIST_HEAD(&ioend->io_list); - ioend->io_type = wpc->iomap.type; - ioend->io_flags = wpc->iomap.flags; - ioend->io_inode = inode; - ioend->io_size = 0; - ioend->io_offset = offset; - ioend->io_bio = bio; - return ioend; -} - -/* - * Allocate a new bio, and chain the old bio to the new one. - * - * Note that we have to do perform the chaining in this unintuitive order - * so that the bi_private linkage is set up in the right direction for the - * traversal in xfs_destroy_ioend(). - */ -static struct bio * -xfs_chain_bio( - struct bio *prev) -{ - struct bio *new; - - new = bio_alloc(GFP_NOFS, BIO_MAX_PAGES); - bio_copy_dev(new, prev); - new->bi_iter.bi_sector = bio_end_sector(prev); - new->bi_opf = prev->bi_opf; - new->bi_write_hint = prev->bi_write_hint; - - bio_chain(prev, new); - bio_get(prev); /* for xfs_destroy_ioend */ - submit_bio(prev); - return new; -} - -/* - * Test to see if we have an existing ioend structure that we could append to - * first, otherwise finish off the current ioend and start another. 
- */ -STATIC void -xfs_add_to_ioend( - struct inode *inode, - xfs_off_t offset, - struct page *page, - struct iomap_page *iop, - struct xfs_writepage_ctx *wpc, - struct writeback_control *wbc, - struct list_head *iolist) -{ - unsigned len = i_blocksize(inode); - unsigned poff = offset & (PAGE_SIZE - 1); - sector_t sector; - - sector = (wpc->iomap.addr + offset - wpc->iomap.offset) >> 9; - - if (!wpc->ioend || - (wpc->iomap.flags & IOMAP_F_SHARED) != - (wpc->ioend->io_flags & IOMAP_F_SHARED) || - wpc->iomap.type != wpc->ioend->io_type || - sector != bio_end_sector(wpc->ioend->io_bio) || - offset != wpc->ioend->io_offset + wpc->ioend->io_size) { - if (wpc->ioend) - list_add(&wpc->ioend->io_list, iolist); - wpc->ioend = xfs_alloc_ioend(inode, wpc, offset, sector, wbc); - } - - if (!__bio_try_merge_page(wpc->ioend->io_bio, page, len, poff, true)) { - if (iop) - atomic_inc(&iop->write_count); - if (bio_full(wpc->ioend->io_bio)) - wpc->ioend->io_bio = xfs_chain_bio(wpc->ioend->io_bio); - bio_add_page(wpc->ioend->io_bio, page, len, poff); - } - - wpc->ioend->io_size += len; + return status; } STATIC void @@ -719,8 +461,8 @@ xfs_vm_invalidatepage( * transaction as there is no space left for block reservation (typically why we * see a ENOSPC in writeback). */ -STATIC void -xfs_aops_discard_page( +static void +xfs_discard_page( struct page *page) { struct inode *inode = page->mapping->host; @@ -745,243 +487,11 @@ xfs_aops_discard_page( xfs_vm_invalidatepage(page, 0, PAGE_SIZE); } -/* - * We implement an immediate ioend submission policy here to avoid needing to - * chain multiple ioends and hence nest mempool allocations which can violate - * forward progress guarantees we need to provide. The current ioend we are - * adding blocks to is cached on the writepage context, and if the new block - * does not append to the cached ioend it will create a new ioend and cache that - * instead. - * - * If a new ioend is created and cached, the old ioend is returned and queued - * locally for submission once the entire page is processed or an error has been - * detected. While ioends are submitted immediately after they are completed, - * batching optimisations are provided by higher level block plugging. - * - * At the end of a writeback pass, there will be a cached ioend remaining on the - * writepage context that the caller will need to submit. - */ -static int -xfs_writepage_map( - struct xfs_writepage_ctx *wpc, - struct writeback_control *wbc, - struct inode *inode, - struct page *page, - uint64_t end_offset) -{ - LIST_HEAD(submit_list); - struct iomap_page *iop = to_iomap_page(page); - unsigned len = i_blocksize(inode); - struct xfs_ioend *ioend, *next; - uint64_t file_offset; /* file offset of page */ - int error = 0, count = 0, i; - - ASSERT(iop || i_blocksize(inode) == PAGE_SIZE); - ASSERT(!iop || atomic_read(&iop->write_count) == 0); - - /* - * Walk through the page to find areas to write back. If we run off the - * end of the current map or find the current map invalid, grab a new - * one. 
- */ - for (i = 0, file_offset = page_offset(page); - i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset; - i++, file_offset += len) { - if (iop && !test_bit(i, iop->uptodate)) - continue; - - error = xfs_map_blocks(wpc, inode, file_offset); - if (error) - break; - if (wpc->iomap.type == IOMAP_HOLE) - continue; - xfs_add_to_ioend(inode, file_offset, page, iop, wpc, wbc, - &submit_list); - count++; - } - - ASSERT(wpc->ioend || list_empty(&submit_list)); - ASSERT(PageLocked(page)); - ASSERT(!PageWriteback(page)); - - /* - * On error, we have to fail the ioend here because we may have set - * pages under writeback, we have to make sure we run IO completion to - * mark the error state of the IO appropriately, so we can't cancel the - * ioend directly here. That means we have to mark this page as under - * writeback if we included any blocks from it in the ioend chain so - * that completion treats it correctly. - * - * If we didn't include the page in the ioend, the on error we can - * simply discard and unlock it as there are no other users of the page - * now. The caller will still need to trigger submission of outstanding - * ioends on the writepage context so they are treated correctly on - * error. - */ - if (unlikely(error)) { - if (!count) { - xfs_aops_discard_page(page); - ClearPageUptodate(page); - unlock_page(page); - goto done; - } - - /* - * If the page was not fully cleaned, we need to ensure that the - * higher layers come back to it correctly. That means we need - * to keep the page dirty, and for WB_SYNC_ALL writeback we need - * to ensure the PAGECACHE_TAG_TOWRITE index mark is not removed - * so another attempt to write this page in this writeback sweep - * will be made. - */ - set_page_writeback_keepwrite(page); - } else { - clear_page_dirty_for_io(page); - set_page_writeback(page); - } - - unlock_page(page); - - /* - * Preserve the original error if there was one, otherwise catch - * submission errors here and propagate into subsequent ioend - * submissions. - */ - list_for_each_entry_safe(ioend, next, &submit_list, io_list) { - int error2; - - list_del_init(&ioend->io_list); - error2 = xfs_submit_ioend(wbc, ioend, error); - if (error2 && !error) - error = error2; - } - - /* - * We can end up here with no error and nothing to write only if we race - * with a partial page truncate on a sub-page block sized filesystem. - */ - if (!count) - end_page_writeback(page); -done: - mapping_set_error(page->mapping, error); - return error; -} - -/* - * Write out a dirty page. - * - * For delalloc space on the page we need to allocate space and flush it. - * For unwritten space on the page we need to start the conversion to - * regular allocated space. - */ -STATIC int -xfs_do_writepage( - struct page *page, - struct writeback_control *wbc, - void *data) -{ - struct xfs_writepage_ctx *wpc = data; - struct inode *inode = page->mapping->host; - loff_t offset; - uint64_t end_offset; - pgoff_t end_index; - - trace_xfs_writepage(inode, page, 0, 0); - - /* - * Refuse to write the page out if we are called from reclaim context. - * - * This avoids stack overflows when called from deeply used stacks in - * random callers for direct reclaim or memcg reclaim. We explicitly - * allow reclaim from kswapd as the stack usage there is relatively low. - * - * This should never happen except in the case of a VM regression so - * warn about it. 
- */ - if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) == - PF_MEMALLOC)) - goto redirty; - - /* - * Given that we do not allow direct reclaim to call us, we should - * never be called while in a filesystem transaction. - */ - if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS)) - goto redirty; - - /* - * Is this page beyond the end of the file? - * - * The page index is less than the end_index, adjust the end_offset - * to the highest offset that this page should represent. - * ----------------------------------------------------- - * | file mapping | | - * ----------------------------------------------------- - * | Page ... | Page N-2 | Page N-1 | Page N | | - * ^--------------------------------^----------|-------- - * | desired writeback range | see else | - * ---------------------------------^------------------| - */ - offset = i_size_read(inode); - end_index = offset >> PAGE_SHIFT; - if (page->index < end_index) - end_offset = (xfs_off_t)(page->index + 1) << PAGE_SHIFT; - else { - /* - * Check whether the page to write out is beyond or straddles - * i_size or not. - * ------------------------------------------------------- - * | file mapping | | - * ------------------------------------------------------- - * | Page ... | Page N-2 | Page N-1 | Page N | Beyond | - * ^--------------------------------^-----------|--------- - * | | Straddles | - * ---------------------------------^-----------|--------| - */ - unsigned offset_into_page = offset & (PAGE_SIZE - 1); - - /* - * Skip the page if it is fully outside i_size, e.g. due to a - * truncate operation that is in progress. We must redirty the - * page so that reclaim stops reclaiming it. Otherwise - * xfs_vm_releasepage() is called on it and gets confused. - * - * Note that the end_index is unsigned long, it would overflow - * if the given offset is greater than 16TB on 32-bit system - * and if we do check the page is fully outside i_size or not - * via "if (page->index >= end_index + 1)" as "end_index + 1" - * will be evaluated to 0. Hence this page will be redirtied - * and be written out repeatedly which would result in an - * infinite loop, the user program that perform this operation - * will hang. Instead, we can verify this situation by checking - * if the page to write is totally beyond the i_size or if it's - * offset is just equal to the EOF. - */ - if (page->index > end_index || - (page->index == end_index && offset_into_page == 0)) - goto redirty; - - /* - * The page straddles i_size. It must be zeroed out on each - * and every writepage invocation because it may be mmapped. - * "A file is mapped in multiples of the page size. For a file - * that is not a multiple of the page size, the remaining - * memory is zeroed when mapped, and writes to that region are - * not written out to the file." 
- */ - zero_user_segment(page, offset_into_page, PAGE_SIZE); - - /* Adjust the end_offset to the end of file */ - end_offset = offset; - } - - return xfs_writepage_map(wpc, wbc, inode, page, end_offset); - -redirty: - redirty_page_for_writepage(wbc, page); - unlock_page(page); - return 0; -} +static const struct iomap_writeback_ops xfs_writeback_ops = { + .map_blocks = xfs_map_blocks, + .submit_ioend = xfs_submit_ioend, + .discard_page = xfs_discard_page, +}; STATIC int xfs_vm_writepage( @@ -989,12 +499,8 @@ xfs_vm_writepage( struct writeback_control *wbc) { struct xfs_writepage_ctx wpc = { }; - int ret; - ret = xfs_do_writepage(page, wbc, &wpc); - if (wpc.ioend) - ret = xfs_submit_ioend(wbc, wpc.ioend, ret); - return ret; + return iomap_writepage(page, wbc, &wpc.ctx, &xfs_writeback_ops); } STATIC int @@ -1003,13 +509,9 @@ xfs_vm_writepages( struct writeback_control *wbc) { struct xfs_writepage_ctx wpc = { }; - int ret; xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED); - ret = write_cache_pages(mapping, wbc, xfs_do_writepage, &wpc); - if (wpc.ioend) - ret = xfs_submit_ioend(wbc, wpc.ioend, ret); - return ret; + return iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops); } STATIC int diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h index bf95837c59af..26a7772d4b81 100644 --- a/fs/xfs/xfs_aops.h +++ b/fs/xfs/xfs_aops.h @@ -6,22 +6,6 @@ #ifndef __XFS_AOPS_H__ #define __XFS_AOPS_H__ -extern struct bio_set xfs_ioend_bioset; - -/* - * Structure for buffered I/O completions. - */ -struct xfs_ioend { - struct list_head io_list; /* next ioend in chain */ - u16 io_type; - u16 io_flags; - struct inode *io_inode; /* file being written to */ - size_t io_size; /* size of the extent */ - xfs_off_t io_offset; /* offset in the file */ - struct bio *io_bio; /* bio being built */ - struct bio io_inline_bio; /* MUST BE LAST! 
*/ -}; - extern const struct address_space_operations xfs_address_space_operations; extern const struct address_space_operations xfs_dax_aops; diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 594c119824cc..52b89e175bc5 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -53,7 +53,6 @@ #include static const struct super_operations xfs_super_operations; -struct bio_set xfs_ioend_bioset; static struct kset *xfs_kset; /* top-level xfs sysfs dir */ #ifdef DEBUG @@ -1870,15 +1869,10 @@ MODULE_ALIAS_FS("xfs"); STATIC int __init xfs_init_zones(void) { - if (bioset_init(&xfs_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE), - offsetof(struct xfs_ioend, io_inline_bio), - BIOSET_NEED_BVECS)) - goto out; - xfs_log_ticket_zone = kmem_zone_init(sizeof(xlog_ticket_t), "xfs_log_ticket"); if (!xfs_log_ticket_zone) - goto out_free_ioend_bioset; + goto out; xfs_bmap_free_item_zone = kmem_zone_init( sizeof(struct xfs_extent_free_item), @@ -2013,8 +2007,6 @@ xfs_init_zones(void) kmem_zone_destroy(xfs_bmap_free_item_zone); out_destroy_log_ticket_zone: kmem_zone_destroy(xfs_log_ticket_zone); - out_free_ioend_bioset: - bioset_exit(&xfs_ioend_bioset); out: return -ENOMEM; } @@ -2045,7 +2037,6 @@ xfs_destroy_zones(void) kmem_zone_destroy(xfs_btree_cur_zone); kmem_zone_destroy(xfs_bmap_free_item_zone); kmem_zone_destroy(xfs_log_ticket_zone); - bioset_exit(&xfs_ioend_bioset); } STATIC int __init diff --git a/include/linux/iomap.h b/include/linux/iomap.h index 2103b94cb1bf..e87f44810c53 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -4,6 +4,7 @@ #include #include +#include #include #include #include @@ -11,6 +12,7 @@ struct address_space; struct fiemap_extent_info; struct inode; +struct iomap_writepage_ctx; struct iov_iter; struct kiocb; struct page; @@ -165,6 +167,45 @@ loff_t iomap_seek_data(struct inode *inode, loff_t offset, sector_t iomap_bmap(struct address_space *mapping, sector_t bno, const struct iomap_ops *ops); +/* + * Structure for writeback I/O completions. + */ +struct iomap_ioend { + struct list_head io_list; /* next ioend in chain */ + u16 io_type; + u16 io_flags; + struct inode *io_inode; /* file being written to */ + size_t io_size; /* size of the extent */ + loff_t io_offset; /* offset in the file */ + struct bio *io_bio; /* bio being built */ + struct bio io_inline_bio; /* MUST BE LAST! 
*/ +}; + +struct iomap_writeback_ops { + int (*map_blocks)(struct iomap_writepage_ctx *wpc, struct inode *inode, + loff_t offset); + int (*submit_ioend)(struct iomap_ioend *ioend, int status); + void (*discard_page)(struct page *page); +}; + +struct iomap_writepage_ctx { + struct iomap iomap; + struct iomap_ioend *ioend; + const struct iomap_writeback_ops *ops; +}; + +void iomap_finish_ioend(struct iomap_ioend *ioend, int error); +void iomap_finish_ioends(struct iomap_ioend *ioend, int error); +void iomap_ioend_try_merge(struct iomap_ioend *ioend, + struct list_head *more_ioends); +void iomap_sort_ioends(struct list_head *ioend_list); +int iomap_writepage(struct page *page, struct writeback_control *wbc, + struct iomap_writepage_ctx *wpc, + const struct iomap_writeback_ops *ops); +int iomap_writepages(struct address_space *mapping, + struct writeback_control *wbc, struct iomap_writepage_ctx *wpc, + const struct iomap_writeback_ops *ops); + /* * Flags for direct I/O ->end_io: */
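
[With the interface above in hand, the ingredients a filesystem needs are small: a private context embedding struct iomap_writepage_ctx (as the xfs_writepage_ctx/.ctx pairing in this patch does), a ->map_blocks callback, and a call to iomap_writepages(). A hypothetical sketch; all "example_" names are invented, and a real ->map_blocks would consult the filesystem's extent map:

#include <linux/iomap.h>
#include <linux/writeback.h>

/* Private writeback state wrapped around the generic context. */
struct example_writepage_ctx {
	struct iomap_writepage_ctx	ctx;
	int				whatever_state;	/* hypothetical fs-private field */
};

static int example_map_blocks(struct iomap_writepage_ctx *wpc,
		struct inode *inode, loff_t offset)
{
	struct example_writepage_ctx *ewpc =
		container_of(wpc, struct example_writepage_ctx, ctx);

	/*
	 * A real implementation looks up the extent covering @offset,
	 * stores it in wpc->iomap, and caches fs-private state in ewpc.
	 */
	ewpc->whatever_state = 1;
	return 0;
}

static const struct iomap_writeback_ops example_writeback_ops = {
	.map_blocks	= example_map_blocks,
	/* ->submit_ioend and ->discard_page look optional; XFS uses both. */
};

static int example_writepages(struct address_space *mapping,
		struct writeback_control *wbc)
{
	struct example_writepage_ctx wpc = { };

	return iomap_writepages(mapping, wbc, &wpc.ctx, &example_writeback_ops);
}

This mirrors the xfs_vm_writepages() conversion above one-for-one.]
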
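
[The patch also moves the ioend bioset out of XFS: the bioset_init() call deleted from xfs_init_zones() above reappears in fs/iomap.c, where iomap_ioend_bioset shows up as a context line in the next patch. A sketch of that setup, reusing the exact arguments of the removed XFS call; the iomap_init() wrapper name and the fs_initcall placement are assumptions:

#include <linux/bio.h>
#include <linux/init.h>
#include <linux/iomap.h>

static struct bio_set iomap_ioend_bioset;

static int __init iomap_init(void)
{
	/*
	 * Embedding struct bio at io_inline_bio (which is why it MUST BE
	 * LAST) lets a small ioend avoid a separate bio allocation;
	 * BIOSET_NEED_BVECS reserves bvec pools for larger I/O.
	 */
	return bioset_init(&iomap_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
			   offsetof(struct iomap_ioend, io_inline_bio),
			   BIOSET_NEED_BVECS);
}
fs_initcall(iomap_init);
]
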
"Darrick J . Wong" Cc: Damien Le Moal , Andreas Gruenbacher , linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH 12/12] iomap: add tracing for the address space operations Date: Mon, 24 Jun 2019 07:52:53 +0200 Message-Id: <20190624055253.31183-13-hch@lst.de> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190624055253.31183-1-hch@lst.de> References: <20190624055253.31183-1-hch@lst.de> MIME-Version: 1.0 X-SRS-Rewrite: SMTP reverse-path rewritten from by bombadil.infradead.org. See http://www.infradead.org/rpr.html Sender: linux-xfs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-xfs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Lift the xfs code for tracing address space operations to the iomap layer. Signed-off-by: Christoph Hellwig --- fs/iomap.c | 13 +++++- fs/xfs/xfs_aops.c | 27 ++---------- fs/xfs/xfs_trace.h | 65 ---------------------------- include/trace/events/iomap.h | 82 ++++++++++++++++++++++++++++++++++++ 4 files changed, 97 insertions(+), 90 deletions(-) create mode 100644 include/trace/events/iomap.h diff --git a/fs/iomap.c b/fs/iomap.c index 72a1b622e634..c98107a6bf81 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -23,7 +23,8 @@ #include #include #include - +#define CREATE_TRACE_POINTS +#include #include "internal.h" static struct bio_set iomap_ioend_bioset; @@ -369,6 +370,8 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops) unsigned poff; loff_t ret; + trace_iomap_readpage(page->mapping->host, 1); + for (poff = 0; poff < PAGE_SIZE; poff += ret) { ret = iomap_apply(inode, page_offset(page) + poff, PAGE_SIZE - poff, 0, ops, &ctx, @@ -465,6 +468,8 @@ iomap_readpages(struct address_space *mapping, struct list_head *pages, loff_t last = page_offset(list_entry(pages->next, struct page, lru)); loff_t length = last - pos + PAGE_SIZE, ret = 0; + trace_iomap_readpages(mapping->host, nr_pages); + while (length > 0) { ret = iomap_apply(mapping->host, pos, length, 0, ops, &ctx, iomap_readpages_actor); @@ -531,6 +536,8 @@ EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate); int iomap_releasepage(struct page *page, gfp_t gfp_mask) { + trace_iomap_releasepage(page->mapping->host, page, 0, 0); + /* * mm accommodates an old ext3 case where clean pages might not have had * the dirty bit cleared. Thus, it can send actual dirty pages to @@ -546,6 +553,8 @@ EXPORT_SYMBOL_GPL(iomap_releasepage); void iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len) { + trace_iomap_invalidatepage(page->mapping->host, page, offset, len); + /* * If we are invalidating the entire page, clear the dirty state from it * and release it to avoid unnecessary buildup of the LRU. @@ -2579,6 +2588,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) u64 end_offset; loff_t offset; + trace_iomap_writepage(inode, page, 0, 0); + /* * Refuse to write the page out if we are called from reclaim context. * diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 26b838aea2db..a27ecce31c88 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -440,16 +440,6 @@ xfs_submit_ioend( return status; } -STATIC void -xfs_vm_invalidatepage( - struct page *page, - unsigned int offset, - unsigned int length) -{ - trace_xfs_invalidatepage(page->mapping->host, page, offset, length); - iomap_invalidatepage(page, offset, length); -} - /* * If the page has delalloc blocks on it, we need to punch them out before we * invalidate the page. 
If we don't, we leave a stale delalloc mapping on the @@ -484,7 +474,7 @@ xfs_discard_page( if (error && !XFS_FORCED_SHUTDOWN(mp)) xfs_alert(mp, "page discard unable to remove delalloc mapping."); out_invalidate: - xfs_vm_invalidatepage(page, 0, PAGE_SIZE); + iomap_invalidatepage(page, 0, PAGE_SIZE); } static const struct iomap_writeback_ops xfs_writeback_ops = { @@ -524,15 +514,6 @@ xfs_dax_writepages( xfs_find_bdev_for_inode(mapping->host), wbc); } -STATIC int -xfs_vm_releasepage( - struct page *page, - gfp_t gfp_mask) -{ - trace_xfs_releasepage(page->mapping->host, page, 0, 0); - return iomap_releasepage(page, gfp_mask); -} - STATIC sector_t xfs_vm_bmap( struct address_space *mapping, @@ -561,7 +542,6 @@ xfs_vm_readpage( struct file *unused, struct page *page) { - trace_xfs_vm_readpage(page->mapping->host, 1); return iomap_readpage(page, &xfs_iomap_ops); } @@ -572,7 +552,6 @@ xfs_vm_readpages( struct list_head *pages, unsigned nr_pages) { - trace_xfs_vm_readpages(mapping->host, nr_pages); return iomap_readpages(mapping, pages, nr_pages, &xfs_iomap_ops); } @@ -592,8 +571,8 @@ const struct address_space_operations xfs_address_space_operations = { .writepage = xfs_vm_writepage, .writepages = xfs_vm_writepages, .set_page_dirty = iomap_set_page_dirty, - .releasepage = xfs_vm_releasepage, - .invalidatepage = xfs_vm_invalidatepage, + .releasepage = iomap_releasepage, + .invalidatepage = iomap_invalidatepage, .bmap = xfs_vm_bmap, .direct_IO = noop_direct_IO, .migratepage = iomap_migrate_page, diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h index 2464ea351f83..051bd7d4769a 100644 --- a/fs/xfs/xfs_trace.h +++ b/fs/xfs/xfs_trace.h @@ -1153,71 +1153,6 @@ DEFINE_RW_EVENT(xfs_file_buffered_write); DEFINE_RW_EVENT(xfs_file_direct_write); DEFINE_RW_EVENT(xfs_file_dax_write); -DECLARE_EVENT_CLASS(xfs_page_class, - TP_PROTO(struct inode *inode, struct page *page, unsigned long off, - unsigned int len), - TP_ARGS(inode, page, off, len), - TP_STRUCT__entry( - __field(dev_t, dev) - __field(xfs_ino_t, ino) - __field(pgoff_t, pgoff) - __field(loff_t, size) - __field(unsigned long, offset) - __field(unsigned int, length) - ), - TP_fast_assign( - __entry->dev = inode->i_sb->s_dev; - __entry->ino = XFS_I(inode)->i_ino; - __entry->pgoff = page_offset(page); - __entry->size = i_size_read(inode); - __entry->offset = off; - __entry->length = len; - ), - TP_printk("dev %d:%d ino 0x%llx pgoff 0x%lx size 0x%llx offset %lx " - "length %x", - MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->ino, - __entry->pgoff, - __entry->size, - __entry->offset, - __entry->length) -) - -#define DEFINE_PAGE_EVENT(name) \ -DEFINE_EVENT(xfs_page_class, name, \ - TP_PROTO(struct inode *inode, struct page *page, unsigned long off, \ - unsigned int len), \ - TP_ARGS(inode, page, off, len)) -DEFINE_PAGE_EVENT(xfs_writepage); -DEFINE_PAGE_EVENT(xfs_releasepage); -DEFINE_PAGE_EVENT(xfs_invalidatepage); - -DECLARE_EVENT_CLASS(xfs_readpage_class, - TP_PROTO(struct inode *inode, int nr_pages), - TP_ARGS(inode, nr_pages), - TP_STRUCT__entry( - __field(dev_t, dev) - __field(xfs_ino_t, ino) - __field(int, nr_pages) - ), - TP_fast_assign( - __entry->dev = inode->i_sb->s_dev; - __entry->ino = inode->i_ino; - __entry->nr_pages = nr_pages; - ), - TP_printk("dev %d:%d ino 0x%llx nr_pages %d", - MAJOR(__entry->dev), MINOR(__entry->dev), - __entry->ino, - __entry->nr_pages) -) - -#define DEFINE_READPAGE_EVENT(name) \ -DEFINE_EVENT(xfs_readpage_class, name, \ - TP_PROTO(struct inode *inode, int nr_pages), \ - TP_ARGS(inode, nr_pages)) 
-DEFINE_READPAGE_EVENT(xfs_vm_readpage); -DEFINE_READPAGE_EVENT(xfs_vm_readpages); - DECLARE_EVENT_CLASS(xfs_imap_class, TP_PROTO(struct xfs_inode *ip, xfs_off_t offset, ssize_t count, int whichfork, struct xfs_bmbt_irec *irec), diff --git a/include/trace/events/iomap.h b/include/trace/events/iomap.h new file mode 100644 index 000000000000..da50ece663f8 --- /dev/null +++ b/include/trace/events/iomap.h @@ -0,0 +1,82 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (c) 2009-2019, Christoph Hellwig + * All Rights Reserved. + */ +#undef TRACE_SYSTEM +#define TRACE_SYSTEM iomap + +#if !defined(_TRACE_IOMAP_H) || defined(TRACE_HEADER_MULTI_READ) +#define _TRACE_IOMAP_H + +#include <linux/tracepoint.h> + +DECLARE_EVENT_CLASS(iomap_page_class, + TP_PROTO(struct inode *inode, struct page *page, unsigned long off, + unsigned int len), + TP_ARGS(inode, page, off, len), + TP_STRUCT__entry( + __field(dev_t, dev) + __field(u64, ino) + __field(pgoff_t, pgoff) + __field(loff_t, size) + __field(unsigned long, offset) + __field(unsigned int, length) + ), + TP_fast_assign( + __entry->dev = inode->i_sb->s_dev; + __entry->ino = inode->i_ino; + __entry->pgoff = page_offset(page); + __entry->size = i_size_read(inode); + __entry->offset = off; + __entry->length = len; + ), + TP_printk("dev %d:%d ino 0x%llx pgoff 0x%lx size 0x%llx offset %lx " + "length %x", + MAJOR(__entry->dev), MINOR(__entry->dev), + __entry->ino, + __entry->pgoff, + __entry->size, + __entry->offset, + __entry->length) +) + +#define DEFINE_PAGE_EVENT(name) \ +DEFINE_EVENT(iomap_page_class, name, \ + TP_PROTO(struct inode *inode, struct page *page, unsigned long off, \ + unsigned int len), \ + TP_ARGS(inode, page, off, len)) +DEFINE_PAGE_EVENT(iomap_writepage); +DEFINE_PAGE_EVENT(iomap_releasepage); +DEFINE_PAGE_EVENT(iomap_invalidatepage); + +DECLARE_EVENT_CLASS(iomap_readpage_class, + TP_PROTO(struct inode *inode, int nr_pages), + TP_ARGS(inode, nr_pages), + TP_STRUCT__entry( + __field(dev_t, dev) + __field(u64, ino) + __field(int, nr_pages) + ), + TP_fast_assign( + __entry->dev = inode->i_sb->s_dev; + __entry->ino = inode->i_ino; + __entry->nr_pages = nr_pages; + ), + TP_printk("dev %d:%d ino 0x%llx nr_pages %d", + MAJOR(__entry->dev), MINOR(__entry->dev), + __entry->ino, + __entry->nr_pages) +) + +#define DEFINE_READPAGE_EVENT(name) \ +DEFINE_EVENT(iomap_readpage_class, name, \ + TP_PROTO(struct inode *inode, int nr_pages), \ + TP_ARGS(inode, nr_pages)) +DEFINE_READPAGE_EVENT(iomap_readpage); +DEFINE_READPAGE_EVENT(iomap_readpages); + +#endif /* _TRACE_IOMAP_H */ + +/* This part must be outside protection */ +#include <trace/define_trace.h>
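
[For readers new to the tracing machinery: the new header only declares the events. Exactly one compilation unit must define CREATE_TRACE_POINTS before including it to emit the event implementations, which is what the fs/iomap.c hunk in this patch does; every other user just includes the header and calls trace_<event>(). A condensed sketch of the pattern, with an illustrative function body:

/* In exactly one .c file, here fs/iomap.c per the hunk above: */
#define CREATE_TRACE_POINTS
#include <trace/events/iomap.h>

/* Callers then fire the event directly, as the converted code does: */
int iomap_releasepage_sketch(struct page *page, gfp_t gfp_mask)
{
	trace_iomap_releasepage(page->mapping->host, page, 0, 0);
	/* ... actual page release work ... */
	return 1;	/* illustrative return; see the real hunk above */
}

Since TRACE_SYSTEM is set to "iomap", the events should appear with the usual tracefs plumbing, e.g. under /sys/kernel/debug/tracing/events/iomap/.]
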