From patchwork Mon Jan 30 20:26:30 2017
X-Patchwork-Submitter: Liu Bo
X-Patchwork-Id: 9546085
From: Liu Bo
To: linux-btrfs@vger.kernel.org
Cc: David Sterba
Subject: [PATCH] Btrfs: remove duplicated find_get_pages_contig
Date: Mon, 30 Jan 2017 12:26:30 -0800
Message-Id: <1485807990-17973-1-git-send-email-bo.li.liu@oracle.com>

This creates a helper, __process_pages_contig(), which walks a contiguous
range of pages and manipulates their page bits according to a page_ops
bitmask, so that __unlock_for_delalloc(), lock_delalloc_pages() and
extent_clear_unlock_delalloc() no longer duplicate the same
find_get_pages_contig() loop.

Signed-off-by: Liu Bo
---
 fs/btrfs/extent_io.c | 202 ++++++++++++++++++++++++---------------------------
 fs/btrfs/extent_io.h |   3 +-
 2 files changed, 98 insertions(+), 107 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d5f3edb..136fe96 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1549,94 +1549,122 @@ static noinline u64 find_delalloc_range(struct extent_io_tree *tree,
         return found;
 }
 
-static noinline void __unlock_for_delalloc(struct inode *inode,
-                                           struct page *locked_page,
-                                           u64 start, u64 end)
+/*
+ * index_ret: records where we stopped
+ * This helper only returns an error when page_ops has PAGE_LOCK.
+ */
+static int
+__process_pages_contig(struct address_space *mapping, struct page *locked_page,
+                       pgoff_t start_index, pgoff_t end_index,
+                       unsigned long page_ops, pgoff_t *index_ret)
 {
-        int ret;
+        unsigned long nr_pages = end_index - start_index + 1;
+        pgoff_t index = start_index;
         struct page *pages[16];
-        unsigned long index = start >> PAGE_SHIFT;
-        unsigned long end_index = end >> PAGE_SHIFT;
-        unsigned long nr_pages = end_index - index + 1;
+        unsigned pages_locked = 0;
+        unsigned ret;
+        int err = 0;
         int i;
 
-        if (index == locked_page->index && end_index == index)
-                return;
+        /*
+         * Do NOT skip locked_page since we may need to set PagePrivate2 on it.
+         */
 
-        while (nr_pages > 0) {
-                ret = find_get_pages_contig(inode->i_mapping, index,
-                                     min_t(unsigned long, nr_pages,
-                                     ARRAY_SIZE(pages)), pages);
-                for (i = 0; i < ret; i++) {
-                        if (pages[i] != locked_page)
-                                unlock_page(pages[i]);
-                        put_page(pages[i]);
-                }
-                nr_pages -= ret;
-                index += ret;
-                cond_resched();
+        /* PAGE_LOCK should not be mixed with other ops. */
+        if (page_ops & PAGE_LOCK) {
+                ASSERT(page_ops == PAGE_LOCK);
+                ASSERT(index_ret);
+                ASSERT(*index_ret == start_index);
         }
-}
 
-static noinline int lock_delalloc_pages(struct inode *inode,
-                                        struct page *locked_page,
-                                        u64 delalloc_start,
-                                        u64 delalloc_end)
-{
-        unsigned long index = delalloc_start >> PAGE_SHIFT;
-        unsigned long start_index = index;
-        unsigned long end_index = delalloc_end >> PAGE_SHIFT;
-        unsigned long pages_locked = 0;
-        struct page *pages[16];
-        unsigned long nrpages;
-        int ret;
-        int i;
-
-        /* the caller is responsible for locking the start index */
-        if (index == locked_page->index && index == end_index)
-                return 0;
+        if ((page_ops & PAGE_SET_ERROR) && nr_pages > 0)
+                mapping_set_error(mapping, -EIO);
 
-        /* skip the page at the start index */
-        nrpages = end_index - index + 1;
-        while (nrpages > 0) {
-                ret = find_get_pages_contig(inode->i_mapping, index,
+        while (nr_pages > 0) {
+                ret = find_get_pages_contig(mapping, index,
                                      min_t(unsigned long,
-                                     nrpages, ARRAY_SIZE(pages)), pages);
+                                     nr_pages, ARRAY_SIZE(pages)), pages);
                 if (ret == 0) {
-                        ret = -EAGAIN;
-                        goto done;
-                }
-                /* now we have an array of pages, lock them all */
-                for (i = 0; i < ret; i++) {
                         /*
-                         * the caller is taking responsibility for
-                         * locked_page
+                         * Only if we're going to lock these pages can we
+                         * find nothing at @index.
                         */
+                        ASSERT(page_ops & PAGE_LOCK);
+                        goto out;
+                }
+
+                for (i = 0; i < ret; i++) {
+                        if (page_ops & PAGE_SET_PRIVATE2)
+                                SetPagePrivate2(pages[i]);
+
                         if (pages[i] != locked_page) {
-                                lock_page(pages[i]);
-                                if (!PageDirty(pages[i]) ||
-                                    pages[i]->mapping != inode->i_mapping) {
-                                        ret = -EAGAIN;
+                                if (page_ops & PAGE_CLEAR_DIRTY)
+                                        clear_page_dirty_for_io(pages[i]);
+                                if (page_ops & PAGE_SET_WRITEBACK)
+                                        set_page_writeback(pages[i]);
+                                if (page_ops & PAGE_SET_ERROR)
+                                        SetPageError(pages[i]);
+                                if (page_ops & PAGE_END_WRITEBACK)
+                                        end_page_writeback(pages[i]);
+                                if (page_ops & PAGE_UNLOCK)
                                         unlock_page(pages[i]);
-                                        put_page(pages[i]);
-                                        goto done;
+                                if (page_ops & PAGE_LOCK) {
+                                        lock_page(pages[i]);
+                                        if (!PageDirty(pages[i]) ||
+                                            pages[i]->mapping != mapping) {
+                                                unlock_page(pages[i]);
+                                                put_page(pages[i]);
+                                                err = -EAGAIN;
+                                                goto out;
+                                        }
                                 }
                         }
                         put_page(pages[i]);
                         pages_locked++;
                 }
-                nrpages -= ret;
+
+                nr_pages -= ret;
                 index += ret;
                 cond_resched();
         }
-        ret = 0;
-done:
-        if (ret && pages_locked) {
-                __unlock_for_delalloc(inode, locked_page,
-                              delalloc_start,
-                              ((u64)(start_index + pages_locked - 1)) <<
-                              PAGE_SHIFT);
+out:
+        if (err && index_ret) {
+                *index_ret = start_index + pages_locked - 1;
+        }
+        return err;
+}
+
+static noinline void __unlock_for_delalloc(struct inode *inode,
+                                           struct page *locked_page,
+                                           u64 start, u64 end)
+{
+        unsigned long page_ops = PAGE_UNLOCK;
+
+        ASSERT(locked_page);
+        __process_pages_contig(inode->i_mapping, locked_page,
+                               start >> PAGE_SHIFT, end >> PAGE_SHIFT,
+                               page_ops, NULL);
+}
+
+static noinline int lock_delalloc_pages(struct inode *inode,
+                                        struct page *locked_page,
+                                        u64 delalloc_start,
+                                        u64 delalloc_end)
+{
+        pgoff_t index = delalloc_start >> PAGE_SHIFT;
+        pgoff_t start_index = index;
+        pgoff_t end_index = delalloc_end >> PAGE_SHIFT;
+        unsigned long page_ops = PAGE_LOCK;
+        int ret = 0;
+
+        ASSERT(locked_page);
+
+        ret = __process_pages_contig(inode->i_mapping, locked_page, start_index,
+                                     end_index, page_ops, &index);
+        if (ret == -EAGAIN) {
+                __unlock_for_delalloc(inode, locked_page, delalloc_start,
+                                      ((u64)index) << PAGE_SHIFT);
         }
+
         return ret;
 }
 
@@ -1732,49 +1760,11 @@ void extent_clear_unlock_delalloc(struct inode *inode, u64 start, u64 end,
                                  unsigned long page_ops)
 {
         struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
-        int ret;
-        struct page *pages[16];
-        unsigned long index = start >> PAGE_SHIFT;
-        unsigned long end_index = end >> PAGE_SHIFT;
-        unsigned long nr_pages = end_index - index + 1;
-        int i;
 
         clear_extent_bit(tree, start, end, clear_bits, 1, 0, NULL, GFP_NOFS);
-        if (page_ops == 0)
-                return;
-
-        if ((page_ops & PAGE_SET_ERROR) && nr_pages > 0)
-                mapping_set_error(inode->i_mapping, -EIO);
-
-        while (nr_pages > 0) {
-                ret = find_get_pages_contig(inode->i_mapping, index,
-                                     min_t(unsigned long,
-                                     nr_pages, ARRAY_SIZE(pages)), pages);
-                for (i = 0; i < ret; i++) {
-
-                        if (page_ops & PAGE_SET_PRIVATE2)
-                                SetPagePrivate2(pages[i]);
-
-                        if (pages[i] == locked_page) {
-                                put_page(pages[i]);
-                                continue;
-                        }
-                        if (page_ops & PAGE_CLEAR_DIRTY)
-                                clear_page_dirty_for_io(pages[i]);
-                        if (page_ops & PAGE_SET_WRITEBACK)
-                                set_page_writeback(pages[i]);
-                        if (page_ops & PAGE_SET_ERROR)
-                                SetPageError(pages[i]);
-                        if (page_ops & PAGE_END_WRITEBACK)
-                                end_page_writeback(pages[i]);
-                        if (page_ops & PAGE_UNLOCK)
-                                unlock_page(pages[i]);
-                        put_page(pages[i]);
-                }
-                nr_pages -= ret;
-                index += ret;
-                cond_resched();
-        }
+        __process_pages_contig(inode->i_mapping, locked_page,
+                               start >> PAGE_SHIFT, end >> PAGE_SHIFT,
+                               page_ops, NULL);
 }
 
 /*
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 17f9ce4..4551a5b 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -45,13 +45,14 @@
 #define EXTENT_BUFFER_IN_TREE 10
 #define EXTENT_BUFFER_WRITE_ERR 11    /* write IO error */
 
-/* these are flags for extent_clear_unlock_delalloc */
+/* these are flags for __process_pages_contig */
 #define PAGE_UNLOCK           (1 << 0)
 #define PAGE_CLEAR_DIRTY      (1 << 1)
 #define PAGE_SET_WRITEBACK    (1 << 2)
 #define PAGE_END_WRITEBACK    (1 << 3)
 #define PAGE_SET_PRIVATE2     (1 << 4)
 #define PAGE_SET_ERROR        (1 << 5)
+#define PAGE_LOCK             (1 << 6)
 
 /*
  * page->private values. Every page that is controlled by the extent
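
Not part of the patch, just an editor's illustration: the refactor hinges on a
single walker whose behaviour is chosen by a page_ops bitmask, and PAGE_LOCK is
the only mode that can fail, reporting partial progress through *index_ret so
lock_delalloc_pages() can unwind with __unlock_for_delalloc() on -EAGAIN. The
stand-alone userspace sketch below only models that control flow; every toy_*
name and TOY_* flag is invented for the example and is not kernel API.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* toy stand-ins for two of the PAGE_* op flags in extent_io.h */
#define TOY_UNLOCK      (1 << 0)
#define TOY_LOCK        (1 << 6)

struct toy_page {
        bool dirty;
        bool locked;
};

/*
 * One loop serves every caller: the bitmask decides what happens to each
 * page. With TOY_LOCK, a clean page aborts the walk and *index_ret reports
 * the last index handled, mirroring how __process_pages_contig() lets its
 * caller undo partial progress.
 */
static int toy_process_contig(struct toy_page *pages,
                              unsigned long start_index,
                              unsigned long end_index,
                              unsigned long ops, unsigned long *index_ret)
{
        unsigned long processed = 0;
        unsigned long i;

        for (i = start_index; i <= end_index; i++) {
                struct toy_page *p = &pages[i];

                if (ops & TOY_UNLOCK)
                        p->locked = false;

                if (ops & TOY_LOCK) {
                        if (!p->dirty) {
                                if (index_ret)
                                        *index_ret = start_index + processed - 1;
                                return -EAGAIN;
                        }
                        p->locked = true;
                }
                processed++;
        }
        return 0;
}

int main(void)
{
        struct toy_page pages[4] = {
                [0] = { .dirty = true },
                [1] = { .dirty = true },
                [2] = { .dirty = false },       /* triggers the -EAGAIN path */
                [3] = { .dirty = true },
        };
        unsigned long last = 0;

        if (toy_process_contig(pages, 0, 3, TOY_LOCK, &last) == -EAGAIN)
                printf("locked up to index %lu, rolling back\n", last);

        /* the unlock path reuses the same walker with a different bitmask */
        toy_process_contig(pages, 0, last, TOY_UNLOCK, NULL);
        return 0;
}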