From patchwork Thu Jan 26 11:58:13 2017
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 9539105
From: "Kirill A. Shutemov"
To: "Theodore Ts'o", Andreas Dilger, Jan Kara, Andrew Morton
Cc: Alexander Viro, Hugh Dickins, Andrea Arcangeli, Dave Hansen,
    Vlastimil Babka, Matthew Wilcox, Ross Zwisler,
    linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-block@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv6 31/37] ext4: handle writeback with huge pages
Date: Thu, 26 Jan 2017 14:58:13 +0300
Message-Id: <20170126115819.58875-32-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170126115819.58875-1-kirill.shutemov@linux.intel.com>
References: <20170126115819.58875-1-kirill.shutemov@linux.intel.com>

Modify mpage_map_and_submit_buffers() and mpage_release_unused_pages()
to deal with huge pages.

This is mostly the result of trial and error; a critical review would be
appreciated.

Signed-off-by: Kirill A. Shutemov
---
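A note for reviewers, not part of the patch: the trickiest piece of the
mpage_release_unused_pages() hunk is the range handed to
block_invalidatepage(). The userspace sketch below mirrors that offset/len
arithmetic with plain integers so it can be checked in isolation. The
function name invalidate_range() and the HPAGE_PMD_NR value (512, i.e. 2M
huge pages on 4K base pages) are illustrative assumptions, and
hpage_nr_pages() is assumed to return 1 for base pages and HPAGE_PMD_NR for
a THP head.

/*
 * Userspace model of the block_invalidatepage() range computed in
 * mpage_release_unused_pages(); illustrative only, not kernel code.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_PMD_NR	512UL	/* assumed: 2M huge page / 4K base page */

static void invalidate_range(unsigned long index,    /* loop cursor (pgoff) */
			     unsigned long end,      /* last pgoff to drop */
			     unsigned long head_idx, /* page->index of the head page */
			     unsigned long nr_pages) /* hpage_nr_pages(page) */
{
	/* offset = (index % hpage_nr_pages(page)) */
	unsigned long offset = index % nr_pages;
	/* len = min_t(unsigned long, end - page->index, hpage_nr_pages(page)) */
	unsigned long len = end - head_idx;

	if (len > nr_pages)
		len = nr_pages;

	printf("block_invalidatepage(page, %lu, %lu)\n",
	       offset << PAGE_SHIFT, len << PAGE_SHIFT);
}

int main(void)
{
	/* Base page: behaves as before, the whole PAGE_SIZE is invalidated. */
	invalidate_range(100, 200, 100, 1);
	/* Huge page at pgoff 512 when the range to drop starts at pgoff 600. */
	invalidate_range(600, 1024, 512, HPAGE_PMD_NR);
	return 0;
}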
 fs/ext4/inode.c | 61 ++++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 43 insertions(+), 18 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index afba41b65a15..409ebd81e436 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1665,20 +1665,32 @@ static void mpage_release_unused_pages(struct mpage_da_data *mpd,
 		if (nr_pages == 0)
 			break;
 		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+			struct page *page = compound_head(pvec.pages[i]);
+
 			if (page->index > end)
 				break;
 			BUG_ON(!PageLocked(page));
 			BUG_ON(PageWriteback(page));
 			if (invalidate) {
+				unsigned long offset, len;
+
+				offset = (index % hpage_nr_pages(page));
+				len = min_t(unsigned long, end - page->index,
+						hpage_nr_pages(page));
+
 				if (page_mapped(page))
					clear_page_dirty_for_io(page);
-				block_invalidatepage(page, 0, PAGE_SIZE);
+				block_invalidatepage(page, offset << PAGE_SHIFT,
+						len << PAGE_SHIFT);
 				ClearPageUptodate(page);
 			}
 			unlock_page(page);
+			if (PageTransHuge(page))
+				break;
 		}
-		index = pvec.pages[nr_pages - 1]->index + 1;
+		index = page_to_pgoff(pvec.pages[nr_pages - 1]) + 1;
+		if (PageTransCompound(pvec.pages[nr_pages - 1]))
+			index = round_up(index, HPAGE_PMD_NR);
 		pagevec_release(&pvec);
 	}
 }
@@ -2112,16 +2124,16 @@ static int mpage_submit_page(struct mpage_da_data *mpd, struct page *page)
 	loff_t size = i_size_read(mpd->inode);
 	int err;
 
-	BUG_ON(page->index != mpd->first_page);
-	if (page->index == size >> PAGE_SHIFT)
-		len = size & ~PAGE_MASK;
-	else
-		len = PAGE_SIZE;
+	page = compound_head(page);
+	len = hpage_size(page);
+	if (page->index + hpage_nr_pages(page) - 1 == size >> PAGE_SHIFT)
+		len = size & ~hpage_mask(page);
+
 	clear_page_dirty_for_io(page);
 	err = ext4_bio_write_page(&mpd->io_submit, page, len, mpd->wbc, false);
 	if (!err)
-		mpd->wbc->nr_to_write--;
-	mpd->first_page++;
+		mpd->wbc->nr_to_write -= hpage_nr_pages(page);
+	mpd->first_page = round_up(mpd->first_page + 1, hpage_nr_pages(page));
 
 	return err;
 }
@@ -2269,12 +2281,16 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 			break;
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
+			unsigned long diff;
 
-			if (page->index > end)
+			if (page_to_pgoff(page) > end)
 				break;
 			/* Up to 'end' pages must be contiguous */
-			BUG_ON(page->index != start);
+			BUG_ON(page_to_pgoff(page) != start);
+			diff = (page - compound_head(page)) << bpp_bits;
 			bh = head = page_buffers(page);
+			while (diff--)
+				bh = bh->b_this_page;
 			do {
 				if (lblk < mpd->map.m_lblk)
 					continue;
@@ -2311,7 +2327,10 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 			 * supports blocksize < pagesize as we will try to
 			 * convert potentially unmapped parts of inode.
 			 */
-			mpd->io_submit.io_end->size += PAGE_SIZE;
+			if (PageTransCompound(page))
+				mpd->io_submit.io_end->size += HPAGE_PMD_SIZE;
+			else
+				mpd->io_submit.io_end->size += PAGE_SIZE;
 			/* Page fully mapped - let IO run! */
 			err = mpage_submit_page(mpd, page);
 			if (err < 0) {
@@ -2319,6 +2338,10 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 				return err;
 			}
 			start++;
+			if (PageTransCompound(page)) {
+				start = round_up(start, HPAGE_PMD_NR);
+				break;
+			}
 		}
 		pagevec_release(&pvec);
 	}
@@ -2555,7 +2578,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 			 * mapping. However, page->index will not change
 			 * because we have a reference on the page.
 			 */
-			if (page->index > end)
+			if (page_to_pgoff(page) > end)
 				goto out;
 
 			/*
@@ -2570,7 +2593,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 				goto out;
 
 			/* If we can't merge this page, we are done. */
-			if (mpd->map.m_len > 0 && mpd->next_page != page->index)
+			if (mpd->map.m_len > 0 && mpd->next_page != page_to_pgoff(page))
 				goto out;
 
 			lock_page(page);
@@ -2584,7 +2607,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 			if (!PageDirty(page) ||
 			    (PageWriteback(page) &&
 			     (mpd->wbc->sync_mode == WB_SYNC_NONE)) ||
-			    unlikely(page->mapping != mapping)) {
+			    unlikely(page_mapping(page) != mapping)) {
 				unlock_page(page);
 				continue;
 			}
@@ -2593,8 +2616,10 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 			BUG_ON(PageWriteback(page));
 
 			if (mpd->map.m_len == 0)
-				mpd->first_page = page->index;
-			mpd->next_page = page->index + 1;
+				mpd->first_page = page_to_pgoff(page);
+			page = compound_head(page);
+			mpd->next_page = round_up(page->index + 1,
+					hpage_nr_pages(page));
 			/* Add all dirty buffers to mpd */
 			lblk = ((ext4_lblk_t)page->index) <<
 				(PAGE_SHIFT - blkbits);
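
Also for reviewers, not part of the patch: the i_size handling in
mpage_submit_page() is easy to get wrong, so here is a plain userspace model
of the length passed to ext4_bio_write_page(). It assumes the hpage_size()
and hpage_mask() helpers (presumably introduced earlier in this series)
reduce to PAGE_SIZE/PAGE_MASK for base pages and to the 2M equivalents for
THP; write_len() and the constants are illustrative only.

/*
 * Userspace model of the write length computed in mpage_submit_page();
 * illustrative only, not kernel code.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define HPAGE_PMD_NR	512UL	/* assumed: 2M huge page / 4K base page */

static unsigned long long write_len(unsigned long index,    /* page->index of head */
				    unsigned long nr_pages,  /* hpage_nr_pages(page) */
				    unsigned long long size) /* i_size_read(inode) */
{
	unsigned long long hsize = (unsigned long long)nr_pages * PAGE_SIZE; /* hpage_size(page) */
	unsigned long long hmask = ~(hsize - 1);                             /* hpage_mask(page) */
	unsigned long long len = hsize;

	/* Only trim when the page's last subpage is the one holding i_size. */
	if (index + nr_pages - 1 == size >> PAGE_SHIFT)
		len = size & ~hmask;

	return len;
}

int main(void)
{
	/* Base page well below i_size: a full PAGE_SIZE is written.     -> 4096 */
	printf("%llu\n", write_len(3, 1, 1UL << 20));
	/* Base page holding i_size: only the bytes up to i_size.        -> 10 */
	printf("%llu\n", write_len(256, 1, (1UL << 20) + 10));
	/* Huge page whose last subpage holds i_size: trimmed to i_size. -> 2093156 */
	printf("%llu\n", write_len(0, HPAGE_PMD_NR, 511ULL * PAGE_SIZE + 100));
	return 0;
}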