From patchwork Thu Sep 15 11:55:08 2016
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 9333441
From: "Kirill A. Shutemov"
To: "Theodore Ts'o", Andreas Dilger, Jan Kara, Andrew Morton
Cc: Alexander Viro, Hugh Dickins, Andrea Arcangeli, Dave Hansen,
    Vlastimil Babka, Matthew Wilcox, Ross Zwisler,
    linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-block@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv3 26/41] truncate: make truncate_inode_pages_range() aware about huge pages
Date: Thu, 15 Sep 2016 14:55:08 +0300
Message-Id: <20160915115523.29737-27-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20160915115523.29737-1-kirill.shutemov@linux.intel.com>
References: <20160915115523.29737-1-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-block@vger.kernel.org

As with shmem_undo_range(), truncate_inode_pages_range() removes a huge
page if it lies fully within the truncated range. A partial truncate of a
huge page zeroes out the affected part of the THP instead. Unlike with
shmem, this doesn't prevent us from having holes in the middle of a huge
page: we can still skip writeback of untouched buffers.

With memory-mapped I/O we would lose holes in some cases when a THP sits
in the page cache, since we cannot track accesses at the 4k level in that
case.

Signed-off-by: Kirill A. Shutemov
---
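Not part of the patch itself: the whole-THP vs. partial-THP decision
described above can be modelled in plain userspace C. A minimal sketch,
assuming the entire range is backed by 2M THPs; the sample offsets and
printf() reporting are illustrative stand-ins, not kernel code:

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define HPAGE_PMD_NR	512UL	/* 4k subpages per 2M THP */

int main(void)
{
	/* Sample range: starts 5 subpages into THP 3, ends at THP 7. */
	unsigned long lstart = (3 * HPAGE_PMD_NR + 5) * PAGE_SIZE;
	unsigned long lend = 7 * HPAGE_PMD_NR * PAGE_SIZE - 1;

	/* Same rounding truncate_inode_pages_range() applies. */
	unsigned long start = (lstart + PAGE_SIZE - 1) / PAGE_SIZE;
	unsigned long end = (lend + 1) / PAGE_SIZE;
	unsigned long head;

	for (head = start & ~(HPAGE_PMD_NR - 1); head < end;
	     head += HPAGE_PMD_NR) {
		unsigned long first = head > start ? head : start;
		unsigned long last = head + HPAGE_PMD_NR < end ?
				     head + HPAGE_PMD_NR : end;

		if (first == head && last == head + HPAGE_PMD_NR)
			printf("THP @%6lu: fully in range, truncate whole page\n",
			       head);
		else
			printf("THP @%6lu: partial, zero subpages %lu..%lu\n",
			       head, first, last - 1);
	}
	return 0;
}

A fully covered THP takes the truncate_inode_page() path and the loop skips
the remaining HPAGE_PMD_NR - 1 entries; a partially covered one only has its
in-range subpages cleared, which is why holes inside a huge page remain
possible.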
 fs/buffer.c   |  2 +-
 mm/truncate.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 88 insertions(+), 9 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index e53808e790e2..20898b051044 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1534,7 +1534,7 @@ void block_invalidatepage(struct page *page, unsigned int offset,
 	/*
 	 * Check for overflow
 	 */
-	BUG_ON(stop > PAGE_SIZE || stop < length);
+	BUG_ON(stop > hpage_size(page) || stop < length);
 
 	head = page_buffers(page);
 	bh = head;
diff --git a/mm/truncate.c b/mm/truncate.c
index ce904e4b1708..9c339e6255f2 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -90,7 +90,7 @@ void do_invalidatepage(struct page *page, unsigned int offset,
 {
 	void (*invalidatepage)(struct page *, unsigned int, unsigned int);
 
-	invalidatepage = page->mapping->a_ops->invalidatepage;
+	invalidatepage = page_mapping(page)->a_ops->invalidatepage;
 #ifdef CONFIG_BLOCK
 	if (!invalidatepage)
 		invalidatepage = block_invalidatepage;
@@ -116,7 +116,7 @@ truncate_complete_page(struct address_space *mapping, struct page *page)
 		return -EIO;
 
 	if (page_has_private(page))
-		do_invalidatepage(page, 0, PAGE_SIZE);
+		do_invalidatepage(page, 0, hpage_size(page));
 
 	/*
 	 * Some filesystems seem to re-dirty the page even after
@@ -288,6 +288,36 @@ void truncate_inode_pages_range(struct address_space *mapping,
 				unlock_page(page);
 				continue;
 			}
+
+			if (PageTransTail(page)) {
+				/* Middle of THP: zero out the page */
+				clear_highpage(page);
+				if (page_has_private(page)) {
+					int off = page - compound_head(page);
+					do_invalidatepage(compound_head(page),
+							off * PAGE_SIZE,
+							PAGE_SIZE);
+				}
+				unlock_page(page);
+				continue;
+			} else if (PageTransHuge(page)) {
+				if (index == round_down(end, HPAGE_PMD_NR)) {
+					/*
+					 * Range ends in the middle of THP:
+					 * zero out the page
+					 */
+					clear_highpage(page);
+					if (page_has_private(page)) {
+						do_invalidatepage(page, 0,
+								PAGE_SIZE);
+					}
+					unlock_page(page);
+					continue;
+				}
+				index += HPAGE_PMD_NR - 1;
+				i += HPAGE_PMD_NR - 1;
+			}
+
 			truncate_inode_page(mapping, page);
 			unlock_page(page);
 		}
@@ -309,9 +339,12 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			wait_on_page_writeback(page);
 			zero_user_segment(page, partial_start, top);
 			cleancache_invalidate_page(mapping, page);
-			if (page_has_private(page))
-				do_invalidatepage(page, partial_start,
-						  top - partial_start);
+			if (page_has_private(page)) {
+				int off = page - compound_head(page);
+				do_invalidatepage(compound_head(page),
+						off * PAGE_SIZE + partial_start,
+						top - partial_start);
+			}
 			unlock_page(page);
 			put_page(page);
 		}
@@ -322,9 +355,12 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			wait_on_page_writeback(page);
 			zero_user_segment(page, 0, partial_end);
 			cleancache_invalidate_page(mapping, page);
-			if (page_has_private(page))
-				do_invalidatepage(page, 0,
-						  partial_end);
+			if (page_has_private(page)) {
+				int off = page - compound_head(page);
+				do_invalidatepage(compound_head(page),
+						off * PAGE_SIZE,
+						partial_end);
+			}
 			unlock_page(page);
 			put_page(page);
 		}
@@ -373,6 +409,49 @@ void truncate_inode_pages_range(struct address_space *mapping,
 			lock_page(page);
 			WARN_ON(page_to_pgoff(page) != index);
 			wait_on_page_writeback(page);
+
+			if (PageTransTail(page)) {
+				/* Middle of THP: zero out the page */
+				clear_highpage(page);
+				if (page_has_private(page)) {
+					int off = page - compound_head(page);
+					do_invalidatepage(compound_head(page),
+							off * PAGE_SIZE,
+							PAGE_SIZE);
+				}
+				unlock_page(page);
+				/*
+				 * Partial THP truncate due to 'start' in the
+				 * middle of THP: no need to look at these
+				 * pages again on the next !pvec.nr restart.
+				 */
+				if (index != round_down(end, HPAGE_PMD_NR))
+					start++;
+				continue;
+			} else if (PageTransHuge(page)) {
+				if (index == round_down(end, HPAGE_PMD_NR)) {
+					/*
+					 * Range ends in the middle of THP:
+					 * zero out the page
+					 */
+					clear_highpage(page);
+					if (page_has_private(page)) {
+						do_invalidatepage(page, 0,
+								PAGE_SIZE);
+					}
+					unlock_page(page);
+					/*
+					 * Partial THP truncate due to 'end' in
+					 * the middle of THP: no need to look
+					 * at these pages again on restart.
+					 */
+					start++;
+					continue;
+				}
+				index += HPAGE_PMD_NR - 1;
+				i += HPAGE_PMD_NR - 1;
+			}
+
 			truncate_inode_page(mapping, page);
 			unlock_page(page);
 		}
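
One note on the recurring pattern in the hunks above: buffer invalidation is
now expressed relative to the compound head, because the buffers hang off the
head page, so a tail page's byte range has to be offset by its position
within the THP. A minimal sketch of that offset math, with plain integers
standing in for struct page pointers (an assumption for illustration):

#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	/* Subpage 5 of a THP; subpage 0 plays compound_head(page). */
	unsigned long page = 5, head = 0;
	unsigned long partial_start = 100;	/* truncation starts mid-subpage */

	/* Mirrors: int off = page - compound_head(page); */
	unsigned long off = page - head;

	/*
	 * Mirrors the partial_start hunk with top == PAGE_SIZE:
	 * do_invalidatepage(compound_head(page),
	 *                   off * PAGE_SIZE + partial_start,
	 *                   top - partial_start);
	 */
	printf("invalidate bytes [%lu, %lu) of the compound page\n",
	       off * PAGE_SIZE + partial_start, (off + 1) * PAGE_SIZE);
	return 0;
}

This is also why the fs/buffer.c hunk relaxes the overflow check in
block_invalidatepage() from PAGE_SIZE to hpage_size(page): head-relative
ranges can now legitimately extend past the first 4k.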