From patchwork Tue Jul 26 00:35:27 2016
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 9247401
From: "Kirill A. Shutemov"
To: "Theodore Ts'o", Andreas Dilger, Jan Kara
Cc: Alexander Viro, Hugh Dickins, Andrea Arcangeli, Andrew Morton,
 Dave Hansen, Vlastimil Babka, Matthew Wilcox, Ross Zwisler,
 linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-block@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv1, RFC 25/33] ext4: make ext4_mpage_readpages() hugepage-aware
Date: Tue, 26 Jul 2016 03:35:27 +0300
Message-Id: <1469493335-3622-26-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1469493335-3622-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1469493335-3622-1-git-send-email-kirill.shutemov@linux.intel.com>

This patch modifies ext4_mpage_readpages() to deal with huge pages.

We read out 2M at once, so we have to allocate (HPAGE_PMD_NR *
blocks_per_page) sector_t entries for the block map. I'm not entirely
happy with the kmalloc in this codepath, but I don't see any other
option.

Signed-off-by: Kirill A. Shutemov
---
 fs/ext4/readpage.c | 38 ++++++++++++++++++++++++++++++++------
 1 file changed, 32 insertions(+), 6 deletions(-)

diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index a81b829d56de..6d7cbddceeb2 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -104,12 +104,12 @@ int ext4_mpage_readpages(struct address_space *mapping,
 	struct inode *inode = mapping->host;
 	const unsigned blkbits = inode->i_blkbits;
-	const unsigned blocks_per_page = PAGE_SIZE >> blkbits;
 	const unsigned blocksize = 1 << blkbits;
 	sector_t block_in_file;
 	sector_t last_block;
 	sector_t last_block_in_file;
-	sector_t blocks[MAX_BUF_PER_PAGE];
+	sector_t blocks_on_stack[MAX_BUF_PER_PAGE];
+	sector_t *blocks = blocks_on_stack;
 	unsigned page_block;
 	struct block_device *bdev = inode->i_sb->s_bdev;
 	int length;
@@ -122,8 +122,9 @@ int ext4_mpage_readpages(struct address_space *mapping,
 	map.m_flags = 0;

 	for (; nr_pages; nr_pages--) {
-		int fully_mapped = 1;
-		unsigned first_hole = blocks_per_page;
+		int fully_mapped = 1, nr = nr_pages;
+		unsigned blocks_per_page = PAGE_SIZE >> blkbits;
+		unsigned first_hole;

 		prefetchw(&page->flags);
 		if (pages) {
@@ -138,10 +139,31 @@ int ext4_mpage_readpages(struct address_space *mapping,
 			goto confused;

 		block_in_file = (sector_t)page->index << (PAGE_SHIFT - blkbits);
-		last_block = block_in_file + nr_pages * blocks_per_page;
+
+		if (PageTransHuge(page)) {
+			BUILD_BUG_ON(BIO_MAX_PAGES < HPAGE_PMD_NR);
+			nr = HPAGE_PMD_NR * blocks_per_page;
+			/* XXX: need a better solution ? */
+			blocks = kmalloc(sizeof(sector_t) * nr, GFP_NOFS);
+			if (!blocks) {
+				if (pages) {
+					delete_from_page_cache(page);
+					goto next_page;
+				}
+				return -ENOMEM;
+			}
+
+			blocks_per_page *= HPAGE_PMD_NR;
+			last_block = block_in_file + blocks_per_page;
+		} else {
+			blocks = blocks_on_stack;
+			last_block = block_in_file + nr * blocks_per_page;
+		}
+
 		last_block_in_file = (i_size_read(inode) + blocksize - 1) >> blkbits;
 		if (last_block > last_block_in_file)
 			last_block = last_block_in_file;
+		first_hole = blocks_per_page;
 		page_block = 0;

 		/*
@@ -213,6 +235,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
 			}
 		}
 		if (first_hole != blocks_per_page) {
+			if (PageTransHuge(page))
+				goto confused;
 			zero_user_segment(page, first_hole << blkbits,
 					  PAGE_SIZE);
 			if (first_hole == 0) {
@@ -248,7 +272,7 @@ int ext4_mpage_readpages(struct address_space *mapping,
 				goto set_error_page;
 			}
 			bio = bio_alloc(GFP_KERNEL,
-				min_t(int, nr_pages, BIO_MAX_PAGES));
+				min_t(int, nr, BIO_MAX_PAGES));
 			if (!bio) {
 				if (ctx)
 					fscrypt_release_ctx(ctx);
@@ -289,5 +313,7 @@ int ext4_mpage_readpages(struct address_space *mapping,
 	BUG_ON(pages && !list_empty(pages));
 	if (bio)
 		submit_bio(bio);
+	if (blocks != blocks_on_stack)
+		kfree(blocks);
 	return 0;
 }
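For reviewers skimming the diff, the stack-or-heap pattern applied to the blocks[] array can be sketched in userspace as follows. This is a minimal illustration under stated assumptions, not kernel code: the constant values, the malloc()/free() stand-ins for kmalloc(GFP_NOFS)/kfree(), and the map_blocks()/put_blocks() helper names are all hypothetical.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the kernel constants. */
#define MAX_BUF_PER_PAGE 8	/* e.g. 512-byte blocks in a 4K page */
#define HPAGE_PMD_NR     512	/* 4K pages per 2M huge page on x86-64 */

typedef uint64_t sector_t;

/*
 * Common case: use the caller's small stack buffer. Huge-page case:
 * heap-allocate HPAGE_PMD_NR times as many entries, mirroring the
 * kmalloc(sizeof(sector_t) * nr, GFP_NOFS) in the patch.
 * Returns the buffer to use (zeroed), or NULL on allocation failure.
 */
sector_t *map_blocks(sector_t *stack_buf, int huge,
		     unsigned blocks_per_page, unsigned *out_nr)
{
	unsigned nr = huge ? HPAGE_PMD_NR * blocks_per_page
			   : blocks_per_page;
	sector_t *blocks = stack_buf;

	if (huge) {
		blocks = malloc(sizeof(sector_t) * nr);
		if (!blocks)
			return NULL;	/* patch: -ENOMEM / drop the page */
	}
	memset(blocks, 0, sizeof(sector_t) * nr);
	*out_nr = nr;
	return blocks;
}

/* Free only if we left the stack buffer, as the patch's exit path does. */
void put_blocks(sector_t *blocks, sector_t *stack_buf)
{
	if (blocks != stack_buf)
		free(blocks);
}
```

The comparison against the stack buffer (rather than a separate flag) is the same trick the patch uses with `blocks != blocks_on_stack` before `kfree()`, so the cleanup stays correct on every path.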