From patchwork Wed Jun 14 17:22:10 2017
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 9787053
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: Andrew Morton, linux-kernel@vger.kernel.org
Cc: Ross Zwisler, "Darrick J. Wong", "Theodore Ts'o", Alexander Viro,
    Andreas Dilger, Christoph Hellwig, Dan Williams, Dave Hansen,
    Ingo Molnar, Jan Kara, Jonathan Corbet, Matthew Wilcox,
    Steven Rostedt, linux-doc@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-nvdimm@lists.01.org, linux-xfs@vger.kernel.org
Subject: [PATCH v2 2/3] dax: relocate dax_load_hole()
Date: Wed, 14 Jun 2017 11:22:10 -0600
Message-Id: <20170614172211.19820-3-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.9.4
In-Reply-To: <20170614172211.19820-1-ross.zwisler@linux.intel.com>
References: <20170614172211.19820-1-ross.zwisler@linux.intel.com>

dax_load_hole() will soon need to call dax_insert_mapping_entry(), so it
needs to be moved lower in dax.c so that the definition of
dax_insert_mapping_entry() already exists at the call site.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 fs/dax.c | 88 ++++++++++++++++++++++++++++++++--------------------------------
 1 file changed, 44 insertions(+), 44 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 2a6889b..66e0e93 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -469,50 +469,6 @@ int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 	return __dax_invalidate_mapping_entry(mapping, index, false);
 }
 
-/*
- * The user has performed a load from a hole in the file. Allocating
- * a new page in the file would cause excessive storage usage for
- * workloads with sparse files. We allocate a page cache page instead.
- * We'll kick it out of the page cache if it's ever written to,
- * otherwise it will simply fall out of the page cache under memory
- * pressure without ever having been dirtied.
- */
-static int dax_load_hole(struct address_space *mapping, void **entry,
-			 struct vm_fault *vmf)
-{
-	struct inode *inode = mapping->host;
-	struct page *page;
-	int ret;
-
-	/* Hole page already exists? Return it... */
-	if (!radix_tree_exceptional_entry(*entry)) {
-		page = *entry;
-		goto finish_fault;
-	}
-
-	/* This will replace locked radix tree entry with a hole page */
-	page = find_or_create_page(mapping, vmf->pgoff,
-				   vmf->gfp_mask | __GFP_ZERO);
-	if (!page) {
-		ret = VM_FAULT_OOM;
-		goto out;
-	}
-
-finish_fault:
-	vmf->page = page;
-	ret = finish_fault(vmf);
-	vmf->page = NULL;
-	*entry = page;
-	if (!ret) {
-		/* Grab reference for PTE that is now referencing the page */
-		get_page(page);
-		ret = VM_FAULT_NOPAGE;
-	}
-out:
-	trace_dax_load_hole(inode, vmf, ret);
-	return ret;
-}
-
 static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
 			 sector_t sector, size_t size, struct page *to,
 			 unsigned long vaddr)
@@ -936,6 +892,50 @@ int dax_pfn_mkwrite(struct vm_fault *vmf)
 }
 EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);
 
+/*
+ * The user has performed a load from a hole in the file. Allocating
+ * a new page in the file would cause excessive storage usage for
+ * workloads with sparse files. We allocate a page cache page instead.
+ * We'll kick it out of the page cache if it's ever written to,
+ * otherwise it will simply fall out of the page cache under memory
+ * pressure without ever having been dirtied.
+ */
+static int dax_load_hole(struct address_space *mapping, void **entry,
+			 struct vm_fault *vmf)
+{
+	struct inode *inode = mapping->host;
+	struct page *page;
+	int ret;
+
+	/* Hole page already exists? Return it... */
+	if (!radix_tree_exceptional_entry(*entry)) {
+		page = *entry;
+		goto finish_fault;
+	}
+
+	/* This will replace locked radix tree entry with a hole page */
+	page = find_or_create_page(mapping, vmf->pgoff,
+				   vmf->gfp_mask | __GFP_ZERO);
+	if (!page) {
+		ret = VM_FAULT_OOM;
+		goto out;
+	}
+
+finish_fault:
+	vmf->page = page;
+	ret = finish_fault(vmf);
+	vmf->page = NULL;
+	*entry = page;
+	if (!ret) {
+		/* Grab reference for PTE that is now referencing the page */
+		get_page(page);
+		ret = VM_FAULT_NOPAGE;
+	}
+out:
+	trace_dax_load_hole(inode, vmf, ret);
+	return ret;
+}
+
 static bool dax_range_is_aligned(struct block_device *bdev,
 				 unsigned int offset, unsigned int length)
 {
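
For readers wondering why a pure code move (rather than a new prototype)
is the fix here: a static C function only has to be visible, either as a
definition or as a forward declaration, before its first call site.
dax_insert_mapping_entry() is defined below dax_load_hole()'s old position
in dax.c, so relocating dax_load_hole() after it takes the "define the
callee first" option. The standalone sketch below illustrates that rule;
it is not part of the patch, and callee()/caller() are hypothetical names.

/* forward_decl_demo.c -- standalone sketch, not part of the patch.
 * A static function must be declared or defined before it is called;
 * the patch moves the caller down in the file instead of adding a
 * prototype. callee() and caller() are hypothetical stand-ins.
 */
#include <stdio.h>

static int callee(int x);	/* the alternative fix: a forward declaration */

static int caller(int x)
{
	/* Without the declaration above (or without defining callee()
	 * earlier in the file), modern compilers reject this call as an
	 * implicit function declaration. */
	return callee(x) + 1;
}

static int callee(int x)
{
	return x * 2;
}

int main(void)
{
	printf("%d\n", caller(20));	/* prints 41 */
	return 0;
}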