From patchwork Thu Mar 10 23:55:31 2016
X-Patchwork-Submitter: "Wilcox, Matthew R"
X-Patchwork-Id: 8560571
From: Matthew Wilcox
To: Andrew Morton
Subject: [PATCH v5 14/14] dax: Use vmf->pgoff in fault handlers
Date: Thu, 10 Mar 2016 18:55:31 -0500
Message-Id: <1457654131-4562-15-git-send-email-matthew.r.wilcox@intel.com>
X-Mailer: git-send-email 2.7.0
In-Reply-To: <1457654131-4562-1-git-send-email-matthew.r.wilcox@intel.com>
References: <1457654131-4562-1-git-send-email-matthew.r.wilcox@intel.com>
Cc: linux-nvdimm@lists.01.org, x86@kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Matthew Wilcox, linux-fsdevel@vger.kernel.org

Now that the PMD and PUD fault handlers are passed pgoff, they no longer
need to calculate it themselves.
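For reference, the calculation being dropped below is the one done by
linear_page_index().  A minimal sketch of that helper (simplified from
include/linux/pagemap.h, with the hugetlb special case omitted; the name
sketch_linear_page_index is only illustrative):

	/* File page offset of an address within a VMA. */
	static inline pgoff_t sketch_linear_page_index(struct vm_area_struct *vma,
						       unsigned long address)
	{
		pgoff_t pgoff = (address - vma->vm_start) >> PAGE_SHIFT;

		pgoff += vma->vm_pgoff;	/* the VMA's offset into the file */
		return pgoff;
	}

Assuming, per the earlier patches in this series, that the caller already
fills vmf->pgoff with this offset before invoking the handler, repeating
the computation on pmd_addr/pud_addr inside dax_pmd_fault() and
dax_pud_fault() is redundant.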
Signed-off-by: Matthew Wilcox
---
 fs/dax.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index c5d87be..5db3841 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -736,7 +736,7 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	unsigned long pmd_addr = address & PMD_MASK;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	struct block_device *bdev;
-	pgoff_t size, pgoff;
+	pgoff_t size;
 	sector_t block;
 	int error, result = 0;
 	bool alloc = false;
@@ -761,12 +761,11 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 		return VM_FAULT_FALLBACK;
 	}
 
-	pgoff = linear_page_index(vma, pmd_addr);
 	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (pgoff >= size)
+	if (vmf->pgoff >= size)
 		return VM_FAULT_SIGBUS;
 	/* If the PMD would cover blocks out of the file */
-	if ((pgoff | PG_PMD_COLOUR) >= size) {
+	if ((vmf->pgoff | PG_PMD_COLOUR) >= size) {
 		dax_pmd_dbg(NULL, address,
 				"offset + huge page size > file size");
 		return VM_FAULT_FALLBACK;
@@ -774,7 +773,7 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 
 	memset(&bh, 0, sizeof(bh));
 	bh.b_bdev = inode->i_sb->s_bdev;
-	block = (sector_t)pgoff << (PAGE_SHIFT - blkbits);
+	block = (sector_t)vmf->pgoff << (PAGE_SHIFT - blkbits);
 
 	bh.b_size = PMD_SIZE;
 
@@ -804,7 +803,7 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	 * zero pages covering this hole
 	 */
 	if (alloc) {
-		loff_t lstart = pgoff << PAGE_SHIFT;
+		loff_t lstart = vmf->pgoff << PAGE_SHIFT;
 		loff_t lend = lstart + PMD_SIZE - 1; /* inclusive */
 
 		truncate_pagecache_range(inode, lstart, lend);
@@ -890,8 +889,8 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 		 * the write to insert a dirty entry.
 		 */
 		if (write) {
-			error = dax_radix_entry(mapping, pgoff, dax.sector,
-					true, true);
+			error = dax_radix_entry(mapping, vmf->pgoff,
+					dax.sector, true, true);
 			if (error) {
 				dax_pmd_dbg(&bh, address,
 						"PMD radix insertion failed");
@@ -942,7 +941,7 @@ static int dax_pud_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	unsigned long pud_addr = address & PUD_MASK;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	struct block_device *bdev;
-	pgoff_t size, pgoff;
+	pgoff_t size;
 	sector_t block;
 	int result = 0;
 	bool alloc = false;
@@ -967,12 +966,11 @@ static int dax_pud_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 		return VM_FAULT_FALLBACK;
 	}
 
-	pgoff = linear_page_index(vma, pud_addr);
 	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (pgoff >= size)
+	if (vmf->pgoff >= size)
 		return VM_FAULT_SIGBUS;
 	/* If the PUD would cover blocks out of the file */
-	if ((pgoff | PG_PUD_COLOUR) >= size) {
+	if ((vmf->pgoff | PG_PUD_COLOUR) >= size) {
 		dax_pud_dbg(NULL, address,
 				"offset + huge page size > file size");
 		return VM_FAULT_FALLBACK;
@@ -980,7 +978,7 @@ static int dax_pud_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 
 	memset(&bh, 0, sizeof(bh));
 	bh.b_bdev = inode->i_sb->s_bdev;
-	block = (sector_t)pgoff << (PAGE_SHIFT - blkbits);
+	block = (sector_t)vmf->pgoff << (PAGE_SHIFT - blkbits);
 
 	bh.b_size = PUD_SIZE;
 
@@ -1010,7 +1008,7 @@ static int dax_pud_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	 * zero pages covering this hole
 	 */
 	if (alloc) {
-		loff_t lstart = pgoff << PAGE_SHIFT;
+		loff_t lstart = vmf->pgoff << PAGE_SHIFT;
 		loff_t lend = lstart + PUD_SIZE - 1; /* inclusive */
 
 		truncate_pagecache_range(inode, lstart, lend);