From: Matthew Wilcox
To: Andrew Morton
Cc: linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Matthew Wilcox, linux-fsdevel@vger.kernel.org
Subject: [PATCH 4/6] dax: Use PAGE_CACHE_SIZE where appropriate
Date: Sun, 31 Jan 2016 23:19:53 +1100
Message-Id: <1454242795-18038-5-git-send-email-matthew.r.wilcox@intel.com>
In-Reply-To: <1454242795-18038-1-git-send-email-matthew.r.wilcox@intel.com>
References: <1454242795-18038-1-git-send-email-matthew.r.wilcox@intel.com>

We were a little sloppy about using PAGE_SIZE instead of PAGE_CACHE_SIZE.
The important thing to remember is that the VM is giving us a pgoff_t and
asking us to populate that.  If PAGE_CACHE_SIZE were larger than PAGE_SIZE,
then we would not successfully fill in the PTEs for faults that occurred in
the upper portions of PAGE_CACHE_SIZE.  Of course, we actually only fill in
one PTE, so this still doesn't solve the problem.  I have my doubts we will
ever increase PAGE_CACHE_SIZE now that we have map_pages.
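As an illustrative aside (not part of the patch), the unit conversions at
issue can be sketched as below.  The helper names are hypothetical; the
arithmetic simply restates the expressions this patch switches over to the
PAGE_CACHE_* constants, on the assumption that vmf->pgoff is an index in
units of PAGE_CACHE_SIZE.

/*
 * Hypothetical helpers, for illustration only.  The VM hands the fault
 * handler vmf->pgoff, an index in units of PAGE_CACHE_SIZE, so every
 * conversion below shifts by PAGE_CACHE_SHIFT rather than PAGE_SHIFT.
 */

/* Byte offset of the faulting page-cache page within the file. */
static inline loff_t dax_pgoff_to_offset(pgoff_t pgoff)
{
	return (loff_t)pgoff << PAGE_CACHE_SHIFT;
}

/* Block number to hand to get_block(): the byte offset in fs-block units. */
static inline sector_t dax_pgoff_to_block(pgoff_t pgoff, unsigned blkbits)
{
	return (sector_t)pgoff << (PAGE_CACHE_SHIFT - blkbits);
}

/* Number of page-cache pages needed to cover i_size, rounded up. */
static inline pgoff_t dax_size_in_pages(loff_t i_size)
{
	return (i_size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
}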
Signed-off-by: Matthew Wilcox
---
 fs/dax.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index d0e1334..f0c204d 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -558,14 +558,14 @@ static int dax_pte_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	int error;
 	int major = 0;
 
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 	if (vmf->pgoff >= size)
 		return VM_FAULT_SIGBUS;
 
 	memset(&bh, 0, sizeof(bh));
-	block = (sector_t)vmf->pgoff << (PAGE_SHIFT - blkbits);
+	block = (sector_t)vmf->pgoff << (PAGE_CACHE_SHIFT - blkbits);
 	bh.b_bdev = inode->i_sb->s_bdev;
-	bh.b_size = PAGE_SIZE;
+	bh.b_size = PAGE_CACHE_SIZE;
 
  repeat:
 	page = find_get_page(mapping, vmf->pgoff);
@@ -582,7 +582,7 @@ static int dax_pte_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	}
 
 	error = get_block(inode, block, &bh, 0);
-	if (!error && (bh.b_size < PAGE_SIZE))
+	if (!error && (bh.b_size < PAGE_CACHE_SIZE))
 		error = -EIO;		/* fs corruption? */
 	if (error)
 		goto unlock_page;
@@ -593,7 +593,7 @@ static int dax_pte_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 			count_vm_event(PGMAJFAULT);
 			mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
 			major = VM_FAULT_MAJOR;
-			if (!error && (bh.b_size < PAGE_SIZE))
+			if (!error && (bh.b_size < PAGE_CACHE_SIZE))
 				error = -EIO;
 			if (error)
 				goto unlock_page;
@@ -630,7 +630,7 @@ static int dax_pte_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 		page = find_lock_page(mapping, vmf->pgoff);
 
 	if (page) {
-		unmap_mapping_range(mapping, vmf->pgoff << PAGE_SHIFT,
+		unmap_mapping_range(mapping, vmf->pgoff << PAGE_CACHE_SHIFT,
 							PAGE_CACHE_SIZE, 0);
 		delete_from_page_cache(page);
 		unlock_page(page);
@@ -677,7 +677,7 @@ static int dax_pte_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
  * The 'colour' (ie low bits) within a PMD of a page offset.  This comes up
  * more often than one might expect in the below function.
  */
-#define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
+#define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_CACHE_SHIFT) - 1)
 
 static void __dax_dbg(struct buffer_head *bh, unsigned long address,
 		const char *reason, const char *fn)
@@ -734,7 +734,7 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 		return VM_FAULT_FALLBACK;
 	}
 
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 	if (vmf->pgoff >= size)
 		return VM_FAULT_SIGBUS;
 	/* If the PMD would cover blocks out of the file */
@@ -746,7 +746,7 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 
 	memset(&bh, 0, sizeof(bh));
 	bh.b_bdev = inode->i_sb->s_bdev;
-	block = (sector_t)vmf->pgoff << (PAGE_SHIFT - blkbits);
+	block = (sector_t)vmf->pgoff << (PAGE_CACHE_SHIFT - blkbits);
 
 	bh.b_size = PMD_SIZE;
 
@@ -776,7 +776,7 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	 * zero pages covering this hole
 	 */
 	if (alloc) {
-		loff_t lstart = vmf->pgoff << PAGE_SHIFT;
+		loff_t lstart = vmf->pgoff << PAGE_CACHE_SHIFT;
 		loff_t lend = lstart + PMD_SIZE - 1; /* inclusive */
 
 		truncate_pagecache_range(inode, lstart, lend);
@@ -904,7 +904,7 @@ static int dax_pmd_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
  * The 'colour' (ie low bits) within a PUD of a page offset.  This comes up
  * more often than one might expect in the below function.
  */
-#define PG_PUD_COLOUR	((PUD_SIZE >> PAGE_SHIFT) - 1)
+#define PG_PUD_COLOUR	((PUD_SIZE >> PAGE_CACHE_SHIFT) - 1)
 
 #define dax_pud_dbg(bh, address, reason) __dax_dbg(bh, address, reason, "dax_pud")
 
@@ -945,7 +945,7 @@ static int dax_pud_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 		return VM_FAULT_FALLBACK;
 	}
 
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	size = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 	if (vmf->pgoff >= size)
 		return VM_FAULT_SIGBUS;
 	/* If the PUD would cover blocks out of the file */
@@ -957,7 +957,7 @@ static int dax_pud_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 
 	memset(&bh, 0, sizeof(bh));
 	bh.b_bdev = inode->i_sb->s_bdev;
-	block = (sector_t)vmf->pgoff << (PAGE_SHIFT - blkbits);
+	block = (sector_t)vmf->pgoff << (PAGE_CACHE_SHIFT - blkbits);
 
 	bh.b_size = PUD_SIZE;
 
@@ -987,7 +987,7 @@ static int dax_pud_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	 * zero pages covering this hole
 	 */
 	if (alloc) {
-		loff_t lstart = vmf->pgoff << PAGE_SHIFT;
+		loff_t lstart = vmf->pgoff << PAGE_CACHE_SHIFT;
 		loff_t lend = lstart + PUD_SIZE - 1; /* inclusive */
 
 		truncate_pagecache_range(inode, lstart, lend);
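
As a closing aside (not part of the patch), here is a worked example of the
'colour' macros touched above.  The 4 KiB / 2 MiB / 1 GiB sizes are assumed
typical x86-64 values, not something the patch itself states:

/*
 * Illustrative arithmetic only, assuming PAGE_CACHE_SIZE = 4 KiB,
 * PMD_SIZE = 2 MiB and PUD_SIZE = 1 GiB:
 *
 *   PG_PMD_COLOUR = (PMD_SIZE >> PAGE_CACHE_SHIFT) - 1 = 512 - 1    = 511
 *   PG_PUD_COLOUR = (PUD_SIZE >> PAGE_CACHE_SHIFT) - 1 = 262144 - 1 = 262143
 *
 * (pgoff & PG_PMD_COLOUR) is a page's index within its PMD, and
 * (pgoff | PG_PMD_COLOUR) is the index of the last page a PMD-sized
 * mapping would cover -- which is why the hunks above compare it against
 * the file size to decide whether the PMD "would cover blocks out of the
 * file".  The PUD case is the same idea one level up.
 */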