From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Alexander Viro, Andrew Morton, Dan Williams, Dave Chinner, Jan Kara,
 Matthew Wilcox, linux-fsdevel@vger.kernel.org, linux-nvdimm@lists.01.org
Subject: [PATCH v3 5/5] dax: fix clearing of holes in __dax_pmd_fault()
Date: Fri, 22 Jan 2016 14:36:13 -0700
Message-Id: <1453498573-6328-6-git-send-email-ross.zwisler@linux.intel.com>
In-Reply-To: <1453498573-6328-1-git-send-email-ross.zwisler@linux.intel.com>
References: <1453498573-6328-1-git-send-email-ross.zwisler@linux.intel.com>

When the user reads from a DAX hole via mmap, we service the resulting
page faults with zero-filled page cache pages. These zero pages are also
inserted into the address_space radix tree. When we get the first write
for that range, we can allocate a PMD page worth of DAX storage to
replace the hole. When this happens we need to unmap the zero pages and
remove them from the radix tree.

Prior to this patch we unmapped *all* storage in the PMD's range, which
is incorrect because on non-allocating page faults it removed valid DAX
entries as well. Instead, keep track of whether get_block() actually
gave us storage so that we can be sure to remove only the zero pages
that were covering holes.
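In outline, the unconditional unmap/truncate becomes one gated on the
new alloc flag (a simplified before/after sketch distilled from the
diff below, not the verbatim kernel code):

	/* Before: ran on every PMD fault, including non-allocating
	 * reads, so it could also evict valid DAX radix tree entries. */
	unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
	truncate_inode_pages_range(mapping, lstart, lend);

	/* After: only when get_block() just turned a hole into storage. */
	if (alloc)
		truncate_pagecache_range(inode, lstart, lend);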
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Reported-by: Jan Kara
---
 fs/dax.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a2ed009..206650f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -791,9 +791,9 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	bool write = flags & FAULT_FLAG_WRITE;
 	struct block_device *bdev;
 	pgoff_t size, pgoff;
-	loff_t lstart, lend;
 	sector_t block;
 	int error, result = 0;
+	bool alloc = false;
 
 	/* dax pmd mappings require pfn_t_devmap() */
 	if (!IS_ENABLED(CONFIG_FS_DAX_PMD))
@@ -831,10 +831,17 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	block = (sector_t)pgoff << (PAGE_SHIFT - blkbits);
 
 	bh.b_size = PMD_SIZE;
-	if (get_block(inode, block, &bh, write) != 0)
+
+	if (get_block(inode, block, &bh, 0) != 0)
 		return VM_FAULT_SIGBUS;
+
+	if (!buffer_mapped(&bh) && write) {
+		if (get_block(inode, block, &bh, 1) != 0)
+			return VM_FAULT_SIGBUS;
+		alloc = true;
+	}
+
 	bdev = bh.b_bdev;
-	i_mmap_lock_read(mapping);
 
 	/*
 	 * If the filesystem isn't willing to tell us the length of a hole,
@@ -843,15 +850,20 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	 */
 	if (!buffer_size_valid(&bh) || bh.b_size < PMD_SIZE) {
 		dax_pmd_dbg(&bh, address, "allocated block too small");
-		goto fallback;
+		return VM_FAULT_FALLBACK;
+	}
+
+	/*
+	 * If we allocated new storage, make sure no process has any
+	 * zero pages covering this hole
+	 */
+	if (alloc) {
+		loff_t lstart = pgoff << PAGE_SHIFT;
+		loff_t lend = lstart + PMD_SIZE - 1; /* inclusive */
+
+		truncate_pagecache_range(inode, lstart, lend);
 	}
 
-	/* make sure no process has any zero pages covering this hole */
-	lstart = pgoff << PAGE_SHIFT;
-	lend = lstart + PMD_SIZE - 1; /* inclusive */
-	i_mmap_unlock_read(mapping);
-	unmap_mapping_range(mapping, lstart, PMD_SIZE, 0);
-	truncate_inode_pages_range(mapping, lstart, lend);
 	i_mmap_lock_read(mapping);
 
 	/*
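For context, a minimal userspace sketch of the read-then-write fault
sequence the patch addresses (the mount point and file path are
hypothetical; assumes a filesystem mounted with -o dax on a kernel with
CONFIG_FS_DAX_PMD, and whether the kernel actually takes the PMD fault
path also depends on 2 MiB alignment of the mapping):

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define PMD_SZ (2UL << 20)	/* 2 MiB, typical x86_64 PMD size */

	int main(void)
	{
		/* Create a sparse file: the whole range is a hole. */
		int fd = open("/mnt/dax/testfile", O_CREAT | O_RDWR, 0644);
		if (fd < 0 || ftruncate(fd, PMD_SZ) != 0)
			return 1;

		char *p = mmap(NULL, PMD_SZ, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;

		/* 1) Read fault over the hole: the kernel services it with
		 *    zero-filled page cache pages and inserts them into the
		 *    address_space radix tree. */
		volatile char c = p[0];
		(void)c;

		/* 2) Write fault: get_block() now allocates real DAX storage.
		 *    The fix ensures only the zero pages from step 1 are
		 *    evicted, not valid DAX entries elsewhere in the range. */
		p[0] = 'x';

		munmap(p, PMD_SZ);
		close(fd);
		return 0;
	}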