From: Dave Chinner <david@fromorbit.com>
To: xfs@oss.sgi.com
Cc: linux-fsdevel@vger.kernel.org, ross.zwisler@linux.intel.com,
	willy@linux.intel.com, dan.j.williams@intel.com,
	kirill.shutemov@linux.intel.com, linux-nvdimm@lists.01.org,
	jack@suse.cz, linux-kernel@vger.kernel.org
Subject: [PATCH 1/7] Revert "mm: take i_mmap_lock in unmap_mapping_range() for DAX"
Date: Thu, 1 Oct 2015 17:46:33 +1000
Message-Id: <1443685599-4843-2-git-send-email-david@fromorbit.com>
In-Reply-To: <1443685599-4843-1-git-send-email-david@fromorbit.com>
References: <1443685599-4843-1-git-send-email-david@fromorbit.com>

This reverts commit 46c043ede4711e8d598b9d63c5616c1fedb0605e.
---
 fs/dax.c    | 36 ++++++++++++++++--------------------
 mm/memory.c | 11 +++++++++--
 2 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 7ae6df7..400fe95 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -569,26 +569,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	if (!buffer_size_valid(&bh) || bh.b_size < PMD_SIZE)
 		goto fallback;
 
-	if (buffer_unwritten(&bh) || buffer_new(&bh)) {
-		int i;
-		for (i = 0; i < PTRS_PER_PMD; i++)
-			clear_pmem(kaddr + i * PAGE_SIZE, PAGE_SIZE);
-		wmb_pmem();
-		count_vm_event(PGMAJFAULT);
-		mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
-		result |= VM_FAULT_MAJOR;
-	}
-
-	/*
-	 * If we allocated new storage, make sure no process has any
-	 * zero pages covering this hole
-	 */
-	if (buffer_new(&bh)) {
-		i_mmap_unlock_write(mapping);
-		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
-		i_mmap_lock_write(mapping);
-	}
-
 	/*
 	 * If a truncate happened while we were allocating blocks, we may
 	 * leave blocks allocated to the file that are beyond EOF. We can't
@@ -603,6 +583,13 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	if ((pgoff | PG_PMD_COLOUR) >= size)
 		goto fallback;
 
+	/*
+	 * If we allocated new storage, make sure no process has any
+	 * zero pages covering this hole
+	 */
+	if (buffer_new(&bh))
+		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
+
 	if (!write && !buffer_mapped(&bh) && buffer_uptodate(&bh)) {
 		spinlock_t *ptl;
 		pmd_t entry;
@@ -633,6 +620,15 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR))
 			goto fallback;
 
+		if (buffer_unwritten(&bh) || buffer_new(&bh)) {
+			int i;
+			for (i = 0; i < PTRS_PER_PMD; i++)
+				clear_page(kaddr + i * PAGE_SIZE);
+			count_vm_event(PGMAJFAULT);
+			mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
+			result |= VM_FAULT_MAJOR;
+		}
+
 		result |= vmf_insert_pfn_pmd(vma, address, pmd, pfn, write);
 	}
 
diff --git a/mm/memory.c b/mm/memory.c
index 9cb2747..5ec066f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2426,10 +2426,17 @@ void unmap_mapping_range(struct address_space *mapping,
 	if (details.last_index < details.first_index)
 		details.last_index = ULONG_MAX;
 
-	i_mmap_lock_write(mapping);
+
+	/*
+	 * DAX already holds i_mmap_lock to serialise file truncate vs
+	 * page fault and page fault vs page fault.
+	 */
+	if (!IS_DAX(mapping->host))
+		i_mmap_lock_write(mapping);
 	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap)))
 		unmap_mapping_range_tree(&mapping->i_mmap, &details);
-	i_mmap_unlock_write(mapping);
+	if (!IS_DAX(mapping->host))
+		i_mmap_unlock_write(mapping);
 }
 EXPORT_SYMBOL(unmap_mapping_range);
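
A note on the locking contract the mm/memory.c hunk restores: after this
revert, unmap_mapping_range() takes i_mmap_lock only for non-DAX mappings,
on the expectation that the DAX fault and truncate paths already hold it
when they call in, and that a non-recursive lock cannot be taken twice
without self-deadlocking. Below is a minimal userspace sketch of that
"caller may already hold the lock" pattern using pthreads; the struct,
field and function names are illustrative stand-ins, not the kernel's.

/*
 * Userspace model (not kernel code) of the conditional locking in
 * unmap_mapping_range(): when the caller (the DAX fault path, in this
 * analogy) already holds the mapping lock, the callee must skip taking
 * it again or it would deadlock on a non-recursive lock.
 *
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct mapping {
	pthread_mutex_t i_mmap_lock;	/* stands in for the i_mmap lock */
	bool caller_holds_lock;		/* stands in for IS_DAX(mapping->host) */
};

static void unmap_range(struct mapping *m)
{
	/* Lock only when the caller has not already done so. */
	if (!m->caller_holds_lock)
		pthread_mutex_lock(&m->i_mmap_lock);

	puts("walking i_mmap tree, tearing down mappings");

	if (!m->caller_holds_lock)
		pthread_mutex_unlock(&m->i_mmap_lock);
}

int main(void)
{
	struct mapping m = {
		.i_mmap_lock = PTHREAD_MUTEX_INITIALIZER,
		.caller_holds_lock = true,
	};

	/* DAX-style caller: holds the lock across the call. */
	pthread_mutex_lock(&m.i_mmap_lock);
	unmap_range(&m);		/* must not try to re-lock */
	pthread_mutex_unlock(&m.i_mmap_lock);

	/* Non-DAX-style caller: unmap_range() locks internally. */
	m.caller_holds_lock = false;
	unmap_range(&m);
	return 0;
}

The trade-off is visible in the sketch: the callee's locking now depends on
what kind of mapping it was handed, so a DAX caller must guarantee it really
does hold the lock, otherwise the range walk races with page faults.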