From patchwork Thu Mar 10 19:55:21 2016
X-Patchwork-Id: 8559871
From: "Wilcox, Matthew R"
To: Jan Kara, linux-fsdevel@vger.kernel.org
Cc: NeilBrown, linux-nvdimm@lists.01.org
Subject: RE: [PATCH 05/12] dax: Remove synchronization using i_mmap_lock
Date: Thu, 10 Mar 2016 19:55:21 +0000
Message-ID: <100D68C7BA14664A8938383216E40DE0422079E9@FMSMSX114.amr.corp.intel.com>
In-Reply-To: <1457637535-21633-6-git-send-email-jack@suse.cz>
References: <1457637535-21633-1-git-send-email-jack@suse.cz>
 <1457637535-21633-6-git-send-email-jack@suse.cz>

This locking's still necessary.  i_mmap_sem has already been released by
the time we're back in do_cow_fault(), so it doesn't protect that page,
and truncate can have whizzed past and concluded there was nothing to
unmap.

So a task can have a MAP_PRIVATE page still in its address space after
it's supposed to have been unmapped.  We need a test suite for this ;-)
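Something along these lines is roughly what such a test would need -- a
minimal sketch only, not an actual xfstests case.  The mount point
/mnt/dax, the file name, the size, and the iteration count are all made
up for illustration, and a real test would also need a SIGBUS handler
(for the iterations where the truncate wins outright) plus a way to
detect the leaked mapping, e.g. via /proc/self/pagemap:

/* Hypothetical reproducer sketch; build with -lpthread. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define TESTFILE "/mnt/dax/cowtest"	/* assumed DAX-capable mount */
#define FILESZ   4096UL

static void *truncator(void *arg)
{
	/* Shrink the file while the COW fault may be in flight. */
	if (ftruncate(*(int *)arg, 0) < 0)
		perror("ftruncate");
	return NULL;
}

int main(void)
{
	for (long i = 0; i < 100000; i++) {
		pthread_t t;
		char *p;
		int fd = open(TESTFILE, O_CREAT | O_RDWR | O_TRUNC, 0600);

		if (fd < 0 || ftruncate(fd, FILESZ) < 0) {
			perror("setup");
			return 1;
		}
		p = mmap(NULL, FILESZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		pthread_create(&t, NULL, truncator, &fd);
		p[0] = 1;	/* COW fault racing against the truncate */
		pthread_join(t, NULL);

		/*
		 * If the fault lost the race, this MAP_PRIVATE page can
		 * still be mapped after truncate thought it had unmapped
		 * everything; detecting that is omitted from this sketch.
		 */
		munmap(p, FILESZ);
		close(fd);
		unlink(TESTFILE);
	}
	return 0;
}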
-----Original Message-----
From: Jan Kara [mailto:jack@suse.cz]
Sent: Thursday, March 10, 2016 11:19 AM
To: linux-fsdevel@vger.kernel.org
Cc: Wilcox, Matthew R; Ross Zwisler; Williams, Dan J; linux-nvdimm@lists.01.org; NeilBrown; Jan Kara
Subject: [PATCH 05/12] dax: Remove synchronization using i_mmap_lock

At one point DAX used i_mmap_lock to synchronize page faults with page
table invalidation during truncate. However, these days DAX uses
filesystem-specific RW semaphores to protect against these races
(i_mmap_sem in the ext2 and ext4 cases, XFS_MMAPLOCK in the xfs case).
So remove the unnecessary locking.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c    | 19 -------------------
 mm/memory.c | 14 --------------
 2 files changed, 33 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 9c4d697fb6fc..e409e8fc13b7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -563,8 +563,6 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
 	pgoff_t size;
 	int error;
 
-	i_mmap_lock_read(mapping);
-
 	/*
 	 * Check truncate didn't happen while we were allocating a block.
 	 * If it did, this block may or may not be still allocated to the
@@ -597,8 +595,6 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
 	error = vm_insert_mixed(vma, vaddr, dax.pfn);
 
  out:
-	i_mmap_unlock_read(mapping);
-
 	return error;
 }
 
@@ -695,17 +691,6 @@ int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 		if (error)
 			goto unlock_page;
 		vmf->page = page;
-		if (!page) {
-			i_mmap_lock_read(mapping);
-			/* Check we didn't race with truncate */
-			size = (i_size_read(inode) + PAGE_SIZE - 1) >>
-								PAGE_SHIFT;
-			if (vmf->pgoff >= size) {
-				i_mmap_unlock_read(mapping);
-				error = -EIO;
-				goto out;
-			}
-		}
 		return VM_FAULT_LOCKED;
 	}
 
@@ -895,8 +880,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		truncate_pagecache_range(inode, lstart, lend);
 	}
 
-	i_mmap_lock_read(mapping);
-
 	/*
 	 * If a truncate happened while we were allocating blocks, we may
 	 * leave blocks allocated to the file that are beyond EOF. We can't
@@ -1013,8 +996,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	}
 
  out:
-	i_mmap_unlock_read(mapping);
-
 	if (buffer_unwritten(&bh))
 		complete_unwritten(&bh, !(result & VM_FAULT_ERROR));
 
diff --git a/mm/memory.c b/mm/memory.c
index 8132787ae4d5..13f76eb08f33 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2430,8 +2430,6 @@ void unmap_mapping_range(struct address_space *mapping,
 	if (details.last_index < details.first_index)
 		details.last_index = ULONG_MAX;
 
-
-	/* DAX uses i_mmap_lock to serialise file truncate vs page fault */
 	i_mmap_lock_write(mapping);
 	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap)))
 		unmap_mapping_range_tree(&mapping->i_mmap, &details);
@@ -3019,12 +3017,6 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (fault_page) {
 			unlock_page(fault_page);
 			page_cache_release(fault_page);
-		} else {
-			/*
-			 * The fault handler has no page to lock, so it holds
-			 * i_mmap_lock for read to protect against truncate.
-			 */
-			i_mmap_unlock_read(vma->vm_file->f_mapping);
 		}
 		goto uncharge_out;
 	}
@@ -3035,12 +3027,6 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (fault_page) {
 			unlock_page(fault_page);
 			page_cache_release(fault_page);
-		} else {
-			/*
-			 * The fault handler has no page to lock, so it holds
-			 * i_mmap_lock for read to protect against truncate.
-			 */
-			i_mmap_unlock_read(vma->vm_file->f_mapping);
 		}
 		return ret;
 uncharge_out:
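
To make the window concrete: the filesystem-level serialization the
changelog relies on looks roughly like the sketch below.  This is
modeled on the ext4 i_mmap_sem pattern of that era, not the verbatim
source; fs_dax_fault, fs_dax_get_block, and inode_to_fs_info() are
illustrative names:

/*
 * Rough sketch of the fs-level fault locking (simplified; names are
 * hypothetical).  __dax_fault()'s signature is as in the patch above.
 */
static int fs_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vma->vm_file);
	int result;

	/* Truncate is excluded only while this read lock is held... */
	down_read(&inode_to_fs_info(inode)->i_mmap_sem);
	result = __dax_fault(vma, vmf, fs_dax_get_block, NULL);
	up_read(&inode_to_fs_info(inode)->i_mmap_sem);
	/*
	 * ...and it is dropped right here.  For a MAP_PRIVATE write,
	 * do_cow_fault() copies the data and installs the PTE only
	 * after ->fault() returns, i.e. after this unlock -- which is
	 * the window where a truncate can run, find no PTE to unmap,
	 * and leave the COW page mapped past EOF.
	 */
	return result;
}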