From patchwork Thu May 12 16:29:20 2016
From: Jan Kara
To: linux-nvdimm@lists.01.org
Cc: Ted Tso, linux-fsdevel@vger.kernel.org, Jan Kara, linux-ext4@vger.kernel.org
Subject: [PATCH 7/7] dax: Remove i_mmap_lock protection
Date: Thu, 12 May 2016 18:29:20 +0200
Message-Id: <1463070560-1979-8-git-send-email-jack@suse.cz>
In-Reply-To: <1463070560-1979-1-git-send-email-jack@suse.cz>
References: <1463070560-1979-1-git-send-email-jack@suse.cz>

Currently, faults are protected against truncate by the filesystem-specific
i_mmap_sem and, in the case of the hole page, by the page lock. COW faults
are protected by DAX radix tree entry locking. So there is no need for
i_mmap_lock in the DAX code. Remove it.
Reviewed-by: Ross Zwisler
Signed-off-by: Jan Kara
---
 fs/dax.c    | 24 +++++-------------------
 mm/memory.c |  2 --
 2 files changed, 5 insertions(+), 21 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a09d19f5371e..a07202ab8f61 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -830,29 +830,19 @@ static int dax_insert_mapping(struct address_space *mapping,
 		.sector = to_sector(bh, mapping->host),
 		.size = bh->b_size,
 	};
-	int error;
 	void *ret;
 	void *entry = *entryp;
 
-	i_mmap_lock_read(mapping);
-
-	if (dax_map_atomic(bdev, &dax) < 0) {
-		error = PTR_ERR(dax.addr);
-		goto out;
-	}
+	if (dax_map_atomic(bdev, &dax) < 0)
+		return PTR_ERR(dax.addr);
 	dax_unmap_atomic(bdev, &dax);
 
 	ret = dax_insert_mapping_entry(mapping, vmf, entry, dax.sector);
-	if (IS_ERR(ret)) {
-		error = PTR_ERR(ret);
-		goto out;
-	}
+	if (IS_ERR(ret))
+		return PTR_ERR(ret);
 	*entryp = ret;
 
-	error = vm_insert_mixed(vma, vaddr, dax.pfn);
- out:
-	i_mmap_unlock_read(mapping);
-	return error;
+	return vm_insert_mixed(vma, vaddr, dax.pfn);
 }
 
 /**
@@ -1090,8 +1080,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		truncate_pagecache_range(inode, lstart, lend);
 	}
 
-	i_mmap_lock_read(mapping);
-
 	if (!write && !buffer_mapped(&bh)) {
 		spinlock_t *ptl;
 		pmd_t entry;
@@ -1180,8 +1168,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	}
 
  out:
-	i_mmap_unlock_read(mapping);
-
 	return result;
 
  fallback:
diff --git a/mm/memory.c b/mm/memory.c
index 7f063c9ad711..f6f1c15c8682 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2486,8 +2486,6 @@ void unmap_mapping_range(struct address_space *mapping,
 	if (details.last_index < details.first_index)
 		details.last_index = ULONG_MAX;
 
-
-	/* DAX uses i_mmap_lock to serialise file truncate vs page fault */
 	i_mmap_lock_write(mapping);
 	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap)))
 		unmap_mapping_range_tree(&mapping->i_mmap, &details);
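
For readers less familiar with the locking scheme the changelog relies on, the sketch
below is a minimal userspace model (not kernel code) of the reader/writer exclusion
that makes the i_mmap_lock redundant here: fault paths take a filesystem-level rwsem
in read mode while truncate takes it in write mode, so the two can never race. All
names in the sketch (fake_inode, dax_fault_like, truncate_like) are hypothetical and
exist only for illustration; a pthread rwlock stands in for the fs-specific i_mmap_sem.

/*
 * Userspace sketch of the serialization described in the changelog.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

struct fake_inode {
	pthread_rwlock_t mmap_sem;	/* stands in for the fs-specific i_mmap_sem */
	long size;			/* stands in for i_size */
};

/* Fault path: shared (read) lock; many faults may run concurrently. */
static void dax_fault_like(struct fake_inode *inode, long offset)
{
	pthread_rwlock_rdlock(&inode->mmap_sem);
	if (offset < inode->size)
		printf("fault at %ld: block is still there, safe to map\n", offset);
	else
		printf("fault at %ld: file already truncated, fail the fault\n", offset);
	pthread_rwlock_unlock(&inode->mmap_sem);
}

/* Truncate path: exclusive (write) lock; excludes all fault paths. */
static void truncate_like(struct fake_inode *inode, long new_size)
{
	pthread_rwlock_wrlock(&inode->mmap_sem);
	inode->size = new_size;	/* unmapping and freeing blocks would happen here */
	pthread_rwlock_unlock(&inode->mmap_sem);
}

int main(void)
{
	struct fake_inode inode = { .size = 4096 };

	pthread_rwlock_init(&inode.mmap_sem, NULL);
	dax_fault_like(&inode, 0);	/* sees the old size */
	truncate_like(&inode, 0);	/* fully excludes concurrent faults */
	dax_fault_like(&inode, 0);	/* sees the new size */
	pthread_rwlock_destroy(&inode.mmap_sem);
	return 0;
}

Because a fault either completes entirely before the truncate takes the write lock or
starts entirely after it, holding an additional per-mapping lock in the fault path adds
nothing; COW faults, which do not take the read lock in the same way, are serialized by
the DAX radix tree entry locks instead, per the changelog above.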