From patchwork Tue Aug 23 22:04:17 2016
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 9296503
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Ross Zwisler, "Theodore Ts'o", Alexander Viro, Andreas Dilger,
    Andrew Morton, Dan Williams, Dave Chinner, Jan Kara,
    linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nvdimm@lists.01.org, Matthew Wilcox
Subject: [PATCH v2 7/9] dax: coordinate locking for offsets in PMD range
Date: Tue, 23 Aug 2016 16:04:17 -0600
Message-Id: <20160823220419.11717-8-ross.zwisler@linux.intel.com>
In-Reply-To: <20160823220419.11717-1-ross.zwisler@linux.intel.com>
References: <20160823220419.11717-1-ross.zwisler@linux.intel.com>

DAX radix tree locking currently locks entries based on the unique
combination of the 'mapping' pointer and the pgoff_t 'index' for the
entry.  This works for PTEs, but as we move to PMDs we will need all
the offsets within the range covered by the PMD to map to the same bit
lock.  To accomplish this, for ranges covered by a PMD entry we will
instead lock based on the page offset of the beginning of the PMD
entry.  The 'mapping' pointer is still used in the same way.
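For illustration only (not part of the patch): a minimal userspace
sketch of the masking dax_entry_start() applies in the PMD case,
assuming x86_64 constants (PAGE_SHIFT = 12, PMD_SHIFT = 21).  With
4 KiB pages a 2 MiB PMD covers 512 page offsets, and clearing the low
(PMD_SHIFT - PAGE_SHIFT) = 9 bits of 'index' maps all of them to the
same lock/waitqueue key:

  #include <stdio.h>

  /* Assumed x86_64 constants: 4 KiB pages, 2 MiB PMDs. */
  #define PAGE_SHIFT 12
  #define PMD_SHIFT  21
  #define PMD_MASK   (~((1UL << PMD_SHIFT) - 1))

  typedef unsigned long pgoff_t;

  /* Mirrors the PMD branch of dax_entry_start(): clear the low
   * (PMD_SHIFT - PAGE_SHIFT) = 9 bits of the page offset. */
  static pgoff_t pmd_entry_start(pgoff_t index)
  {
          return index & (PMD_MASK >> PAGE_SHIFT);
  }

  int main(void)
  {
          /* 0x200 and 0x3ff share a PMD; 0x400 starts the next one. */
          printf("%#lx -> %#lx\n", 0x200UL, pmd_entry_start(0x200));
          printf("%#lx -> %#lx\n", 0x3ffUL, pmd_entry_start(0x3ff));
          printf("%#lx -> %#lx\n", 0x400UL, pmd_entry_start(0x400));
          return 0;
  }

Running this prints 0x200 -> 0x200, 0x3ff -> 0x200 and 0x400 -> 0x400:
the first two offsets contend on the same lock bit, the third does not.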
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 fs/dax.c            | 37 ++++++++++++++++++++++++-------------
 include/linux/dax.h |  2 +-
 mm/filemap.c        |  2 +-
 3 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 0e3f462..955e184 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -62,10 +62,17 @@ static int __init init_dax_wait_table(void)
 }
 fs_initcall(init_dax_wait_table);
 
+static pgoff_t dax_entry_start(pgoff_t index, void *entry)
+{
+	if (RADIX_DAX_TYPE(entry) == RADIX_DAX_PMD)
+		index &= (PMD_MASK >> PAGE_SHIFT);
+	return index;
+}
+
 static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
-		pgoff_t index)
+		pgoff_t entry_start)
 {
-	unsigned long hash = hash_long((unsigned long)mapping ^ index,
+	unsigned long hash = hash_long((unsigned long)mapping ^ entry_start,
 				       DAX_WAIT_TABLE_BITS);
 	return wait_table + hash;
 }
@@ -283,7 +290,7 @@ EXPORT_SYMBOL_GPL(dax_do_io);
  */
 struct exceptional_entry_key {
 	struct address_space *mapping;
-	unsigned long index;
+	pgoff_t entry_start;
 };
 
 struct wait_exceptional_entry_queue {
@@ -299,7 +306,7 @@ static int wake_exceptional_entry_func(wait_queue_t *wait, unsigned int mode,
 		container_of(wait, struct wait_exceptional_entry_queue, wait);
 
 	if (key->mapping != ewait->key.mapping ||
-	    key->index != ewait->key.index)
+	    key->entry_start != ewait->key.entry_start)
 		return 0;
 	return autoremove_wake_function(wait, mode, sync, NULL);
 }
@@ -357,12 +364,10 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
 {
 	void *entry, **slot;
 	struct wait_exceptional_entry_queue ewait;
-	wait_queue_head_t *wq = dax_entry_waitqueue(mapping, index);
+	wait_queue_head_t *wq;
 
 	init_wait(&ewait.wait);
 	ewait.wait.func = wake_exceptional_entry_func;
-	ewait.key.mapping = mapping;
-	ewait.key.index = index;
 
 	for (;;) {
 		entry = __radix_tree_lookup(&mapping->page_tree, index, NULL,
@@ -373,6 +378,11 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
 			*slotp = slot;
 			return entry;
 		}
+
+		wq = dax_entry_waitqueue(mapping,
+				dax_entry_start(index, entry));
+		ewait.key.mapping = mapping;
+		ewait.key.entry_start = dax_entry_start(index, entry);
 		prepare_to_wait_exclusive(wq, &ewait.wait,
 					  TASK_UNINTERRUPTIBLE);
 		spin_unlock_irq(&mapping->tree_lock);
@@ -445,10 +455,11 @@ restart:
 	return entry;
 }
 
-void dax_wake_mapping_entry_waiter(struct address_space *mapping,
+void dax_wake_mapping_entry_waiter(void *entry, struct address_space *mapping,
 		pgoff_t index, bool wake_all)
 {
-	wait_queue_head_t *wq = dax_entry_waitqueue(mapping, index);
+	wait_queue_head_t *wq = dax_entry_waitqueue(mapping,
+			dax_entry_start(index, entry));
 
 	/*
 	 * Checking for locked entry and prepare_to_wait_exclusive() happens
@@ -460,7 +471,7 @@ void dax_wake_mapping_entry_waiter(struct address_space *mapping,
 		struct exceptional_entry_key key;
 
 		key.mapping = mapping;
-		key.index = index;
+		key.entry_start = dax_entry_start(index, entry);
 		__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
 	}
 }
@@ -478,7 +489,7 @@ void dax_unlock_mapping_entry(struct address_space *mapping, pgoff_t index)
 	}
 	unlock_slot(mapping, slot);
 	spin_unlock_irq(&mapping->tree_lock);
-	dax_wake_mapping_entry_waiter(mapping, index, false);
+	dax_wake_mapping_entry_waiter(entry, mapping, index, false);
 }
 
 static void put_locked_mapping_entry(struct address_space *mapping,
@@ -503,7 +514,7 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
 		return;
 
 	/* We have to wake up next waiter for the radix tree entry lock */
-	dax_wake_mapping_entry_waiter(mapping, index, false);
+	dax_wake_mapping_entry_waiter(entry, mapping, index, false);
 }
 
 /*
@@ -530,7 +541,7 @@ int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
 	radix_tree_delete(&mapping->page_tree, index);
 	mapping->nrexceptional--;
 	spin_unlock_irq(&mapping->tree_lock);
-	dax_wake_mapping_entry_waiter(mapping, index, true);
+	dax_wake_mapping_entry_waiter(entry, mapping, index, true);
 
 	return 1;
 }
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 9c6dc77..f6cab31 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -15,7 +15,7 @@ int dax_zero_page_range(struct inode *, loff_t from, unsigned len, get_block_t);
 int dax_truncate_page(struct inode *, loff_t from, get_block_t);
 int dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t);
 int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
-void dax_wake_mapping_entry_waiter(struct address_space *mapping,
+void dax_wake_mapping_entry_waiter(void *entry, struct address_space *mapping,
 		pgoff_t index, bool wake_all);
 
 #ifdef CONFIG_FS_DAX
diff --git a/mm/filemap.c b/mm/filemap.c
index 8a287df..35e880d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -617,7 +617,7 @@ static int page_cache_tree_insert(struct address_space *mapping,
 			if (node)
 				workingset_node_pages_dec(node);
 			/* Wakeup waiters for exceptional entry lock */
-			dax_wake_mapping_entry_waiter(mapping, page->index,
+			dax_wake_mapping_entry_waiter(p, mapping, page->index,
 						      false);
 		}
 	}
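
A note on the waiter/waker matching above (a toy userspace model, not
kernel code; 'struct address_space' is left opaque): because both the
sleeping side in get_unlocked_mapping_entry() and the waking side in
dax_wake_mapping_entry_waiter() now key on dax_entry_start(), a waiter
blocked on one index within a PMD and a waker unlocking a different
index within the same PMD produce equal keys, so the comparison done
by wake_exceptional_entry_func() lets the wakeup through:

  #include <stdbool.h>
  #include <stdio.h>

  typedef unsigned long pgoff_t;
  struct address_space;                   /* opaque stand-in */

  /* Same shape as the patch's exceptional_entry_key. */
  struct exceptional_entry_key {
          struct address_space *mapping;
          pgoff_t entry_start;
  };

  /* The comparison wake_exceptional_entry_func() performs: wake
   * only when both the mapping and the PMD-aligned entry_start
   * match. */
  static bool keys_match(const struct exceptional_entry_key *waiter,
                         const struct exceptional_entry_key *wake)
  {
          return waiter->mapping == wake->mapping &&
                 waiter->entry_start == wake->entry_start;
  }

  int main(void)
  {
          struct address_space *mapping = (struct address_space *)0x1000;

          /* A waiter blocked on index 0x201 and a waker unlocking
           * index 0x3fe both reduce to entry_start 0x200. */
          struct exceptional_entry_key waiter = { mapping, 0x200 };
          struct exceptional_entry_key wake   = { mapping, 0x200 };

          printf("wake delivered: %s\n",
                 keys_match(&waiter, &wake) ? "yes" : "no");
          return 0;
  }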