From patchwork Sun Jun 17 02:00:41 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10468315
From: Matthew Wilcox
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
	Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin,
	Ryusuke Konishi, linux-nilfs@vger.kernel.org, Jaegeuk Kim,
	Chao Yu, linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v14 63/74] dax: Hash on XArray instead of mapping
Date: Sat, 16 Jun 2018 19:00:41 -0700
Message-Id: <20180617020052.4759-64-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180617020052.4759-1-willy@infradead.org>
References: <20180617020052.4759-1-willy@infradead.org>

Since the XArray is embedded in the struct address_space, its address
contains exactly as much entropy as the address of
the mapping. This patch is purely preparatory for later patches which
will simplify the wait/wake interfaces.

Signed-off-by: Matthew Wilcox
---
 fs/dax.c | 32 +++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 157762fe2ba1..b7f54e386da8 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -116,7 +116,7 @@ static int dax_is_empty_entry(void *entry)
  * DAX page cache entry locking
  */
 struct exceptional_entry_key {
-	struct address_space *mapping;
+	struct xarray *xa;
 	pgoff_t entry_start;
 };
 
@@ -125,7 +125,7 @@ struct wait_exceptional_entry_queue {
 	struct exceptional_entry_key key;
 };
 
-static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
+static wait_queue_head_t *dax_entry_waitqueue(struct xarray *xa,
 		pgoff_t index, void *entry, struct exceptional_entry_key *key)
 {
 	unsigned long hash;
@@ -138,21 +138,21 @@ static wait_queue_head_t *dax_entry_waitqueue(struct address_space *mapping,
 	if (dax_is_pmd_entry(entry))
 		index &= ~PG_PMD_COLOUR;
 
-	key->mapping = mapping;
+	key->xa = xa;
 	key->entry_start = index;
 
-	hash = hash_long((unsigned long)mapping ^ index, DAX_WAIT_TABLE_BITS);
+	hash = hash_long((unsigned long)xa ^ index, DAX_WAIT_TABLE_BITS);
 	return wait_table + hash;
 }
 
-static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mode,
-		int sync, void *keyp)
+static int wake_exceptional_entry_func(wait_queue_entry_t *wait,
+		unsigned int mode, int sync, void *keyp)
 {
 	struct exceptional_entry_key *key = keyp;
 	struct wait_exceptional_entry_queue *ewait =
 		container_of(wait, struct wait_exceptional_entry_queue, wait);
 
-	if (key->mapping != ewait->key.mapping ||
+	if (key->xa != ewait->key.xa ||
 	    key->entry_start != ewait->key.entry_start)
 		return 0;
 	return autoremove_wake_function(wait, mode, sync, NULL);
@@ -163,13 +163,13 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
  * The important information it's conveying is whether the entry at
  * this index used to be a PMD entry.
  */
-static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
+static void dax_wake_mapping_entry_waiter(struct xarray *xa,
 		pgoff_t index, void *entry, bool wake_all)
 {
 	struct exceptional_entry_key key;
 	wait_queue_head_t *wq;
 
-	wq = dax_entry_waitqueue(mapping, index, entry, &key);
+	wq = dax_entry_waitqueue(xa, index, entry, &key);
 
 	/*
 	 * Checking for locked entry and prepare_to_wait_exclusive() happens
@@ -246,7 +246,8 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
 			return entry;
 		}
 
-		wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
+		wq = dax_entry_waitqueue(&mapping->i_pages, index, entry,
+				&ewait.key);
 		prepare_to_wait_exclusive(wq, &ewait.wait,
 					  TASK_UNINTERRUPTIBLE);
 		xa_unlock_irq(&mapping->i_pages);
@@ -270,7 +271,7 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
 	}
 	unlock_slot(mapping, slot);
 	xa_unlock_irq(&mapping->i_pages);
-	dax_wake_mapping_entry_waiter(mapping, index, entry, false);
+	dax_wake_mapping_entry_waiter(&mapping->i_pages, index, entry, false);
 }
 
 static void put_locked_mapping_entry(struct address_space *mapping,
@@ -290,7 +291,7 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
 		return;
 
 	/* We have to wake up next waiter for the page cache entry lock */
-	dax_wake_mapping_entry_waiter(mapping, index, entry, false);
+	dax_wake_mapping_entry_waiter(&mapping->i_pages, index, entry, false);
 }
 
 static unsigned long dax_entry_size(void *entry)
@@ -423,7 +424,8 @@ struct page *dax_lock_page(unsigned long pfn)
 			break;
 		}
 
-		wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
+		wq = dax_entry_waitqueue(&mapping->i_pages, index, entry,
+				&ewait.key);
 		prepare_to_wait_exclusive(wq, &ewait.wait,
 					  TASK_UNINTERRUPTIBLE);
 		xa_unlock_irq(&mapping->i_pages);
@@ -556,8 +558,8 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 			dax_disassociate_entry(entry, mapping, false);
 			radix_tree_delete(&mapping->i_pages, index);
 			mapping->nrexceptional--;
-			dax_wake_mapping_entry_waiter(mapping, index, entry,
-					true);
+			dax_wake_mapping_entry_waiter(&mapping->i_pages,
+					index, entry, true);
 		}
 
 		entry = dax_make_locked(0, size_flag | DAX_EMPTY);
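Background note (not part of the patch): the wait queue is picked by folding a
pointer and a page index into a small hash table, which is why hashing on the
XArray pointer works just as well as hashing on the mapping. Below is a
minimal, user-space sketch of that scheme under stated assumptions: the names
fake_xarray, fake_waitqueue and entry_waitqueue, the 64-slot table, and the
multiplicative hash are stand-ins for the kernel's wait_queue_head_t table,
hash_long() and DAX_WAIT_TABLE_BITS, not the code this patch modifies.

#include <stdint.h>
#include <stdio.h>

#define WAIT_TABLE_BITS	6			/* illustrative table size */
#define WAIT_TABLE_SIZE	(1UL << WAIT_TABLE_BITS)

struct fake_xarray { void *head; };		/* stand-in for struct xarray */
struct fake_waitqueue { int waiters; };		/* stand-in for wait_queue_head_t */

static struct fake_waitqueue wait_table[WAIT_TABLE_SIZE];

/* Fold an xarray pointer and a page index into one wait-table slot. */
static struct fake_waitqueue *entry_waitqueue(struct fake_xarray *xa,
					      unsigned long index)
{
	/* Fibonacci hashing; plays the role of hash_long(xa ^ index, bits). */
	uint64_t hash = ((uint64_t)(uintptr_t)xa ^ index) * 0x9E3779B97F4A7C15ULL;

	return wait_table + (hash >> (64 - WAIT_TABLE_BITS));
}

int main(void)
{
	struct fake_xarray xa1 = { 0 }, xa2 = { 0 };

	/* Distinct xarrays (or indices) usually land on distinct slots. */
	printf("slot A: %ld\n", (long)(entry_waitqueue(&xa1, 3) - wait_table));
	printf("slot B: %ld\n", (long)(entry_waitqueue(&xa2, 3) - wait_table));
	return 0;
}

Because the XArray is embedded in struct address_space, both addresses
identify the mapping equally well, so later patches can pass &mapping->i_pages
without losing any entropy in the hash.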