From patchwork Sun Jun 17 02:00:00 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 10468383
From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Matthew Wilcox <willy@infradead.org>, Jan Kara, Jeff Layton,
	Lukas Czerner, Ross Zwisler, Christoph Hellwig, Goldwyn Rodrigues,
	Nicholas Piggin, Ryusuke Konishi, linux-nilfs@vger.kernel.org,
	Jaegeuk Kim, Chao Yu, linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v14 22/74] page cache: Add and replace pages using the XArray
Date: Sat, 16 Jun 2018 19:00:00 -0700
Message-Id: <20180617020052.4759-23-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180617020052.4759-1-willy@infradead.org>
References: <20180617020052.4759-1-willy@infradead.org>

Use the XArray APIs to add and replace pages in the page cache.
This removes two uses of the radix tree preload API and is
significantly shorter code.  It also removes the last user of
__radix_tree_create() outside radix-tree.c itself, so make it static.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
---
 include/linux/radix-tree.h |   3 -
 include/linux/swap.h       |   8 ++-
 lib/radix-tree.c           |   6 +-
 mm/filemap.c               | 139 +++++++++++++++----------------------
 4 files changed, 66 insertions(+), 90 deletions(-)

diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index f64beb9ba175..4b6f685309fc 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -231,9 +231,6 @@ static inline int radix_tree_exception(void *arg)
 	return unlikely((unsigned long)arg & RADIX_TREE_ENTRY_MASK);
 }
 
-int __radix_tree_create(struct radix_tree_root *, unsigned long index,
-			unsigned order, struct radix_tree_node **nodep,
-			void __rcu ***slotp);
 int __radix_tree_insert(struct radix_tree_root *, unsigned long index,
 			unsigned order, void *);
 static inline int radix_tree_insert(struct radix_tree_root *root,
diff --git a/include/linux/swap.h b/include/linux/swap.h
index f73eafcaf4e9..1b91e7f7bdeb 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -300,8 +300,12 @@ void *workingset_eviction(struct address_space *mapping, struct page *page);
 bool workingset_refault(void *shadow);
 void workingset_activation(struct page *page);
 
-/* Do not use directly, use workingset_lookup_update */
-void workingset_update_node(struct radix_tree_node *node);
+/* Only track the nodes of mappings with shadow entries */
+void workingset_update_node(struct xa_node *node);
+#define mapping_set_update(xas, mapping) do {				\
+	if (!dax_mapping(mapping) && !shmem_mapping(mapping))		\
+		xas_set_update(xas, workingset_update_node);		\
+} while (0)
 
 /* Returns workingset_update_node() if the mapping has shadow entries. */
 #define workingset_lookup_update(mapping)				\
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index f7785f7cbd5f..5c8a262f506c 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -740,9 +740,9 @@ static bool delete_node(struct radix_tree_root *root,
  *
  *	Returns -ENOMEM, or 0 for success.
  */
-int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
-			unsigned order, struct radix_tree_node **nodep,
-			void __rcu ***slotp)
+static int __radix_tree_create(struct radix_tree_root *root,
+			unsigned long index, unsigned order,
+			struct radix_tree_node **nodep, void __rcu ***slotp)
 {
 	struct radix_tree_node *node = NULL, *child;
 	void __rcu **slot = (void __rcu **)&root->xa_head;
diff --git a/mm/filemap.c b/mm/filemap.c
index 8de36e14e22f..965ff68e5b8d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -111,35 +111,6 @@
  *   ->tasklist_lock            (memory_failure, collect_procs_ao)
  */
 
-static int page_cache_tree_insert(struct address_space *mapping,
-				  struct page *page, void **shadowp)
-{
-	struct radix_tree_node *node;
-	void **slot;
-	int error;
-
-	error = __radix_tree_create(&mapping->i_pages, page->index, 0,
-				    &node, &slot);
-	if (error)
-		return error;
-	if (*slot) {
-		void *p;
-
-		p = radix_tree_deref_slot_protected(slot,
-						&mapping->i_pages.xa_lock);
-		if (!xa_is_value(p))
-			return -EEXIST;
-
-		mapping->nrexceptional--;
-		if (shadowp)
-			*shadowp = p;
-	}
-	__radix_tree_replace(&mapping->i_pages, node, slot, page,
-			     workingset_lookup_update(mapping));
-	mapping->nrpages++;
-	return 0;
-}
-
 static void page_cache_tree_delete(struct address_space *mapping,
 				   struct page *page, void *shadow)
 {
@@ -775,51 +746,44 @@ EXPORT_SYMBOL(file_write_and_wait_range);
  * locked.  This function does not add the new page to the LRU, the
  * caller must do that.
  *
- * The remove + add is atomic.  The only way this function can fail is
- * memory allocation failure.
+ * The remove + add is atomic.  This function cannot fail.
  */
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 {
-	int error;
+	struct address_space *mapping = old->mapping;
+	void (*freepage)(struct page *) = mapping->a_ops->freepage;
+	pgoff_t offset = old->index;
+	XA_STATE(xas, &mapping->i_pages, offset);
+	unsigned long flags;
 
 	VM_BUG_ON_PAGE(!PageLocked(old), old);
 	VM_BUG_ON_PAGE(!PageLocked(new), new);
 	VM_BUG_ON_PAGE(new->mapping, new);
 
-	error = radix_tree_preload(gfp_mask & GFP_RECLAIM_MASK);
-	if (!error) {
-		struct address_space *mapping = old->mapping;
-		void (*freepage)(struct page *);
-		unsigned long flags;
-
-		pgoff_t offset = old->index;
-		freepage = mapping->a_ops->freepage;
+	get_page(new);
+	new->mapping = mapping;
+	new->index = offset;
 
-		get_page(new);
-		new->mapping = mapping;
-		new->index = offset;
+	xas_lock_irqsave(&xas, flags);
+	xas_store(&xas, new);
 
-		xa_lock_irqsave(&mapping->i_pages, flags);
-		__delete_from_page_cache(old, NULL);
-		error = page_cache_tree_insert(mapping, new, NULL);
-		BUG_ON(error);
-
-		/*
-		 * hugetlb pages do not participate in page cache accounting.
-		 */
-		if (!PageHuge(new))
-			__inc_node_page_state(new, NR_FILE_PAGES);
-		if (PageSwapBacked(new))
-			__inc_node_page_state(new, NR_SHMEM);
-		xa_unlock_irqrestore(&mapping->i_pages, flags);
-		mem_cgroup_migrate(old, new);
-		radix_tree_preload_end();
-		if (freepage)
-			freepage(old);
-		put_page(old);
-	}
+	old->mapping = NULL;
+	/* hugetlb pages do not participate in page cache accounting. */
+	if (!PageHuge(old))
+		__dec_node_page_state(new, NR_FILE_PAGES);
+	if (!PageHuge(new))
+		__inc_node_page_state(new, NR_FILE_PAGES);
+	if (PageSwapBacked(old))
+		__dec_node_page_state(new, NR_SHMEM);
+	if (PageSwapBacked(new))
+		__inc_node_page_state(new, NR_SHMEM);
+	xas_unlock_irqrestore(&xas, flags);
+	mem_cgroup_migrate(old, new);
+	if (freepage)
+		freepage(old);
+	put_page(old);
 
-	return error;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
@@ -828,12 +792,15 @@ static int __add_to_page_cache_locked(struct page *page,
 				      pgoff_t offset, gfp_t gfp_mask,
 				      void **shadowp)
 {
+	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
+	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	mapping_set_update(&xas, mapping);
 
 	if (!huge) {
 		error = mem_cgroup_try_charge(page, current->mm,
@@ -842,39 +809,47 @@ static int __add_to_page_cache_locked(struct page *page,
 			return error;
 	}
 
-	error = radix_tree_maybe_preload(gfp_mask & GFP_RECLAIM_MASK);
-	if (error) {
-		if (!huge)
-			mem_cgroup_cancel_charge(page, memcg, false);
-		return error;
-	}
-
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;
 
-	xa_lock_irq(&mapping->i_pages);
-	error = page_cache_tree_insert(mapping, page, shadowp);
-	radix_tree_preload_end();
-	if (unlikely(error))
-		goto err_insert;
+	do {
+		xas_lock_irq(&xas);
+		old = xas_load(&xas);
+		if (old && !xa_is_value(old))
+			xas_set_err(&xas, -EEXIST);
+		xas_store(&xas, page);
+		if (xas_error(&xas))
+			goto unlock;
+
+		if (xa_is_value(old)) {
+			mapping->nrexceptional--;
+			if (shadowp)
+				*shadowp = old;
+		}
+		mapping->nrpages++;
+
+		/* hugetlb pages do not participate in page cache accounting */
+		if (!huge)
+			__inc_node_page_state(page, NR_FILE_PAGES);
+unlock:
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
+
+	if (xas_error(&xas))
+		goto error;
 
-	/* hugetlb pages do not participate in page cache accounting. */
-	if (!huge)
-		__inc_node_page_state(page, NR_FILE_PAGES);
-	xa_unlock_irq(&mapping->i_pages);
 	if (!huge)
 		mem_cgroup_commit_charge(page, memcg, false, false);
 	trace_mm_filemap_add_to_page_cache(page);
 	return 0;
-err_insert:
+error:
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	xa_unlock_irq(&mapping->i_pages);
 	if (!huge)
 		mem_cgroup_cancel_charge(page, memcg, false);
 	put_page(page);
-	return error;
+	return xas_error(&xas);
 }
 
 /**
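
Illustrative sketch, not part of the patch: the change to
__add_to_page_cache_locked() above replaces radix_tree_preload() with
the general XArray idiom for inserting under a spinlock, an xas_nomem()
retry loop.  In the sketch below, store_entry() is a hypothetical
helper name chosen for illustration; XA_STATE(), xas_lock(),
xas_store(), xas_unlock(), xas_nomem() and xas_error() are the real
XArray APIs the patch itself uses.

	#include <linux/xarray.h>

	/*
	 * Store @entry at @index in @xa.  If xas_store() cannot allocate
	 * a node under the lock, it records -ENOMEM in the xa_state;
	 * xas_nomem() then allocates with @gfp outside the lock and
	 * returns true, so the store is retried with the preallocated
	 * node.
	 */
	static int store_entry(struct xarray *xa, unsigned long index,
			       void *entry, gfp_t gfp)
	{
		XA_STATE(xas, xa, index);

		do {
			xas_lock(&xas);
			xas_store(&xas, entry);
			xas_unlock(&xas);
		} while (xas_nomem(&xas, gfp));

		return xas_error(&xas);
	}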