From patchwork Mon Feb 19 19:45:44 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10228597
From: Matthew Wilcox
To: Andrew Morton
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org
Subject: [PATCH v7 49/61] shmem: Convert shmem_add_to_page_cache to XArray
Date: Mon, 19 Feb 2018 11:45:44 -0800
Message-Id: <20180219194556.6575-50-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180219194556.6575-1-willy@infradead.org>
References: <20180219194556.6575-1-willy@infradead.org>
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Matthew Wilcox

This removes the last caller of radix_tree_maybe_preload_order().
Simpler code, unless we run out of memory for new xa_nodes partway
through inserting entries into the xarray.  Hopefully we can support
multi-index entries in the page cache soon and all the awful code
goes away.
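For reference, an illustrative sketch (not part of this patch) of the
standard XArray retry idiom that the new gfp argument feeds; this is
what replaces the radix-tree preload dance at the call sites below.
The helper name is hypothetical:

	#include <linux/xarray.h>

	static int store_entry(struct xarray *xa, unsigned long index,
			       void *entry, gfp_t gfp)
	{
		XA_STATE(xas, xa, index);

		do {
			xas_lock_irq(&xas);
			xas_store(&xas, entry);	/* may set -ENOMEM in xas */
			xas_unlock_irq(&xas);
			/*
			 * On -ENOMEM, xas_nomem() allocates a node outside
			 * the lock and returns true, asking us to retry.
			 */
		} while (xas_nomem(&xas, gfp));

		return xas_error(&xas);
	}

xas_nomem() only retries for -ENOMEM; any other error stays in the
xa_state and is reported by xas_error().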
Signed-off-by: Matthew Wilcox
---
 mm/shmem.c | 87 ++++++++++++++++++++++++++++----------------------------------
 1 file changed, 39 insertions(+), 48 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index ccb6d7ecdee0..347661653803 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -558,9 +558,10 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
  */
 static int shmem_add_to_page_cache(struct page *page,
 				   struct address_space *mapping,
-				   pgoff_t index, void *expected)
+				   pgoff_t index, void *expected, gfp_t gfp)
 {
-	int error, nr = hpage_nr_pages(page);
+	XA_STATE(xas, &mapping->pages, index);
+	unsigned long i, nr = 1UL << compound_order(page);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -569,49 +570,47 @@ static int shmem_add_to_page_cache(struct page *page,
 	VM_BUG_ON(expected && PageTransHuge(page));
 
 	page_ref_add(page, nr);
-	page->mapping = mapping;
 	page->index = index;
+	page->mapping = mapping;
 
-	xa_lock_irq(&mapping->pages);
-	if (PageTransHuge(page)) {
-		void __rcu **results;
-		pgoff_t idx;
-		int i;
-
-		error = 0;
-		if (radix_tree_gang_lookup_slot(&mapping->pages,
-					&results, &idx, index, 1) &&
-				idx < index + HPAGE_PMD_NR) {
-			error = -EEXIST;
+	do {
+		xas_lock_irq(&xas);
+		xas_create_range(&xas, index + nr - 1);
+		if (xas_error(&xas))
+			goto unlock;
+		for (i = 0; i < nr; i++) {
+			void *entry = xas_load(&xas);
+			if (entry != expected)
+				xas_set_err(&xas, -ENOENT);
+			if (xas_error(&xas))
+				goto undo;
+			xas_store(&xas, page + i);
+			xas_next(&xas);
 		}
-
-		if (!error) {
-			for (i = 0; i < HPAGE_PMD_NR; i++) {
-				error = radix_tree_insert(&mapping->pages,
-						index + i, page + i);
-				VM_BUG_ON(error);
-			}
+		if (PageTransHuge(page)) {
 			count_vm_event(THP_FILE_ALLOC);
+			__inc_node_page_state(page, NR_SHMEM_THPS);
 		}
-	} else if (!expected) {
-		error = radix_tree_insert(&mapping->pages, index, page);
-	} else {
-		error = shmem_xa_replace(mapping, index, expected, page);
-	}
-
-	if (!error) {
 		mapping->nrpages += nr;
-		if (PageTransHuge(page))
-			__inc_node_page_state(page, NR_SHMEM_THPS);
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
 		__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
-		xa_unlock_irq(&mapping->pages);
-	} else {
+		goto unlock;
+undo:
+		while (i-- > 0) {
+			xas_store(&xas, NULL);
+			xas_prev(&xas);
+		}
+unlock:
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, gfp));
+
+	if (xas_error(&xas)) {
 		page->mapping = NULL;
-		xa_unlock_irq(&mapping->pages);
 		page_ref_sub(page, nr);
+		return xas_error(&xas);
 	}
-	return error;
+
+	return 0;
 }
 
 /*
@@ -1159,7 +1158,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
 	 */
 	if (!error)
 		error = shmem_add_to_page_cache(*pagep, mapping, index,
-						radswap);
+						radswap, gfp);
 	if (error != -ENOMEM) {
 		/*
 		 * Truncation and eviction use free_swap_and_cache(), which
@@ -1677,7 +1676,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 						false);
 		if (!error) {
 			error = shmem_add_to_page_cache(page, mapping, index,
-						swp_to_radix_entry(swap));
+						swp_to_radix_entry(swap), gfp);
 			/*
 			 * We already confirmed swap under page lock, and make
 			 * no memory allocation here, so usually no possibility
@@ -1783,13 +1782,8 @@ alloc_nohuge:	page = shmem_alloc_and_acct_page(gfp, inode,
 						PageTransHuge(page));
 		if (error)
 			goto unacct;
-		error = radix_tree_maybe_preload_order(gfp & GFP_RECLAIM_MASK,
-				compound_order(page));
-		if (!error) {
-			error = shmem_add_to_page_cache(page, mapping, hindex,
-							NULL);
-			radix_tree_preload_end();
-		}
+		error = shmem_add_to_page_cache(page, mapping, hindex,
+						NULL, gfp & GFP_RECLAIM_MASK);
 		if (error) {
 			mem_cgroup_cancel_charge(page, memcg,
 						 PageTransHuge(page));
@@ -2256,11 +2250,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	if (ret)
 		goto out_release;
 
-	ret = radix_tree_maybe_preload(gfp & GFP_RECLAIM_MASK);
-	if (!ret) {
-		ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL);
-		radix_tree_preload_end();
-	}
+	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
+				      gfp & GFP_RECLAIM_MASK);
 	if (ret)
 		goto out_release_uncharge;
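For readers following the series: the insert loop in the converted
shmem_add_to_page_cache() leans on xas_create_range() (the two-argument
form used by this series) to allocate every node in the range up front,
so the per-slot stores cannot fail on allocation partway through.  A
minimal standalone sketch of that shape, with a hypothetical helper name
and the conflict/rollback handling omitted for brevity:

	#include <linux/xarray.h>

	/* Hypothetical: store nr consecutive pages starting at index. */
	static int store_range(struct xarray *xa, unsigned long index,
			       struct page *page, unsigned long nr, gfp_t gfp)
	{
		XA_STATE(xas, xa, index);
		unsigned long i;

		do {
			xas_lock_irq(&xas);
			/* Create all slots first; may leave -ENOMEM in xas. */
			xas_create_range(&xas, index + nr - 1);
			if (!xas_error(&xas)) {
				xas_set(&xas, index);	/* rewind to first slot */
				for (i = 0; i < nr; i++) {
					xas_store(&xas, page + i);
					xas_next(&xas);
				}
			}
			xas_unlock_irq(&xas);
		} while (xas_nomem(&xas, gfp));	/* allocate unlocked, retry */

		return xas_error(&xas);
	}

As with the single-entry idiom, xas_nomem() retries only for -ENOMEM,
so callers like shmem_mfill_atomic_pte() above can simply pass
gfp & GFP_RECLAIM_MASK instead of preloading the radix tree.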