From patchwork Tue Mar 6 19:24:02 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10262591
From: Matthew Wilcox
To: Andrew Morton
Cc: Matthew Wilcox, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Ryusuke Konishi, linux-nilfs@vger.kernel.org,
    linux-btrfs@vger.kernel.org
Subject: [PATCH v8 52/63] memfd: Convert shmem_tag_pins to XArray
Date: Tue, 6 Mar 2018 11:24:02 -0800
Message-Id: <20180306192413.5499-53-willy@infradead.org>
In-Reply-To: <20180306192413.5499-1-willy@infradead.org>
References: <20180306192413.5499-1-willy@infradead.org>
X-Mailer: git-send-email 2.14.3

From: Matthew Wilcox

Simplify the locking by taking the spinlock while we walk the tree on the
assumption that many acquires and releases of the lock will be worse than
holding the lock for a (potentially) long time.  We could replicate the
same locking behaviour with the xarray, but would have to be careful that
the xa_node wasn't RCU-freed under us before we took the lock.
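For illustration, the heart of the conversion is the pattern of walking the
array with the xa_lock held, tagging matching entries, and pausing the
iteration periodically so we can drop the lock and reschedule.  Below is a
minimal, self-contained sketch of that pattern using the xas_* API as spelled
in this series; example_tag_busy_pages(), EXAMPLE_TAG and EXAMPLE_CHECK_SCHED
are hypothetical stand-ins for this sketch only (the patch itself uses
SHMEM_TAG_PINNED and XA_CHECK_SCHED), not part of the patch.

/*
 * Sketch only: tag entries that look pinned while holding the xa_lock,
 * dropping the lock periodically so we do not keep interrupts disabled
 * and other users of the mapping locked out for too long.  Hypothetical
 * names; mirrors the shmem_tag_pins() conversion in the diff below.
 */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/xarray.h>

#define EXAMPLE_TAG		PAGECACHE_TAG_TOWRITE	/* reused tag, as in the patch */
#define EXAMPLE_CHECK_SCHED	4096			/* stand-in for XA_CHECK_SCHED */

static void example_tag_busy_pages(struct address_space *mapping)
{
	XA_STATE(xas, &mapping->i_pages, 0);
	struct page *page;
	unsigned int checked = 0;

	xas_lock_irq(&xas);
	xas_for_each(&xas, page, ULONG_MAX) {
		/* Skip exceptional (value) entries such as swap entries. */
		if (xa_is_value(page))
			continue;
		/*
		 * References beyond the page cache's own and the page table
		 * mappings indicate an extra pin (e.g. get_user_pages()).
		 */
		if (page_count(page) - page_mapcount(page) > 1)
			xas_set_tag(&xas, EXAMPLE_TAG);

		if (++checked % EXAMPLE_CHECK_SCHED)
			continue;

		/*
		 * xas_pause() records our position so the walk restarts from
		 * the next index after the lock is dropped; any xa_node we
		 * were standing on may be freed once we let go of the lock.
		 */
		xas_pause(&xas);
		xas_unlock_irq(&xas);
		cond_resched();
		xas_lock_irq(&xas);
	}
	xas_unlock_irq(&xas);
}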
Signed-off-by: Matthew Wilcox
---
 mm/memfd.c | 43 ++++++++++++++++++-------------------------
 1 file changed, 18 insertions(+), 25 deletions(-)

diff --git a/mm/memfd.c b/mm/memfd.c
index 4cf7401cb09c..3b299d72df78 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -21,7 +21,7 @@
 #include 
 
 /*
- * We need a tag: a new tag would expand every radix_tree_node by 8 bytes,
+ * We need a tag: a new tag would expand every xa_node by 8 bytes,
  * so reuse a tag which we firmly believe is never set or cleared on shmem.
  */
 #define SHMEM_TAG_PINNED	PAGECACHE_TAG_TOWRITE
@@ -29,35 +29,28 @@
 
 static void shmem_tag_pins(struct address_space *mapping)
 {
-	struct radix_tree_iter iter;
-	void __rcu **slot;
-	pgoff_t start;
+	XA_STATE(xas, &mapping->i_pages, 0);
 	struct page *page;
+	unsigned int tagged = 0;
 
 	lru_add_drain();
-	start = 0;
-	rcu_read_lock();
-
-	radix_tree_for_each_slot(slot, &mapping->i_pages, &iter, start) {
-		page = radix_tree_deref_slot(slot);
-		if (!page || radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-		} else if (page_count(page) - page_mapcount(page) > 1) {
-			xa_lock_irq(&mapping->i_pages);
-			radix_tree_tag_set(&mapping->i_pages, iter.index,
-					SHMEM_TAG_PINNED);
-			xa_unlock_irq(&mapping->i_pages);
-		}
 
-		if (need_resched()) {
-			slot = radix_tree_iter_resume(slot, &iter);
-			cond_resched_rcu();
-		}
+	xas_lock_irq(&xas);
+	xas_for_each(&xas, page, ULONG_MAX) {
+		if (xa_is_value(page))
+			continue;
+		if (page_count(page) - page_mapcount(page) > 1)
+			xas_set_tag(&xas, SHMEM_TAG_PINNED);
+
+		if (++tagged % XA_CHECK_SCHED)
+			continue;
+
+		xas_pause(&xas);
+		xas_unlock_irq(&xas);
+		cond_resched();
+		xas_lock_irq(&xas);
 	}
-	rcu_read_unlock();
+	xas_unlock_irq(&xas);
 }
 
 /*