From patchwork Wed Dec 6 00:41:22 2017
From: Matthew Wilcox
Subject: [PATCH v4 36/73] mm: Convert page migration to XArray
Date: Tue, 5 Dec 2017 16:41:22 -0800
Message-Id: <20171206004159.3755-37-willy@infradead.org>
In-Reply-To: <20171206004159.3755-1-willy@infradead.org>
References: <20171206004159.3755-1-willy@infradead.org>
Cc: Matthew Wilcox, Ross Zwisler, Jens Axboe, Rehas Sachdeva,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
    linux-btrfs@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org

From: Matthew Wilcox

Signed-off-by: Matthew Wilcox
---
 mm/migrate.c | 40 ++++++++++++++++------------------------
 1 file changed, 16 insertions(+), 24 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 59f18c571120..7122fec9b075 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -322,7 +322,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 	page = migration_entry_to_page(entry);
 
 	/*
-	 * Once radix-tree replacement of page migration started, page_count
+	 * Once page cache replacement of page migration started, page_count
 	 * *must* be zero. And, we don't want to call wait_on_page_locked()
 	 * against a page without get_page().
 	 * So, we use get_page_unless_zero(), here. Even failed, page fault
@@ -437,10 +437,10 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		struct buffer_head *head, enum migrate_mode mode,
 		int extra_count)
 {
+	XA_STATE(xas, &mapping->pages, page_index(page));
 	struct zone *oldzone, *newzone;
 	int dirty;
 	int expected_count = 1 + extra_count;
-	void **pslot;
 
 	/*
 	 * Device public or private pages have an extra refcount as they are
@@ -466,20 +466,16 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	oldzone = page_zone(page);
 	newzone = page_zone(newpage);
 
-	xa_lock_irq(&mapping->pages);
-
-	pslot = radix_tree_lookup_slot(&mapping->pages,
-					page_index(page));
+	xas_lock_irq(&xas);
 
 	expected_count += 1 + page_has_private(page);
-	if (page_count(page) != expected_count ||
-		radix_tree_deref_slot_protected(pslot, &mapping->pages.xa_lock) != page) {
-		xa_unlock_irq(&mapping->pages);
+	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
 	if (!page_ref_freeze(page, expected_count)) {
-		xa_unlock_irq(&mapping->pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -493,7 +489,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	if (mode == MIGRATE_ASYNC && head &&
 			!buffer_migrate_lock_buffers(head, mode)) {
 		page_ref_unfreeze(page, expected_count);
-		xa_unlock_irq(&mapping->pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -521,7 +517,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		SetPageDirty(newpage);
 	}
 
-	radix_tree_replace_slot(&mapping->pages, pslot, newpage);
+	xas_store(&xas, newpage);
 
 	/*
 	 * Drop cache reference from old page by unfreezing
@@ -530,7 +526,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 */
 	page_ref_unfreeze(page, expected_count - 1);
 
-	xa_unlock(&mapping->pages);
+	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
 
 	/*
@@ -570,22 +566,18 @@ EXPORT_SYMBOL(migrate_page_move_mapping);
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 				   struct page *newpage, struct page *page)
 {
+	XA_STATE(xas, &mapping->pages, page_index(page));
 	int expected_count;
-	void **pslot;
-
-	xa_lock_irq(&mapping->pages);
-
-	pslot = radix_tree_lookup_slot(&mapping->pages, page_index(page));
 
+	xas_lock_irq(&xas);
 	expected_count = 2 + page_has_private(page);
-	if (page_count(page) != expected_count ||
-		radix_tree_deref_slot_protected(pslot, &mapping->pages.xa_lock) != page) {
-		xa_unlock_irq(&mapping->pages);
+	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
 	if (!page_ref_freeze(page, expected_count)) {
-		xa_unlock_irq(&mapping->pages);
+		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
@@ -594,11 +586,11 @@ int migrate_huge_page_move_mapping(struct address_space *mapping,
 
 	get_page(newpage);
 
-	radix_tree_replace_slot(&mapping->pages, pslot, newpage);
+	xas_store(&xas, newpage);
 
 	page_ref_unfreeze(page, expected_count - 1);
 
-	xa_unlock_irq(&mapping->pages);
+	xas_unlock_irq(&xas);
 
 	return MIGRATEPAGE_SUCCESS;
 }
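
For readers unfamiliar with the XAS API, here is a minimal sketch of the
lookup/validate/replace pattern that both converted functions now follow.
The helper name and signature below are illustrative only; they are not
part of this patch or of the kernel API:

#include <linux/xarray.h>

/* Hypothetical helper, for illustration only. */
static int replace_if_unchanged(struct xarray *xa, unsigned long index,
				void *old, void *new)
{
	XA_STATE(xas, xa, index);	/* cursor: array plus target index */
	int err = 0;

	xas_lock_irq(&xas);		/* takes xa->xa_lock, disables irqs */
	if (xas_load(&xas) != old) {
		/* Entry changed under us; caller can retry. */
		err = -EAGAIN;
	} else {
		/*
		 * xas_store() reuses the walk already done by xas_load(),
		 * so no second lookup is needed.  This is what lets the
		 * patch drop the pslot variable and the paired
		 * radix_tree_lookup_slot()/radix_tree_replace_slot() calls.
		 */
		xas_store(&xas, new);
	}
	xas_unlock_irq(&xas);
	return err;
}

Because the store only ever overwrites an entry that is already present,
xas_store() needs no node allocation in this pattern and cannot fail with
-ENOMEM, so the -EAGAIN race check is the only error handling required.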