From patchwork Wed Sep 25 23:21:22 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
X-Patchwork-Id: 2945771
From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Subject: [RFC PATCH v4 34/40] mm: Restructure the compaction part of CMA for
 wider use
To: akpm@linux-foundation.org, mgorman@suse.de, dave@sr71.net,
 hannes@cmpxchg.org, tony.luck@intel.com, matthew.garrett@nebula.com,
 riel@redhat.com, arjan@linux.intel.com, srinivas.pandruvada@linux.intel.com,
 willy@linux.intel.com, kamezawa.hiroyu@jp.fujitsu.com, lenb@kernel.org,
 rjw@sisk.pl
Cc: gargankita@gmail.com, paulmck@linux.vnet.ibm.com,
 svaidy@linux.vnet.ibm.com, andi@firstfloor.org,
 isimatu.yasuaki@jp.fujitsu.com, santosh.shilimkar@ti.com,
 kosaki.motohiro@gmail.com, srivatsa.bhat@linux.vnet.ibm.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Thu, 26 Sep 2013 04:51:22 +0530
Message-ID: <20130925232120.26184.71686.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130925231250.26184.31438.stgit@srivatsabhat.in.ibm.com>
References: <20130925231250.26184.31438.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3

CMA uses bits and pieces of the memory compaction algorithms to perform
large contiguous allocations. Those algorithms would be useful for memory
power management too, to evacuate entire regions of memory. So restructure
the code in a way that makes it easy to reuse for both use cases.

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 mm/compaction.c |   85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/internal.h   |   40 +++++++++++++++++++++++++++
 mm/page_alloc.c |   51 +++++++++--------------------------
 3 files changed, 138 insertions(+), 38 deletions(-)
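Notes:

As an illustration of the intended reuse, a memory power management
driver could evacuate a region of memory roughly as sketched below.
evacuate_region() and evac_target_alloc() are made-up names and the
migrate reason is only a stand-in; nothing except compact_range(),
struct aggression_control and struct free_page_control is actually
introduced by this patch.

/* (made up) Allocate a target page *outside* the region being evacuated. */
static struct page *evac_target_alloc(struct page *migratepage,
				      unsigned long data, int **result)
{
	/*
	 * A real implementation must take care not to allocate from the
	 * very region that is being evacuated.
	 */
	return alloc_page(GFP_HIGHUSER_MOVABLE);
}

/* (made up) Evacuate all used pages in [start_pfn, end_pfn). */
static int evacuate_region(struct zone *zone, unsigned long start_pfn,
			   unsigned long end_pfn)
{
	struct compact_control cc = {
		.order = -1,
		.zone = zone,
		.sync = false,			/* opportunistic; don't stall */
	};
	struct aggression_control ac = {
		.isolate_unevictable = false,	/* leave mlocked pages alone */
		.prep_all = false,		/* migrate_prep_local() suffices */
		.reclaim_clean = false,
		.max_tries = 1,			/* give up early */
		.reason = MR_MEMORY_HOTPLUG,	/* stand-in for a new MR_* value */
	};
	struct free_page_control fc = {
		.free_page_alloc = evac_target_alloc,
		.alloc_data = 0,
		.release_freepages = NULL,	/* nothing accumulates in cc.freepages */
	};

	INIT_LIST_HEAD(&cc.migratepages);
	return compact_range(&cc, &ac, &fc, start_pfn, end_pfn);
}

The CMA conversion in the mm/page_alloc.c hunk below follows the same
pattern, just with more aggressive settings.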
diff --git a/mm/compaction.c b/mm/compaction.c
index 511b191..c775066 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -816,6 +816,91 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 	return ISOLATE_SUCCESS;
 }
 
+/*
+ * Make free pages available within the given range, using compaction to
+ * migrate used pages elsewhere.
+ *
+ * [start, end) must belong to a single zone.
+ *
+ * This function is roughly based on the logic inside compact_zone().
+ */
+int compact_range(struct compact_control *cc, struct aggression_control *ac,
+		  struct free_page_control *fc, unsigned long start,
+		  unsigned long end)
+{
+	unsigned long pfn = start;
+	int ret = 0, tries, migrate_mode;
+
+	if (ac->prep_all)
+		migrate_prep();
+	else
+		migrate_prep_local();
+
+	while (pfn < end || !list_empty(&cc->migratepages)) {
+		if (list_empty(&cc->migratepages)) {
+			cc->nr_migratepages = 0;
+			pfn = isolate_migratepages_range(cc->zone, cc,
+					pfn, end, ac->isolate_unevictable);
+
+			if (!pfn) {
+				ret = -EINTR;
+				break;
+			}
+		}
+
+		for (tries = 0; tries < ac->max_tries; tries++) {
+			unsigned long nr_migrate, nr_remaining;
+
+			if (fatal_signal_pending(current)) {
+				ret = -EINTR;
+				goto out;
+			}
+
+			if (ac->reclaim_clean) {
+				int nr_reclaimed;
+
+				nr_reclaimed =
+					reclaim_clean_pages_from_list(cc->zone,
+							&cc->migratepages);
+
+				cc->nr_migratepages -= nr_reclaimed;
+			}
+
+			migrate_mode = cc->sync ? MIGRATE_SYNC : MIGRATE_ASYNC;
+			nr_migrate = cc->nr_migratepages;
+			ret = migrate_pages(&cc->migratepages,
+					fc->free_page_alloc, fc->alloc_data,
+					migrate_mode, ac->reason);
+
+			update_nr_listpages(cc);
+			nr_remaining = cc->nr_migratepages;
+			trace_mm_compaction_migratepages(
+				nr_migrate - nr_remaining, nr_remaining);
+
+			/* All isolated pages migrated; get the next batch. */
+			if (list_empty(&cc->migratepages))
+				break;
+		}
+
+		if (tries == ac->max_tries) {
+			ret = ret < 0 ? ret : -EBUSY;
+			break;
+		}
+	}
+
+out:
+	if (ret < 0)
+		putback_movable_pages(&cc->migratepages);
+
+	/* Release free pages and check accounting */
+	if (fc->release_freepages)
+		cc->nr_freepages -= fc->release_freepages(fc->free_data);
+
+	VM_BUG_ON(cc->nr_freepages != 0);
+
+	return ret;
+}
+
 static int compact_finished(struct zone *zone,
 			    struct compact_control *cc)
 {
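A note on the free_page_control callbacks: a caller that accumulates
free pages the way compact_zone() does could reuse the existing
compaction_alloc() and release_freepages() helpers, roughly as sketched
below. release_compaction_freepages() and compaction_fc_init() are
made-up names, and since those helpers are static, such code would live
in mm/compaction.c itself.

/* (made up) Adapt release_freepages() to the release_freepages hook. */
static unsigned long release_compaction_freepages(unsigned long info)
{
	struct compact_control *cc = (struct compact_control *)info;

	/* Returns the number of accumulated free pages given back. */
	return release_freepages(&cc->freepages);
}

/* (made up) Wire compaction's own allocator into a free_page_control. */
static void compaction_fc_init(struct compact_control *cc,
			       struct free_page_control *fc)
{
	fc->free_page_alloc = compaction_alloc;
	fc->alloc_data = (unsigned long)cc;
	fc->release_freepages = release_compaction_freepages;
	fc->free_data = (unsigned long)cc;
}

compact_range() subtracts the release hook's return value from
cc->nr_freepages and then does VM_BUG_ON(cc->nr_freepages != 0), so the
hook must hand back every free page that free_page_alloc() did not
consume from cc->freepages.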
diff --git a/mm/internal.h b/mm/internal.h
index 684f7aa..acb50f8 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -107,6 +107,42 @@ extern bool is_free_buddy_page(struct page *page);
 /*
  * in mm/compaction.c
  */
+
+struct free_page_control {
+
+	/* Function used to allocate free pages as target of migration. */
+	struct page * (*free_page_alloc)(struct page *migratepage,
+					 unsigned long data,
+					 int **result);
+
+	unsigned long alloc_data;	/* Private data for free_page_alloc() */
+
+	/*
+	 * Function to release the accumulated free pages after the compaction
+	 * run.
+	 */
+	unsigned long (*release_freepages)(unsigned long info);
+	unsigned long free_data;	/* Private data for release_freepages() */
+};
+
+/*
+ * aggression_control gives us fine-grained control to specify how aggressively
+ * we want to compact memory.
+ */
+struct aggression_control {
+	bool isolate_unevictable;	/* Isolate unevictable pages too */
+	bool prep_all;			/* Use migrate_prep() instead of
+					 * migrate_prep_local().
+					 */
+	bool reclaim_clean;		/* Reclaim clean page-cache pages */
+	int max_tries;			/* No. of tries to migrate the
+					 * isolated pages before giving up.
+					 */
+	int reason;			/* Reason for compaction, passed on
+					 * as reason for migrate_pages().
+					 */
+};
+
 /*
  * compact_control is used to track pages being migrated and the free pages
  * they are being migrated to during memory compaction. The free_pfn starts
@@ -141,6 +177,10 @@ unsigned long isolate_migratepages_range(struct zone *zone,
 			struct compact_control *cc,
 			unsigned long low_pfn, unsigned long end_pfn,
 			bool unevictable);
+int compact_range(struct compact_control *cc, struct aggression_control *ac,
+		  struct free_page_control *fc, unsigned long start,
+		  unsigned long end);
+
 #endif
 
 /*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a15ac96..70c3d7a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6893,46 +6893,21 @@ static unsigned long pfn_max_align_up(unsigned long pfn)
 static int __alloc_contig_migrate_range(struct compact_control *cc,
 					unsigned long start, unsigned long end)
 {
-	/* This function is based on compact_zone() from compaction.c. */
-	unsigned long nr_reclaimed;
-	unsigned long pfn = start;
-	unsigned int tries = 0;
-	int ret = 0;
-
-	migrate_prep();
-
-	while (pfn < end || !list_empty(&cc->migratepages)) {
-		if (fatal_signal_pending(current)) {
-			ret = -EINTR;
-			break;
-		}
-
-		if (list_empty(&cc->migratepages)) {
-			cc->nr_migratepages = 0;
-			pfn = isolate_migratepages_range(cc->zone, cc,
-							 pfn, end, true);
-			if (!pfn) {
-				ret = -EINTR;
-				break;
-			}
-			tries = 0;
-		} else if (++tries == 5) {
-			ret = ret < 0 ? ret : -EBUSY;
-			break;
-		}
+	struct aggression_control ac = {
+		.isolate_unevictable = true,
+		.prep_all = true,
+		.reclaim_clean = true,
+		.max_tries = 5,
+		.reason = MR_CMA,
+	};
 
-		nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
-							     &cc->migratepages);
-		cc->nr_migratepages -= nr_reclaimed;
+	struct free_page_control fc = {
+		.free_page_alloc = alloc_migrate_target,
+		.alloc_data = 0,
+		.release_freepages = NULL,
+	};
 
-		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
-				    0, MIGRATE_SYNC, MR_CMA);
-	}
-	if (ret < 0) {
-		putback_movable_pages(&cc->migratepages);
-		return ret;
-	}
-	return 0;
+	return compact_range(cc, &ac, &fc, start, end);
 }
 
 /**
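For completeness, the behavior visible to CMA users is meant to stay
unchanged: alloc_contig_range(), the only caller of
__alloc_contig_migrate_range(), keeps working as before and now simply
exercises compact_range() underneath. A minimal sketch of such a caller
(example_contig_alloc() is a made-up name; error handling and the CMA
region reservation are elided):

static int example_contig_alloc(unsigned long start_pfn,
				unsigned long nr_pages)
{
	int ret;

	/* Assumes the range lies in a reserved MIGRATE_CMA area. */
	ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
				 MIGRATE_CMA);
	if (ret)
		return ret;

	/* ... use the pages ... */

	free_contig_range(start_pfn, nr_pages);
	return 0;
}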