From patchwork Fri Aug 30 13:22:55 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat"
X-Patchwork-Id: 2852051
From: "Srivatsa S. Bhat"
Bhat" Subject: [RFC PATCH v3 28/35] mm: Connect Page Allocator(PA) to Region Allocator(RA); add PA <= RA flow To: akpm@linux-foundation.org, mgorman@suse.de, hannes@cmpxchg.org, tony.luck@intel.com, matthew.garrett@nebula.com, dave@sr71.net, riel@redhat.com, arjan@linux.intel.com, srinivas.pandruvada@linux.intel.com, willy@linux.intel.com, kamezawa.hiroyu@jp.fujitsu.com, lenb@kernel.org, rjw@sisk.pl Cc: gargankita@gmail.com, paulmck@linux.vnet.ibm.com, svaidy@linux.vnet.ibm.com, andi@firstfloor.org, isimatu.yasuaki@jp.fujitsu.com, santosh.shilimkar@ti.com, kosaki.motohiro@gmail.com, srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Date: Fri, 30 Aug 2013 18:52:55 +0530 Message-ID: <20130830132253.4947.86808.stgit@srivatsabhat.in.ibm.com> In-Reply-To: <20130830131221.4947.99764.stgit@srivatsabhat.in.ibm.com> References: <20130830131221.4947.99764.stgit@srivatsabhat.in.ibm.com> User-Agent: StGIT/0.14.3 MIME-Version: 1.0 X-TM-AS-MML: No X-Content-Scanned: Fidelis XPS MAILER x-cbid: 13083013-9332-0000-0000-00000141D8D1 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Spam-Status: No, score=-9.0 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_HI, RP_MATCHES_RCVD, UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Now that we have built up an infrastructure that forms a "Memory Region Allocator", connect it with the page allocator. To entities requesting memory, the page allocator will function as a front-end, whereas the region allocator will act as a back-end to the page allocator. (Analogy: page allocator is like free cash, whereas region allocator is like a bank). Implement the flow of freepages from the region allocator to the page allocator. When __rmqueue_smallest() comes out empty handed, try to get freepages from the region allocator. If that fails, only then fallback to an allocation from a different migratetype. This helps significantly in avoiding mixing of allocations of different migratetypes in a single region. Thus it helps in keeping entire memory regions homogeneous with respect to the type of allocations. Simplification: We assume that the freepages of a memory region can be completely represented by a set of MAX_ORDER-1 pages. That is, we only need to consider the buddy freelists corresponding to MAX_ORDER-1, while interacting with the region allocator. Furthermore, we assume that pageblock_order == MAX_ORDER-1. (These assumptions are used to ease the implementation, so that one can quickly evaluate the benefits of the overall design without getting bogged down by too many corner cases and constraints. Of course future implementations will handle more scenarios and will have reduced dependence on such simplifying assumptions.) Signed-off-by: Srivatsa S. 
 mm/page_alloc.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b8af5a2..3749e2a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1702,10 +1702,18 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 {
 	struct page *page;
 
-retry_reserve:
+retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
 
 	if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
+
+		/*
+		 * Try to get a region from the region allocator before falling
+		 * back to an allocation from a different migratetype.
+		 */
+		if (!del_from_region_allocator(zone, MAX_ORDER-1, migratetype))
+			goto retry;
+
 		page = __rmqueue_fallback(zone, order, migratetype);
 
 		/*
@@ -1715,7 +1723,7 @@ retry_reserve:
 		 */
 		if (!page) {
 			migratetype = MIGRATE_RESERVE;
-			goto retry_reserve;
+			goto retry;
 		}
 	}
 
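
To make the simplification above concrete, here is a back-of-the-envelope
calculation. The page size, MAX_ORDER value and region size below are
assumed, illustrative numbers (4 KiB pages and MAX_ORDER = 11 are common
defaults; the 512 MiB region size is just an example and is not taken
from this mail): with pageblock_order == MAX_ORDER-1, one pageblock is one
order-10 buddy, so a region's freepages reduce to a small set of
same-order freelist entries.

/* Back-of-the-envelope numbers for the simplification (illustrative only). */
#include <stdio.h>

int main(void)
{
	unsigned long page_size   = 4096UL;		/* assumed 4 KiB pages     */
	unsigned long max_order   = 11UL;		/* assumed, common default */
	unsigned long block_pages = 1UL << (max_order - 1);	/* order-10: 1024 pages */
	unsigned long block_bytes = block_pages * page_size;	/* 4 MiB per block      */
	unsigned long region_size = 512UL << 20;	/* example region: 512 MiB */

	/* One pageblock == one MAX_ORDER-1 buddy under the stated assumption. */
	printf("pageblock size: %lu MiB\n", block_bytes >> 20);	/* 4   */
	printf("blocks per region: %lu\n", region_size / block_bytes);	/* 128 */
	return 0;
}
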