From patchwork Fri Aug 30 13:24:59 2013
X-Patchwork-Submitter: "Srivatsa S. Bhat"
X-Patchwork-Id: 2852067
From: "Srivatsa S. Bhat"
Bhat" Subject: [RFC PATCH v3 35/35] mm: Use a cache between page-allocator and region-allocator To: akpm@linux-foundation.org, mgorman@suse.de, hannes@cmpxchg.org, tony.luck@intel.com, matthew.garrett@nebula.com, dave@sr71.net, riel@redhat.com, arjan@linux.intel.com, srinivas.pandruvada@linux.intel.com, willy@linux.intel.com, kamezawa.hiroyu@jp.fujitsu.com, lenb@kernel.org, rjw@sisk.pl Cc: gargankita@gmail.com, paulmck@linux.vnet.ibm.com, svaidy@linux.vnet.ibm.com, andi@firstfloor.org, isimatu.yasuaki@jp.fujitsu.com, santosh.shilimkar@ti.com, kosaki.motohiro@gmail.com, srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Date: Fri, 30 Aug 2013 18:54:59 +0530 Message-ID: <20130830132456.4947.67600.stgit@srivatsabhat.in.ibm.com> In-Reply-To: <20130830131221.4947.99764.stgit@srivatsabhat.in.ibm.com> References: <20130830131221.4947.99764.stgit@srivatsabhat.in.ibm.com> User-Agent: StGIT/0.14.3 MIME-Version: 1.0 X-TM-AS-MML: No X-Content-Scanned: Fidelis XPS MAILER x-cbid: 13083013-4834-0000-0000-00000A9956A0 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org X-Spam-Status: No, score=-9.0 required=5.0 tests=BAYES_00, RCVD_IN_DNSWL_HI, RP_MATCHES_RCVD, UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on mail.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Currently, whenever the page allocator notices that it has all the freepages of a given memory region, it attempts to return it back to the region allocator. This strategy is needlessly aggressive and can cause a lot of back and forth between the page-allocator and the region-allocator. More importantly, it can potentially completely wreck the benefits of having a region allocator in the first place - if the buddy allocator immediately returns freepages of memory regions to the region allocator, it goes back to the generic pool of pages. So, in future, depending on when the next allocation request arrives for this particular migratetype, the region allocator might not have any free regions to hand out, and hence we might end up falling back to freepages of other migratetypes. Instead, if the page allocator retains a few regions as a cache for every migratetype, we will have higher chances of avoiding fallbacks to other migratetypes. So, don't return all free memory regions (in the page allocator) to the region allocator. Keep atleast one region as a cache, for future use. Signed-off-by: Srivatsa S. 
---

 mm/page_alloc.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1312546..55e8e65 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -639,9 +639,11 @@ static void add_to_region_allocator(struct zone *z,
 				    struct free_list *free_list,
 				    int region_id);
 
-static inline int can_return_region(struct mem_region_list *region, int order)
+static inline int can_return_region(struct mem_region_list *region, int order,
+				    struct free_list *free_list)
 {
 	struct zone_mem_region *zone_region;
+	struct page *prev_page, *next_page;
 
 	zone_region = region->zone_region;
 
@@ -659,6 +661,16 @@ static inline int can_return_region(struct mem_region_list *region, int order)
 	if (likely(order != MAX_ORDER-1))
 		return 0;
 
+	/*
+	 * Don't return all the regions; retain at least one region as a
+	 * cache for future use.
+	 */
+	prev_page = container_of(free_list->list.prev, struct page, lru);
+	next_page = container_of(free_list->list.next, struct page, lru);
+
+	if (page_zone_region_id(prev_page) == page_zone_region_id(next_page))
+		return 0; /* There is only one region in this freelist */
+
 	if (region->nr_free * (1 << order) == zone_region->nr_free)
 		return 1;
 
@@ -728,7 +740,7 @@ try_return_region:
 	 * Try to return the freepages of a memory region to the region
 	 * allocator, if possible.
 	 */
-	if (can_return_region(region, order))
+	if (can_return_region(region, order, free_list))
 		add_to_region_allocator(page_zone(page), free_list,
 					region_id);
 }
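
For reference, the single-region check at the heart of this patch can be seen in isolation in the following minimal userspace sketch. This is not the kernel code itself: the struct and type names (sim_page, sim_free_list, list_node) are invented stand-ins for struct page / struct free_list, chosen only to keep the example self-contained. It assumes, as the patch's comment implies, that pages belonging to one region sit contiguously on the free list, so comparing the region ids of the first and last pages is enough to tell whether the list spans more than one region.

#include <stddef.h>
#include <stdio.h>

struct list_node {
	struct list_node *prev, *next;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct sim_page {
	int region_id;		/* which memory region this page belongs to */
	struct list_node lru;	/* linkage on the buddy free list */
};

struct sim_free_list {
	struct list_node list;	/* list head: ->next is the first page, ->prev the last */
};

/*
 * Return 1 only if the free list spans more than one region, i.e. it is
 * safe to hand a region back to the region allocator; otherwise keep the
 * sole remaining region as the per-migratetype cache.
 */
static int can_return_region(struct sim_free_list *fl)
{
	struct sim_page *first = container_of(fl->list.next, struct sim_page, lru);
	struct sim_page *last  = container_of(fl->list.prev, struct sim_page, lru);

	if (first->region_id == last->region_id)
		return 0;	/* only one region on this list: cache it */

	return 1;
}

int main(void)
{
	struct sim_page a = { .region_id = 0 }, b = { .region_id = 1 };
	struct sim_free_list fl;

	/* free list containing pages of a single region */
	fl.list.next = &a.lru; fl.list.prev = &a.lru;
	a.lru.next = &fl.list; a.lru.prev = &fl.list;
	printf("single-region list: can return? %d\n", can_return_region(&fl));

	/* free list containing pages of two different regions */
	fl.list.next = &a.lru; fl.list.prev = &b.lru;
	a.lru.prev = &fl.list; a.lru.next = &b.lru;
	b.lru.prev = &a.lru;   b.lru.next = &fl.list;
	printf("two-region list:    can return? %d\n", can_return_region(&fl));

	return 0;
}

The appeal of this check is that it is O(1): it looks only at the two ends of the free list instead of walking all the free pages, which keeps the cost negligible on the page-freeing path where can_return_region() is invoked.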