From patchwork Thu Nov 8 09:12:15 2018
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 10673813
From: Mel Gorman
To: Linux-MM
Cc: Andrew Morton, Vlastimil Babka, David Rientjes, Andrea Arcangeli,
 Zi Yan, LKML, Mel Gorman
Subject: [PATCH 1/4] mm, page_alloc: Spread allocations across zones before
 introducing fragmentation
Date: Thu, 8 Nov 2018 09:12:15 +0000
Message-Id: <20181108091218.32715-2-mgorman@techsingularity.net>
In-Reply-To: <20181108091218.32715-1-mgorman@techsingularity.net>
References: <20181108091218.32715-1-mgorman@techsingularity.net>

The page allocator zone lists are iterated based on the watermarks of each
zone, which does not take anti-fragmentation into account. On x86, node 0
may have multiple zones while other nodes have one zone. A consequence is
that tasks running on node 0 may fragment ZONE_NORMAL even though
ZONE_DMA32 has plenty of free memory.

This patch special-cases the allocator fast path so that it tries an
allocation from a lower local zone before fragmenting a higher zone. The
stealing of pageblocks, or of orders larger than a pageblock, is still
allowed in the fast path as such steals are uninteresting from a
fragmentation point of view. A minimal sketch of the decision follows.
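The following user-space sketch models that fast-path decision. It is an
illustration only, not the kernel implementation; the two-zone layout and
the populated[] array are assumptions made for the example:

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of one node's zones, lower zones first. */
enum zone_idx { ZONE_DMA32, ZONE_NORMAL };

/*
 * Mirror of the patch's idea: only when the preferred zone is
 * ZONE_NORMAL and a populated ZONE_DMA32 sits below it is it worth
 * trying the lower zone first instead of fragmenting ZONE_NORMAL.
 */
static bool prefer_lower_zone(enum zone_idx preferred, const bool populated[])
{
	if (preferred != ZONE_NORMAL)
		return false;
	return populated[ZONE_DMA32];
}

int main(void)
{
	bool populated[] = { [ZONE_DMA32] = true, [ZONE_NORMAL] = true };

	printf("spread to lower zone: %s\n",
	       prefer_lower_zone(ZONE_NORMAL, populated) ? "yes" : "no");
	return 0;
}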
This was evaluated using a benchmark designed to fragment memory before
attempting THPs. It is implemented in mmtests as the following
configurations:

  configs/config-global-dhp__workload_thpfioscale
  configs/config-global-dhp__workload_thpfioscale-defrag
  configs/config-global-dhp__workload_thpfioscale-madvhugepage

e.g. from mmtests

  ./run-mmtests.sh --run-monitor --config \
    configs/config-global-dhp__workload_thpfioscale test-run-1

The broad details of the workload are as follows:

1. Create an XFS filesystem (not specified in the configuration but done
   as part of the testing for this patch).
2. Start 4 fio threads that write a number of 64K files inefficiently.
   Inefficiently means that files are created on first access rather than
   in advance (fio parameter create_on_open=1) and fallocate is not used
   (fallocate=none). With multiple IO issuers this creates a mix of slab
   and page cache allocations over time. The total size of the files is
   150% of physical memory so that the slab and page cache pages get
   mixed.
3. Warm up a number of fio read-only threads accessing the same files
   created in step 2. This part runs for the same length of time it took
   to create the files. It faults back in old data and further
   interleaves slab and page cache allocations. As memory is now low due
   to step 2, fragmentation occurs as pageblocks get stolen.
4. While step 3 is still running, start a process that tries to allocate
   75% of memory as huge pages with a number of threads. The number of
   threads is based on (NR_CPUS_SOCKET - NR_FIO_THREADS)/4 to avoid THP
   threads contending with fio, any other threads, or forcing cross-NUMA
   scheduling. Note that the test has not been used on a machine with
   fewer than 8 cores. The benchmark records whether huge pages were
   allocated and what the fault latency was in microseconds.
5. Measure the number of events potentially causing external
   fragmentation, the fault latency and the huge page allocation success
   rate.
6. Cleanup.

Note that due to the use of IO and page cache, this benchmark is not
suitable for running on large machines where the time to fragment memory
may be excessive. Also note that while this is one mix that generates
fragmentation, it is not the only such mix. Workloads that are more
slab-intensive, or that use SLUB with high-order pages, may yield
different results.

When the page allocator fragments memory, it records the event using the
mm_page_alloc_extfrag tracepoint. If the fallback_order is smaller than
the pageblock order (order-9 on 64-bit x86) then it is considered an
event that may cause external fragmentation issues in the future. Hence,
the primary metric here is the number of external fragmentation events
that occur with order < 9. The secondary metrics are allocation latency
and the huge page allocation success rate, but note that differences in
latencies and success rates can themselves affect the number of external
fragmentation events, which is why they are secondary.
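For reference, the classification behind the primary metric can be
written as a couple of lines of C. This is a sketch only, with the
64-bit x86 pageblock order hard-coded rather than taken from the kernel:

#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_ORDER 9	/* order-9 pageblocks on 64-bit x86 */

/*
 * An mm_page_alloc_extfrag event is counted as potentially causing
 * external fragmentation when the stolen pages are of an order below
 * the pageblock order, i.e. a pageblock became mixed.
 */
static bool extfrag_event_is_serious(int fallback_order)
{
	return fallback_order < PAGEBLOCK_ORDER;
}

int main(void)
{
	printf("order-4 steal serious? %d\n", extfrag_event_is_serious(4));
	printf("order-9 steal serious? %d\n", extfrag_event_is_serious(9));
	return 0;
}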
1-socket Skylake machine
config-global-dhp__workload_thpfioscale XFS (no special madvise)
4 fio threads, 1 THP allocating thread
--------------------------------------

4.20-rc1 extfrag events < order 9:  1023463
4.20-rc1+patch:                      358574 (65% reduction)

thpfioscale Fault Latencies
                                  4.20.0-rc1            4.20.0-rc1
                                     vanilla          lowzone-v2r4
Min       fault-base-1     588.00 (   0.00%)     557.00 (   5.27%)
Min       fault-huge-1       0.00 (   0.00%)       0.00 (   0.00%)
Amean     fault-base-1     663.58 (   0.00%)     663.65 (  -0.01%)
Amean     fault-huge-1       0.00 (   0.00%)       0.00 (   0.00%)

                             4.20.0-rc1            4.20.0-rc1
                                vanilla          lowzone-v2r4
Percentage huge-1       0.00 (   0.00%)       0.00 (   0.00%)

Fault latencies are reduced while allocation success rates remain at zero
as this configuration does not make any heavy effort to allocate THP and
fio is heavily active at the time, filling memory. However, a 65%
reduction in serious fragmentation events reduces the chances of external
fragmentation being a problem in the future.

1-socket Skylake machine
global-dhp__workload_thpfioscale-madvhugepage-xfs (MADV_HUGEPAGE)
-----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:  342549
4.20-rc1+patch:                     337890 (1% reduction)

thpfioscale Fault Latencies
                                  4.20.0-rc1            4.20.0-rc1
                                     vanilla          lowzone-v2r4
Amean     fault-base-1    1517.06 (   0.00%)    1531.37 (  -0.94%)
Amean     fault-huge-1    1129.50 (   0.00%)    1160.95 (  -2.78%)

thpfioscale Percentage Faults Huge
                             4.20.0-rc1            4.20.0-rc1
                                vanilla          lowzone-v2r4
Percentage huge-1      78.01 (   0.00%)      78.97 (   1.23%)

Nothing dramatic. Fragmentation events were only reduced slightly, which
is very different from what was reported in v1. A big difference from v1
is the relative size of the Normal zone to the DMA32 zone. This machine
has double the memory, so the impact of using a small zone to avoid
fragmentation events is much lower.

2-socket Haswell machine
config-global-dhp__workload_thpfioscale XFS (no special madvise)
4 fio threads, 5 THP allocating threads
----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:  209820
4.20-rc1+patch:                     185923 (11% reduction)

thpfioscale Fault Latencies
                                  4.20.0-rc1            4.20.0-rc1
                                     vanilla          lowzone-v2r4
Amean     fault-base-5    1324.93 (   0.00%)    1334.99 (  -0.76%)
Amean     fault-huge-5    4681.71 (   0.00%)    2428.43 (  48.13%)

                             4.20.0-rc1            4.20.0-rc1
                                vanilla          lowzone-v2r4
Percentage huge-5       1.05 (   0.00%)       1.13 (   7.94%)

The reduction in external fragmentation events is expected. A careful
reader may spot that the reduction is lower than it was in v1. This is
due to the removal of __GFP_THISNODE in commit ac5b2c18911f ("mm: thp:
relax __GFP_THISNODE for MADV_HUGEPAGE mappings") as THP allocations can
now spill over to remote nodes instead of fragmenting local memory. This
reduces the impact of using a lower zone to avoid fragmentation. It is
also worth noting relative to v1 that the allocation success rate is
slightly higher.
2-socket Haswell machine
global-dhp__workload_thpfioscale-madvhugepage-xfs (MADV_HUGEPAGE)
-----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:  167464
4.20-rc1+patch:                     130081 (22% reduction)

thpfioscale Fault Latencies
                                  4.20.0-rc1            4.20.0-rc1
                                     vanilla          lowzone-v2r4
Amean     fault-base-5    7721.82 (   0.00%)    6652.67 (  13.85%)
Amean     fault-huge-5    3896.10 (   0.00%)    2486.89 *  36.17%*

thpfioscale Percentage Faults Huge
                             4.20.0-rc1            4.20.0-rc1
                                vanilla          lowzone-v2r4
Percentage huge-5      95.02 (   0.00%)      94.49 (  -0.56%)

In this case, there was both a reduction in the events causing external
fragmentation and in the huge page allocation latency, with little change
in the allocation success rates, which were already high. A careful
reader will note that v1 had very different outcomes both in terms of the
number of fragmentation events and the allocation success rates. In this
case, it is due to the baseline including the THP __GFP_THISNODE removal
patch.

Overall, the patch significantly reduces the number of events causing
external fragmentation, so the success of THP over long periods of time
would be improved for this adverse workload. While there are large
differences compared to how v1 behaved, this is almost entirely accounted
for by ac5b2c18911f ("mm: thp: relax __GFP_THISNODE for MADV_HUGEPAGE
mappings").

Signed-off-by: Mel Gorman
---
 mm/internal.h   |  13 +++++---
 mm/page_alloc.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 98 insertions(+), 15 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 291eb2b6d1d8..544355156c92 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -480,10 +480,15 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
 #define ALLOC_OOM		ALLOC_NO_WATERMARKS
 #endif

-#define ALLOC_HARDER		0x10 /* try to alloc harder */
-#define ALLOC_HIGH		0x20 /* __GFP_HIGH set */
-#define ALLOC_CPUSET		0x40 /* check for correct cpuset */
-#define ALLOC_CMA		0x80 /* allow allocations from CMA areas */
+#define ALLOC_HARDER		0x10 /* try to alloc harder */
+#define ALLOC_HIGH		0x20 /* __GFP_HIGH set */
+#define ALLOC_CPUSET		0x40 /* check for correct cpuset */
+#define ALLOC_CMA		0x80 /* allow allocations from CMA areas */
+#ifdef CONFIG_ZONE_DMA32
+#define ALLOC_NOFRAGMENT	0x100 /* avoid mixing pageblock types */
+#else
+#define ALLOC_NOFRAGMENT	0x0
+#endif

 enum ttu_flags;
 struct tlbflush_unmap_batch;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a919ba5cb3c8..5db746c642df 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2375,20 +2375,30 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
  * condition simpler.
  */
 static __always_inline bool
-__rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
+__rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
+						unsigned int alloc_flags)
 {
 	struct free_area *area;
 	int current_order;
+	int min_order = order;
 	struct page *page;
 	int fallback_mt;
 	bool can_steal;

+	/*
+	 * Do not steal pages from freelists belonging to other pageblocks
+	 * i.e. orders < pageblock_order. In the event there is no local
+	 * zone free, the allocation will retry later.
+	 */
+	if (alloc_flags & ALLOC_NOFRAGMENT)
+		min_order = pageblock_order;
+
 	/*
	 * Find the largest available free page in the other list. This roughly
	 * approximates finding the pageblock with the most free pages, which
	 * would be too costly to do exactly.
	 */
-	for (current_order = MAX_ORDER - 1; current_order >= order;
+	for (current_order = MAX_ORDER - 1; current_order >= min_order;
 				--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
@@ -2447,7 +2457,8 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
  * Call me with the zone->lock already held.
  */
 static __always_inline struct page *
-__rmqueue(struct zone *zone, unsigned int order, int migratetype)
+__rmqueue(struct zone *zone, unsigned int order, int migratetype,
+						unsigned int alloc_flags)
 {
 	struct page *page;

@@ -2457,7 +2468,8 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype)
 		if (migratetype == MIGRATE_MOVABLE)
 			page = __rmqueue_cma_fallback(zone, order);

-		if (!page && __rmqueue_fallback(zone, order, migratetype))
+		if (!page && __rmqueue_fallback(zone, order, migratetype,
+								alloc_flags))
 			goto retry;
 	}

@@ -2472,13 +2484,14 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype)
  */
 static int rmqueue_bulk(struct zone *zone, unsigned int order,
 			unsigned long count, struct list_head *list,
-			int migratetype)
+			int migratetype, unsigned int alloc_flags)
 {
 	int i, alloced = 0;

 	spin_lock(&zone->lock);
 	for (i = 0; i < count; ++i) {
-		struct page *page = __rmqueue(zone, order, migratetype);
+		struct page *page = __rmqueue(zone, order, migratetype,
+								alloc_flags);
 		if (unlikely(page == NULL))
 			break;

@@ -2934,6 +2947,7 @@ static inline void zone_statistics(struct zone *preferred_zone, struct zone *z)

 /* Remove page from the per-cpu list, caller must protect the list */
 static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
+			unsigned int alloc_flags,
 			struct per_cpu_pages *pcp,
 			struct list_head *list)
 {
@@ -2943,7 +2957,7 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 		if (list_empty(list)) {
 			pcp->count += rmqueue_bulk(zone, 0,
 					pcp->batch, list,
-					migratetype);
+					migratetype, alloc_flags);
 			if (unlikely(list_empty(list)))
 				return NULL;
 		}
@@ -2959,7 +2973,8 @@ static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 /* Lock and remove page from the per-cpu list */
 static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 			struct zone *zone, unsigned int order,
-			gfp_t gfp_flags, int migratetype)
+			gfp_t gfp_flags, int migratetype,
+			unsigned int alloc_flags)
 {
 	struct per_cpu_pages *pcp;
 	struct list_head *list;
@@ -2969,7 +2984,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	local_irq_save(flags);
 	pcp = &this_cpu_ptr(zone->pageset)->pcp;
 	list = &pcp->lists[migratetype];
-	page = __rmqueue_pcplist(zone, migratetype, pcp, list);
+	page = __rmqueue_pcplist(zone, migratetype, alloc_flags, pcp, list);
 	if (page) {
 		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
 		zone_statistics(preferred_zone, zone);
@@ -2992,7 +3007,7 @@ struct page *rmqueue(struct zone *preferred_zone,

 	if (likely(order == 0)) {
 		page = rmqueue_pcplist(preferred_zone, zone, order,
-				gfp_flags, migratetype);
+				gfp_flags, migratetype, alloc_flags);
 		goto out;
 	}

@@ -3011,7 +3026,7 @@ struct page *rmqueue(struct zone *preferred_zone,
 			trace_mm_page_alloc_zone_locked(page, order, migratetype);
 		}
 		if (!page)
-			page = __rmqueue(zone, order, migratetype);
+			page = __rmqueue(zone, order, migratetype, alloc_flags);
 	} while (page && check_new_pages(page, order));
 	spin_unlock(&zone->lock);
 	if (!page)
@@ -3253,6 +3268,36 @@ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
 }
 #endif	/* CONFIG_NUMA */
+#ifdef CONFIG_ZONE_DMA32
+/*
+ * The restriction on ZONE_DMA32 as being a suitable zone to use to avoid
+ * fragmentation is subtle. If the preferred zone was HIGHMEM then
+ * premature use of a lower zone may cause lowmem pressure problems that
+ * are worse than fragmentation. If the next zone is ZONE_DMA then it is
+ * probably too small. It only makes sense to spread allocations to avoid
+ * fragmentation between the Normal and DMA32 zones.
+ */
+static inline unsigned int alloc_flags_nofragment(struct zone *zone)
+{
+	if (zone_idx(zone) != ZONE_NORMAL)
+		return 0;
+
+	/*
+	 * If ZONE_DMA32 exists, assume it is the one after ZONE_NORMAL and
+	 * the pointer is within zone->zone_pgdat->node_zones[].
+	 */
+	if (!populated_zone(--zone))
+		return 0;
+
+	return ALLOC_NOFRAGMENT;
+}
+#else
+static inline unsigned int alloc_flags_nofragment(struct zone *zone)
+{
+	return 0;
+}
+#endif
+
 /*
  * get_page_from_freelist goes through the zonelist trying to allocate
  * a page.
@@ -3264,11 +3309,14 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 	struct zoneref *z = ac->preferred_zoneref;
 	struct zone *zone;
 	struct pglist_data *last_pgdat_dirty_limit = NULL;
+	bool no_fallback;

+retry:
 	/*
 	 * Scan zonelist, looking for a zone with enough free.
 	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
 	 */
+	no_fallback = alloc_flags & ALLOC_NOFRAGMENT;
 	for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
 								ac->nodemask) {
 		struct page *page;
@@ -3307,6 +3355,21 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			}
 		}

+		if (no_fallback) {
+			int local_nid;
+
+			/*
+			 * If moving to a remote node, retry but allow
+			 * fragmenting fallbacks. Locality is more important
+			 * than fragmentation avoidance.
+			 */
+			local_nid = zone_to_nid(ac->preferred_zoneref->zone);
+			if (zone_to_nid(zone) != local_nid) {
+				alloc_flags &= ~ALLOC_NOFRAGMENT;
+				goto retry;
+			}
+		}
+
 		mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];
 		if (!zone_watermark_fast(zone, order, mark,
 				       ac_classzone_idx(ac), alloc_flags)) {
@@ -3374,6 +3437,15 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		}
 	}

+	/*
+	 * It's possible on a UMA machine to get through all zones that are
+	 * fragmented. If avoiding fragmentation, reset and try again.
+	 */
+	if (no_fallback) {
+		alloc_flags &= ~ALLOC_NOFRAGMENT;
+		goto retry;
+	}
+
 	return NULL;
 }

@@ -4371,6 +4443,12 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,

 	finalise_ac(gfp_mask, &ac);

+	/*
+	 * Forbid the first pass from falling back to types that fragment
+	 * memory until all local zones are considered.
+	 */
+	alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone);
+
 	/* First allocation attempt */
 	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
 	if (likely(page))

From patchwork Thu Nov 8 09:12:17 2018
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 10673815
From: Mel Gorman
To: Linux-MM
Cc: Andrew Morton, Vlastimil Babka, David Rientjes, Andrea Arcangeli,
 Zi Yan, LKML, Mel Gorman
Subject: [PATCH 3/4] mm: Reclaim small amounts of memory when an external
 fragmentation event occurs
Date: Thu, 8 Nov 2018 09:12:17 +0000
Message-Id: <20181108091218.32715-4-mgorman@techsingularity.net>
In-Reply-To: <20181108091218.32715-1-mgorman@techsingularity.net>
References: <20181108091218.32715-1-mgorman@techsingularity.net>

An external fragmentation event was previously described as

  When the page allocator fragments memory, it records the event using
  the mm_page_alloc_extfrag event. If the fallback_order is smaller than
  a pageblock order (order-9 on 64-bit x86) then it's considered an
  event that will cause external fragmentation issues in the future.

The kernel reduces the probability of such events by increasing the
watermark sizes by calling set_recommended_min_free_kbytes early in the
lifetime of the system. This works reasonably well in general, but if
there are enough sparsely populated pageblocks then the problem can still
occur as enough memory is free overall and kswapd stays asleep.

This patch introduces a watermark_boost_factor sysctl that allows a zone
watermark to be temporarily boosted when an event occurs that may cause
external fragmentation. The boosting will stall allocations that would
decrease free memory below the boosted low watermark, and kswapd is woken
unconditionally to reclaim an amount of memory relative to the size of
the high watermark and the watermark_boost_factor until the boost is
cleared. When kswapd finishes, it wakes kcompactd at the pageblock order
to clean some of the pageblocks that may have been affected by the
fragmentation event. kswapd avoids any writeback or swap from reclaim
context during this operation to avoid excessive system disruption in the
name of fragmentation avoidance. Care is taken so that kswapd will do
normal reclaim work if the system is really low on memory. A sketch of
the boost arithmetic follows.
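The arithmetic can be illustrated with a small user-space sketch that
mirrors the boost calculation in the patch (shown in the diff below).
The high watermark value in the example is invented, and the kernel
implementation uses mult_frac() and per-zone state rather than globals:

#include <stdio.h>

#define PAGEBLOCK_NR_PAGES	512UL	/* 2MB pageblock / 4K pages */

static unsigned long watermark_boost;

/*
 * Grow the boost one pageblock at a time, capped at
 * watermark_boost_factor/10000 of the high watermark but never below
 * one pageblock's worth of pages.
 */
static void boost_watermark(unsigned long high_wmark, int boost_factor)
{
	unsigned long max_boost;

	if (!boost_factor)
		return;

	max_boost = high_wmark * boost_factor / 10000;
	if (max_boost < PAGEBLOCK_NR_PAGES)
		max_boost = PAGEBLOCK_NR_PAGES;

	watermark_boost += PAGEBLOCK_NR_PAGES;
	if (watermark_boost > max_boost)
		watermark_boost = max_boost;
}

int main(void)
{
	/*
	 * e.g. a high watermark of 128000 pages with the default factor
	 * of 15000: the boost saturates at 150% of the watermark,
	 * i.e. 192000 pages, no matter how many events occur.
	 */
	for (int i = 0; i < 500; i++)
		boost_watermark(128000, 15000);
	printf("boost = %lu pages\n", watermark_boost);
	return 0;
}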
This was evaluated using the same workloads as "mm, page_alloc: Spread
allocations across zones before introducing fragmentation".

1-socket Skylake machine
config-global-dhp__workload_thpfioscale XFS (no special madvise)
4 fio threads, 1 THP allocating thread
--------------------------------------

4.20-rc1 extfrag events < order 9:  1023463
4.20-rc1+patch:                      358574 (65% reduction)
4.20-rc1+patch1-3:                    19274 (98% reduction)

                                  4.20.0-rc1            4.20.0-rc1
                                lowzone-v2r4            boost-v2r4
Amean     fault-base-1     663.65 (   0.00%)     659.85 *   0.57%*
Amean     fault-huge-1       0.00 (   0.00%)     172.19 * -99.00%*

                             4.20.0-rc1            4.20.0-rc1
                           lowzone-v2r4            boost-v2r4
Percentage huge-1       0.00 (   0.00%)       1.68 ( 100.00%)

Note that events that can cause external fragmentation are massively
reduced by this patch, whether compared with the previous patch or the
vanilla kernel. The fault latency for huge pages appears to have
increased, but that is only because THP allocations were actually
successful with the patch applied.

1-socket Skylake machine
global-dhp__workload_thpfioscale-madvhugepage-xfs (MADV_HUGEPAGE)
-----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:  342549
4.20-rc1+patch:                     337890 ( 1% reduction)
4.20-rc1+patch1-3:                   12801 (96% reduction)

thpfioscale Fault Latencies
                                  4.20.0-rc1            4.20.0-rc1
                                lowzone-v2r4            boost-v2r4
Amean     fault-base-1    1531.37 (   0.00%)    1578.91 (  -3.10%)
Amean     fault-huge-1    1160.95 (   0.00%)    1090.23 *   6.09%*

                             4.20.0-rc1            4.20.0-rc1
                           lowzone-v2r4            boost-v2r4
Percentage huge-1      78.97 (   0.00%)      82.59 (   4.58%)

As before, a massive reduction in external fragmentation events, some
jitter in the latencies and an increase in THP allocation success rates.

2-socket Haswell machine
config-global-dhp__workload_thpfioscale XFS (no special madvise)
4 fio threads, 5 THP allocating threads
----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:  209820
4.20-rc1+patch:                     185923 (11% reduction)
4.20-rc1+patch1-3:                   11240 (95% reduction)

                                  4.20.0-rc1            4.20.0-rc1
                                lowzone-v2r4            boost-v2r4
Amean     fault-base-5    1334.99 (   0.00%)    1395.28 (  -4.52%)
Amean     fault-huge-5    2428.43 (   0.00%)     539.69 (  77.78%)

                             4.20.0-rc1            4.20.0-rc1
                           lowzone-v2r4            boost-v2r4
Percentage huge-5       1.13 (   0.00%)       0.53 ( -52.94%)

This is an illustration of why latencies are not the primary metric.
There is a 95% reduction in events causing fragmentation, and the huge
page latencies look fantastic until you account for the fact that this
might be because the success rate was lower. Given how low it was
initially, this is partially down to luck.
2-socket Haswell machine
global-dhp__workload_thpfioscale-madvhugepage-xfs (MADV_HUGEPAGE)
-----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:  167464
4.20-rc1+patch:                     130081 (22% reduction)
4.20-rc1+patch1-3:                   12057 (92% reduction)

thpfioscale Fault Latencies
                                  4.20.0-rc1            4.20.0-rc1
                                lowzone-v2r4            boost-v2r4
Amean     fault-base-5    6652.67 (   0.00%)    8691.83 * -30.65%*
Amean     fault-huge-5    2486.89 (   0.00%)    2899.83 * -16.60%*

                             4.20.0-rc1            4.20.0-rc1
                           lowzone-v2r4            boost-v2r4
Percentage huge-5      94.49 (   0.00%)      95.55 (   1.13%)

There is a large reduction in fragmentation events with a very slightly
higher THP allocation success rate. The latencies look bad but a closer
look at the data suggests the problem is at the tails. Given the high THP
allocation success rate, the system is under quite some pressure.

Signed-off-by: Mel Gorman
---
 Documentation/sysctl/vm.txt |  19 +++++++
 include/linux/mm.h          |   1 +
 include/linux/mmzone.h      |  11 ++--
 kernel/sysctl.c             |   8 +++
 mm/page_alloc.c             |  53 +++++++++++++++++--
 mm/vmscan.c                 | 123 ++++++++++++++++++++++++++++++++++++++++----
 6 files changed, 199 insertions(+), 16 deletions(-)

diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt
index 7d73882e2c27..4dff1b75229b 100644
--- a/Documentation/sysctl/vm.txt
+++ b/Documentation/sysctl/vm.txt
@@ -63,6 +63,7 @@ files can be found in mm/swap.c.
 - swappiness
 - user_reserve_kbytes
 - vfs_cache_pressure
+- watermark_boost_factor
 - watermark_scale_factor
 - zone_reclaim_mode

@@ -856,6 +857,24 @@ ten times more freeable objects than there are.

 =============================================================

+watermark_boost_factor:
+
+This factor controls the level of reclaim when memory is being fragmented.
+It defines the percentage of the high watermark of a zone that will be
+reclaimed if pages of different mobility are being mixed within pageblocks.
+The intent is that compaction has less work to do in the future and to
+increase the success rate of future high-order allocations such as SLUB
+allocations, THP and hugetlbfs pages.
+
+To make it sensible with respect to the watermark_scale_factor parameter,
+the unit is in fractions of 10,000. The default value of 15000 means
+that 150% of the high watermark will be reclaimed in the event of a
+pageblock being mixed due to fragmentation. If this value is smaller
+than a pageblock's worth of pages then a pageblock's worth of pages will
+be reclaimed (e.g. 2MB on 64-bit x86). A boost factor of 0 will disable
+the feature.
+
+=============================================================
+
 watermark_scale_factor:

 This factor controls the aggressiveness of kswapd.
 It defines the
diff --git a/include/linux/mm.h b/include/linux/mm.h
index fcf9cc9d535f..81926daf6dfb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2194,6 +2194,7 @@ extern void zone_pcp_reset(struct zone *zone);

 /* page_alloc.c */
 extern int min_free_kbytes;
+extern int watermark_boost_factor;
 extern int watermark_scale_factor;

 /* nommu.c */
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index e43e8e79db99..d352c1dab486 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -269,10 +269,10 @@ enum zone_watermarks {
 	NR_WMARK
 };

-#define min_wmark_pages(z) (z->_watermark[WMARK_MIN])
-#define low_wmark_pages(z) (z->_watermark[WMARK_LOW])
-#define high_wmark_pages(z) (z->_watermark[WMARK_HIGH])
-#define wmark_pages(z, i) (z->_watermark[i])
+#define min_wmark_pages(z) (z->_watermark[WMARK_MIN] + z->watermark_boost)
+#define low_wmark_pages(z) (z->_watermark[WMARK_LOW] + z->watermark_boost)
+#define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost)
+#define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost)

 struct per_cpu_pages {
 	int count;		/* number of pages in the list */
@@ -364,6 +364,7 @@ struct zone {

 	/* zone watermarks, access with *_wmark_pages(zone) macros */
 	unsigned long _watermark[NR_WMARK];
+	unsigned long watermark_boost;

 	unsigned long nr_reserved_highatomic;

@@ -885,6 +886,8 @@ static inline int is_highmem(struct zone *zone)
 struct ctl_table;
 int min_free_kbytes_sysctl_handler(struct ctl_table *, int,
 					void __user *, size_t *, loff_t *);
+int watermark_boost_factor_sysctl_handler(struct ctl_table *, int,
+					void __user *, size_t *, loff_t *);
 int watermark_scale_factor_sysctl_handler(struct ctl_table *, int,
 					void __user *, size_t *, loff_t *);
 extern int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES];
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 5fc724e4e454..1825f712e73b 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1462,6 +1462,14 @@ static struct ctl_table vm_table[] = {
 		.proc_handler	= min_free_kbytes_sysctl_handler,
 		.extra1		= &zero,
 	},
+	{
+		.procname	= "watermark_boost_factor",
+		.data		= &watermark_boost_factor,
+		.maxlen		= sizeof(watermark_boost_factor),
+		.mode		= 0644,
+		.proc_handler	= watermark_boost_factor_sysctl_handler,
+		.extra1		= &zero,
+	},
 	{
 		.procname	= "watermark_scale_factor",
 		.data		= &watermark_scale_factor,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ad996a769bd5..4abac725a149 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -263,6 +263,7 @@ compound_page_dtor * const compound_page_dtors[] = {

 int min_free_kbytes = 1024;
 int user_min_free_kbytes = -1;
+int watermark_boost_factor __read_mostly = 15000;
 int watermark_scale_factor = 10;

 static unsigned long nr_kernel_pages __meminitdata;
@@ -2129,6 +2130,21 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
 	return false;
 }

+static inline void boost_watermark(struct zone *zone)
+{
+	unsigned long max_boost;
+
+	if (!watermark_boost_factor)
+		return;
+
+	max_boost = mult_frac(wmark_pages(zone, WMARK_HIGH),
+			watermark_boost_factor, 10000);
+	max_boost = max(pageblock_nr_pages, max_boost);
+
+	zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
+		max_boost);
+}
+
 /*
  * This function implements actual steal behaviour. If order is large enough,
  * we can steal whole pageblock.
  * If not, we first move freepages in this
@@ -2160,6 +2176,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		goto single_page;
 	}

+	/*
+	 * Boost watermarks to increase reclaim pressure to reduce the
+	 * likelihood of future fallbacks. Wake kswapd now as the node
+	 * may be balanced overall and kswapd will not wake naturally.
+	 */
+	boost_watermark(zone);
+	wakeup_kswapd(zone, 0, 0, zone_idx(zone));
+
 	/* We are not allowed to try stealing from the whole block */
 	if (!whole_block)
 		goto single_page;
@@ -3277,11 +3301,19 @@ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
  * probably too small. It only makes sense to spread allocations to avoid
  * fragmentation between the Normal and DMA32 zones.
  */
-static inline unsigned int alloc_flags_nofragment(struct zone *zone)
+static inline unsigned int
+alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
 {
 	if (zone_idx(zone) != ZONE_NORMAL)
 		return 0;

+	/*
+	 * A fragmenting fallback will try waking kswapd. ALLOC_NOFRAGMENT
+	 * may break that so such callers can introduce fragmentation.
+	 */
+	if (!(gfp_mask & __GFP_KSWAPD_RECLAIM))
+		return 0;
+
 	/*
 	 * If ZONE_DMA32 exists, assume it is the one after ZONE_NORMAL and
 	 * the pointer is within zone->zone_pgdat->node_zones[].
@@ -3292,7 +3324,8 @@ static inline unsigned int alloc_flags_nofragment(struct zone *zone)
 	return ALLOC_NOFRAGMENT;
 }
 #else
-static inline unsigned int alloc_flags_nofragment(struct zone *zone)
+static inline unsigned int
+alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -4447,7 +4480,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 	 * Forbid the first pass from falling back to types that fragment
 	 * memory until all local zones are considered.
 	 */
-	alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone);
+	alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone,
+								gfp_mask);

 	/* First allocation attempt */
 	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
@@ -7438,6 +7472,7 @@ static void __setup_per_zone_wmarks(void)

 		zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
 		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
+		zone->watermark_boost = 0;

 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
@@ -7538,6 +7573,18 @@ int min_free_kbytes_sysctl_handler(struct ctl_table *table, int write,
 	return 0;
 }

+int watermark_boost_factor_sysctl_handler(struct ctl_table *table, int write,
+	void __user *buffer, size_t *length, loff_t *ppos)
+{
+	int rc;
+
+	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
 int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
 	void __user *buffer, size_t *length, loff_t *ppos)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 62ac0c488624..5ba76ec4f01e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3378,6 +3378,30 @@ static void age_active_anon(struct pglist_data *pgdat,
 	} while (memcg);
 }

+static bool pgdat_watermark_boosted(pg_data_t *pgdat, int classzone_idx)
+{
+	int i;
+	struct zone *zone;
+
+	/*
+	 * Check for watermark boosts top-down as the higher zones
+	 * are more likely to be boosted. Both watermarks and boosts
+	 * should not be checked at the same time as reclaim would
+	 * start prematurely when there is no boosting and a lower
+	 * zone is balanced.
+	 */
+	for (i = classzone_idx; i >= 0; i--) {
+		zone = pgdat->node_zones + i;
+		if (!managed_zone(zone))
+			continue;
+
+		if (zone->watermark_boost)
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * Returns true if there is an eligible zone balanced for the request order
  * and classzone_idx
@@ -3388,9 +3412,12 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int classzone_idx)
 	unsigned long mark = -1;
 	struct zone *zone;

+	/*
+	 * Check watermarks bottom-up as lower zones are more likely to
+	 * meet watermarks.
+	 */
 	for (i = 0; i <= classzone_idx; i++) {
 		zone = pgdat->node_zones + i;
-
 		if (!managed_zone(zone))
 			continue;

@@ -3516,14 +3543,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 	unsigned long nr_soft_reclaimed;
 	unsigned long nr_soft_scanned;
 	unsigned long pflags;
+	unsigned long nr_boost_reclaim;
+	unsigned long zone_boosts[MAX_NR_ZONES] = { 0, };
+	bool boosted;
 	struct zone *zone;
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
 		.order = order,
-		.priority = DEF_PRIORITY,
-		.may_writepage = !laptop_mode,
 		.may_unmap = 1,
-		.may_swap = 1,
 	};

 	psi_memstall_enter(&pflags);
@@ -3531,9 +3558,28 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)

 	count_vm_event(PAGEOUTRUN);

+	/*
+	 * Account for the reclaim boost. Note that the zone boost is left in
+	 * place so that parallel allocations that are near the watermark will
+	 * stall or direct reclaim until kswapd is finished.
+	 */
+	nr_boost_reclaim = 0;
+	for (i = 0; i <= classzone_idx; i++) {
+		zone = pgdat->node_zones + i;
+		if (!managed_zone(zone))
+			continue;
+
+		nr_boost_reclaim += zone->watermark_boost;
+		zone_boosts[i] = zone->watermark_boost;
+	}
+	boosted = nr_boost_reclaim;
+
+restart:
+	sc.priority = DEF_PRIORITY;
 	do {
 		unsigned long nr_reclaimed = sc.nr_reclaimed;
 		bool raise_priority = true;
+		bool balanced;
 		bool ret;

 		sc.reclaim_idx = classzone_idx;
@@ -3560,13 +3606,39 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 		}

 		/*
-		 * Only reclaim if there are no eligible zones. Note that
-		 * sc.reclaim_idx is not used as buffer_heads_over_limit may
-		 * have adjusted it.
+		 * If the pgdat is imbalanced then ignore boosting and preserve
+		 * the watermarks for a later time and restart. Note that the
+		 * zone watermarks will be still reset at the end of balancing
+		 * on the grounds that the normal reclaim should be enough to
+		 * re-evaluate if boosting is required when kswapd next wakes.
+		 */
+		balanced = pgdat_balanced(pgdat, sc.order, classzone_idx);
+		if (!balanced && nr_boost_reclaim) {
+			nr_boost_reclaim = 0;
+			goto restart;
+		}
+
+		/*
+		 * If boosting is not active then only reclaim if there are no
+		 * eligible zones. Note that sc.reclaim_idx is not used as
+		 * buffer_heads_over_limit may have adjusted it.
 		 */
-		if (pgdat_balanced(pgdat, sc.order, classzone_idx))
+		if (!nr_boost_reclaim && balanced)
 			goto out;

+		/* Limit the priority of boosting to avoid reclaim writeback */
+		if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2)
+			raise_priority = false;
+
+		/*
+		 * Do not writeback or swap pages for boosted reclaim. The
+		 * intent is to relieve pressure not issue sub-optimal IO
+		 * from reclaim context. If no pages are reclaimed, the
+		 * reclaim will be aborted.
+		 */
+		sc.may_writepage = !laptop_mode && !nr_boost_reclaim;
+		sc.may_swap = !nr_boost_reclaim;
+
 		/*
 		 * Do some background aging of the anon list, to give
 		 * pages a chance to be referenced before reclaiming. All
@@ -3618,6 +3690,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 		 * progress in reclaiming pages
 		 */
 		nr_reclaimed = sc.nr_reclaimed - nr_reclaimed;
+		nr_boost_reclaim -= min(nr_boost_reclaim, nr_reclaimed);
+
+		/*
+		 * If reclaim made no progress for a boost, stop reclaim as
+		 * IO cannot be queued and it could be an infinite loop in
+		 * extreme circumstances.
+		 */
+		if (nr_boost_reclaim && !nr_reclaimed)
+			break;
+
 		if (raise_priority || !nr_reclaimed)
 			sc.priority--;
 	} while (sc.priority >= 1);
@@ -3626,6 +3708,28 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
 		pgdat->kswapd_failures++;

 out:
+	/* If reclaim was boosted, account for the reclaim done in this pass */
+	if (boosted) {
+		unsigned long flags;
+
+		for (i = 0; i <= classzone_idx; i++) {
+			if (!zone_boosts[i])
+				continue;
+
+			/* Increments are under the zone lock */
+			zone = pgdat->node_zones + i;
+			spin_lock_irqsave(&zone->lock, flags);
+			zone->watermark_boost -= min(zone->watermark_boost, zone_boosts[i]);
+			spin_unlock_irqrestore(&zone->lock, flags);
+		}
+
+		/*
+		 * As there is now likely space, wakeup kcompactd to
+		 * defragment pageblocks.
+		 */
+		wakeup_kcompactd(pgdat, pageblock_order, classzone_idx);
+	}
+
 	snapshot_refaults(NULL, pgdat);
 	__fs_reclaim_release();
 	psi_memstall_leave(&pflags);
@@ -3854,7 +3958,8 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,

 	/* Hopeless node, leave it to direct reclaim if possible */
 	if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES ||
-	    pgdat_balanced(pgdat, order, classzone_idx)) {
+	    (pgdat_balanced(pgdat, order, classzone_idx) &&
+	     !pgdat_watermark_boosted(pgdat, classzone_idx))) {
 		/*
 		 * There may be plenty of free memory available, but it's too
 		 * fragmented for high-order allocations. Wake up kcompactd
From patchwork Thu Nov 8 09:12:18 2018
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 10673817
From: Mel Gorman
To: Linux-MM
Cc: Andrew Morton, Vlastimil Babka, David Rientjes, Andrea Arcangeli,
 Zi Yan, LKML, Mel Gorman
Subject: [PATCH 4/4] mm: Stall movable allocations until kswapd progresses
 during serious external fragmentation event
Date: Thu, 8 Nov 2018 09:12:18 +0000
Message-Id: <20181108091218.32715-5-mgorman@techsingularity.net>
In-Reply-To: <20181108091218.32715-1-mgorman@techsingularity.net>
References: <20181108091218.32715-1-mgorman@techsingularity.net>

An event that potentially causes external fragmentation problems has
already been described, but there are degrees of severity. A "serious"
event is defined as one that steals a contiguous range of pages of an
order lower than fragment_stall_order (PAGE_ALLOC_COSTLY_ORDER by
default). If a movable allocation request that is allowed to sleep needs
to steal a small block, it schedules until kswapd makes progress or a
timeout passes. The watermarks are also boosted slightly faster so that
kswapd makes a greater effort to reclaim enough pages to avoid the
fragmentation event.

This stall is not guaranteed to avoid serious fragmentation events. If
memory pressure is high enough, the pages freed by kswapd may be
reallocated, or the free pages may not be in pageblocks that contain only
movable pages. Furthermore, an allocation request that cannot stall
(e.g. atomic allocations) or unmovable/reclaimable allocations will still
proceed without stalling. The worst-case scenario for stalling is a
combination of high memory pressure, where kswapd is having trouble
keeping free pages over the pfmemalloc_reserve, and movable allocations
that are fragmenting memory; in this case, an allocation request may
sleep for longer. There are both vmstats to identify that stalls are
happening and a tracepoint to quantify the stall durations. Note that the
granularity of the stall detection is a jiffy, so the delay accounting is
not precise. A sketch of the stall condition follows.
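The stall condition described above can be summarised as the following
predicate. This is a reading of the commit message, not the patch's own
code (the mm/page_alloc.c hunk is truncated in this archive), and the
MIGRATE_MOVABLE value is illustrative only:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER	3
#define MIGRATE_MOVABLE		1	/* illustrative value */

static int fragment_stall_order = PAGE_ALLOC_COSTLY_ORDER;

/*
 * Only movable allocations that may sleep are stalled, and only when
 * the block being stolen is of an order below fragment_stall_order,
 * i.e. when the steal counts as a serious fragmentation event.
 */
static bool should_stall(int migratetype, bool can_sleep, int fallback_order)
{
	if (!fragment_stall_order)
		return false;	/* feature disabled via the sysctl */
	if (migratetype != MIGRATE_MOVABLE || !can_sleep)
		return false;	/* e.g. atomic or unmovable requests */
	return fallback_order < fragment_stall_order;
}

int main(void)
{
	printf("movable, sleepable, order-2 steal: %d\n",
	       should_stall(MIGRATE_MOVABLE, true, 2));
	printf("movable, atomic,    order-2 steal: %d\n",
	       should_stall(MIGRATE_MOVABLE, false, 2));
	return 0;
}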
1-socket Skylake machine
config-global-dhp__workload_thpfioscale XFS (no special madvise)
4 fio threads, 1 THP allocating thread
--------------------------------------

4.20-rc1 extfrag events < order 9:  1023463
4.20-rc1+patch:                      358574 (65% reduction)
4.20-rc1+patch1-3:                    19274 (98% reduction)
4.20-rc1+patch1-4:                     1094 (99.9% reduction)

                                  4.20.0-rc1            4.20.0-rc1
                                  boost-v3r1            stall-v3r1
Amean     fault-base-1     659.85 (   0.00%)     658.74 (   0.17%)
Amean     fault-huge-1     172.19 (   0.00%)     168.00 (   2.43%)

thpfioscale Percentage Faults Huge
                             4.20.0-rc1            4.20.0-rc1
                             boost-v3r1            stall-v3r1
Percentage huge-1       1.68 (   0.00%)       0.88 ( -47.52%)

Fragmentation events are now reduced to negligible levels. The latencies
and allocation success rates are roughly similar. Over the course of 16
minutes, there were 52 stalls due to fragmentation avoidance with a total
stall time of 0.2 seconds.

1-socket Skylake machine
global-dhp__workload_thpfioscale-madvhugepage-xfs (MADV_HUGEPAGE)
-----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:  342549
4.20-rc1+patch:                     337890 ( 1% reduction)
4.20-rc1+patch1-3:                   12801 (96% reduction)
4.20-rc1+patch1-4:                    1112 (99.7% reduction)

                                  4.20.0-rc1            4.20.0-rc1
                                  boost-v3r1            stall-v3r1
Amean     fault-base-1    1578.91 (   0.00%)    1647.00 (  -4.31%)
Amean     fault-huge-1    1090.23 (   0.00%)     559.31 *  48.70%*

                             4.20.0-rc1            4.20.0-rc1
                             boost-v3r1            stall-v3r1
Percentage huge-1      82.59 (   0.00%)      99.98 (  21.05%)

The fragmentation events were reduced and the latencies are good. This is
a big difference between v2 and v3 of the series, as v2 had stalls that
reached the timeout of HZ/10 whereas a timeout of HZ/50 gives better
latencies without compromising on fragmentation events or allocation
success rates. There were 219 stalls over the course of 16 minutes for a
total stall time of roughly 1 second (as opposed to 11 seconds with
HZ/10). The distribution of stalls (count followed by duration in
microseconds) is as follows:

    209  4000
      1  8000
      9 20000

This shows that the majority of stalls were for just one jiffy.

2-socket Haswell machine
config-global-dhp__workload_thpfioscale XFS (no special madvise)
4 fio threads, 5 THP allocating threads
----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:  209820
4.20-rc1+patch:                     185923 (11% reduction)
4.20-rc1+patch1-3:                   11240 (95% reduction)
4.20-rc1+patch1-4:                    8709 (96% reduction)

                                  4.20.0-rc1            4.20.0-rc1
                                  boost-v3r1            stall-v3r1
Amean     fault-base-5    1395.28 (   0.00%)    1335.23 (   4.30%)
Amean     fault-huge-5     539.69 (   0.00%)     614.88 * -13.93%*

                             4.20.0-rc1            4.20.0-rc1
                             boost-v3r1            stall-v3r1
Percentage huge-5       0.53 (   0.00%)       2.16 ( 306.25%)

There is a slight reduction in fragmentation events, but it is slight
enough that it may be due to luck. There is a small increase in
latencies, which is partially offset by a slight increase in THP
allocation success rates. There were 65 stalls over the course of 63
minutes with a total stall time of roughly 0.2 seconds.
2-socket Haswell machine
global-dhp__workload_thpfioscale-madvhugepage-xfs (MADV_HUGEPAGE)
-----------------------------------------------------------------

4.20-rc1 extfrag events < order 9:   167464
4.20-rc1+patch:                      130081 (22% reduction)
4.20-rc1+patch1-3:                    12057 (92% reduction)
4.20-rc1+patch1-4:                    11494 (93% reduction)

thpfioscale Fault Latencies
                                   4.20.0-rc1             4.20.0-rc1
                                   boost-v3r1             stall-v3r1
Amean     fault-base-5     8691.83 (   0.00%)     7380.80 (  15.08%)
Amean     fault-huge-5     2899.83 (   0.00%)     4066.94 * -40.25%*

                                   4.20.0-rc1             4.20.0-rc1
                                   boost-v3r1             stall-v3r1
Percentage huge-5               95.55 (   0.00%)       98.98 (   3.59%)

The fragmentation events are reduced and, while there is some wobble on
the latency, the success rate is near 100% while under heavy pressure.
There were 2016 stalls over the course of 85 minutes with a total stall
time of roughly 8 seconds.

This patch does reduce fragmentation rates overall, but it is not free:
some allocations can stall for short periods of time and there are
knock-on effects to latency when THP allocation success rates are
higher. While this is within acceptable limits for the adverse test
case, there may be other workloads that cannot tolerate the stalls. If
that happens, the tunable can be used to disable the feature or, more
ideally, the test case can be made available for analysis to see
whether the stall behaviour can be reduced while still limiting the
fragmentation events. On the flip side, it has been checked that
setting fragment_stall_order to 9 eliminates fragmentation events
entirely. An example tuning session is sketched below.
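As an illustrative sketch of that tuning (the tunable and its semantics
are as documented in the patch below; the session itself is
hypothetical):

  # Default is PAGE_ALLOC_COSTLY_ORDER + 1, i.e. 4 with a costly order of 3
  $ cat /proc/sys/vm/fragment_stall_order
  4

  # Disable stalling entirely if the latency cannot be tolerated
  $ echo 0 > /proc/sys/vm/fragment_stall_order

  # Stall for anything below the pageblock order (order-9 on 64-bit x86)
  # to strongly, though not perfectly, control external fragmentation
  $ echo 9 > /proc/sys/vm/fragment_stall_order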
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 Documentation/sysctl/vm.txt   | 23 +++++++++++
 include/linux/mm.h            |  1 +
 include/linux/mmzone.h        |  2 +
 include/linux/vm_event_item.h |  1 +
 include/trace/events/kmem.h   | 21 ++++++++++
 kernel/sysctl.c               | 10 +++++
 mm/internal.h                 |  1 +
 mm/page_alloc.c               | 94 +++++++++++++++++++++++++++++++++++++------
 mm/vmstat.c                   |  1 +
 9 files changed, 142 insertions(+), 12 deletions(-)

diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt
index 4dff1b75229b..7d4f41bf7902 100644
--- a/Documentation/sysctl/vm.txt
+++ b/Documentation/sysctl/vm.txt
@@ -31,6 +31,7 @@ files can be found in mm/swap.c.
 - dirty_writeback_centisecs
 - drop_caches
 - extfrag_threshold
+- fragment_stall_order
 - hugetlb_shm_group
 - laptop_mode
 - legacy_va_layout
@@ -275,6 +276,28 @@ any throttling.
 
 ==============================================================
 
+fragment_stall_order
+
+External fragmentation control is managed on a pageblock level where the
+page allocator tries to avoid mixing pages of different mobility within page
+blocks (e.g. order 9 on 64-bit x86). If external fragmentation is perfectly
+controlled then a THP allocation will often succeed up to the number of
+movable pageblocks in the system as reported by /proc/pagetypeinfo.
+
+When memory is low, the system may have to mix pageblocks and will wake
+kswapd to try to control future fragmentation. fragment_stall_order controls
+whether the allocating task will stall, if possible, until kswapd makes some
+progress in preference to fragmenting the system. This incurs a small stall
+penalty in exchange for future success at allocating huge pages. If the
+stalls are undesirable and high-order allocations are irrelevant then this
+can be disabled by writing 0 to the tunable. Writing the pageblock order
+will strongly (but not perfectly) control external fragmentation.
+
+The default will stall for fragmenting allocations up to and including
+PAGE_ALLOC_COSTLY_ORDER (defined as order-3 at the time of writing).
+
+==============================================================
+
 hugetlb_shm_group
 
 hugetlb_shm_group contains group id that is allowed to create SysV
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 81926daf6dfb..ef98eb3f8360 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2196,6 +2196,7 @@ extern void zone_pcp_reset(struct zone *zone);
 extern int min_free_kbytes;
 extern int watermark_boost_factor;
 extern int watermark_scale_factor;
+extern int fragment_stall_order;
 
 /* nommu.c */
 extern atomic_long_t mmap_pages_allocated;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d352c1dab486..cffec484ac8a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -890,6 +890,8 @@ int watermark_boost_factor_sysctl_handler(struct ctl_table *, int,
 					void __user *, size_t *, loff_t *);
 int watermark_scale_factor_sysctl_handler(struct ctl_table *, int,
 					void __user *, size_t *, loff_t *);
+int fragment_stall_order_sysctl_handler(struct ctl_table *, int,
+					void __user *, size_t *, loff_t *);
 extern int sysctl_lowmem_reserve_ratio[MAX_NR_ZONES];
 int lowmem_reserve_ratio_sysctl_handler(struct ctl_table *, int,
 					void __user *, size_t *, loff_t *);
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 47a3441cf4c4..7661abe5236e 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -43,6 +43,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		PAGEOUTRUN, PGROTATED,
 		DROP_PAGECACHE, DROP_SLAB,
 		OOM_KILL,
+		FRAGMENTSTALL,
 #ifdef CONFIG_NUMA_BALANCING
 		NUMA_PTE_UPDATES,
 		NUMA_HUGE_PTE_UPDATES,
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index eb57e3037deb..caadd8681ac5 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -315,6 +315,27 @@ TRACE_EVENT(mm_page_alloc_extfrag,
 		__entry->change_ownership)
 );
 
+TRACE_EVENT(mm_fragmentation_stall,
+
+	TP_PROTO(int nid, unsigned long duration),
+
+	TP_ARGS(nid, duration),
+
+	TP_STRUCT__entry(
+		__field(	int,		nid		)
+		__field(	unsigned long,	duration	)
+	),
+
+	TP_fast_assign(
+		__entry->nid		= nid;
+		__entry->duration	= duration;
+	),
+
+	TP_printk("nid=%d duration=%lu",
+		__entry->nid,
+		__entry->duration)
+);
+
 #endif /* _TRACE_KMEM_H */
 
 /* This part must be outside protection */
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 1825f712e73b..eb09c79ddbef 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -126,6 +126,7 @@ static int zero;
 static int __maybe_unused one = 1;
 static int __maybe_unused two = 2;
 static int __maybe_unused four = 4;
+static int __maybe_unused max_order = MAX_ORDER;
 static unsigned long one_ul = 1;
 static int one_hundred = 100;
 static int one_thousand = 1000;
@@ -1479,6 +1480,15 @@ static struct ctl_table vm_table[] = {
 		.extra1		= &one,
 		.extra2		= &one_thousand,
 	},
+	{
+		.procname	= "fragment_stall_order",
+		.data		= &fragment_stall_order,
+		.maxlen		= sizeof(fragment_stall_order),
+		.mode		= 0644,
+		.proc_handler	= fragment_stall_order_sysctl_handler,
+		.extra1		= &zero,
+		.extra2		= &max_order,
+	},
 	{
 		.procname	= "percpu_pagelist_fraction",
 		.data		= &percpu_pagelist_fraction,
diff --git a/mm/internal.h b/mm/internal.h
index 544355156c92..5506a4596d59 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -489,6 +489,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
 #else
 #define ALLOC_NOFRAGMENT	0x0
 #endif
+#define ALLOC_FRAGMENT_STALL	0x200 /* stall if fragmenting heavily */
 
 enum ttu_flags;
 struct tlbflush_unmap_batch;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4abac725a149..7db5d95db0d6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -265,6 +265,7 @@ int min_free_kbytes = 1024;
 int user_min_free_kbytes = -1;
 int watermark_boost_factor __read_mostly = 15000;
 int watermark_scale_factor = 10;
+int fragment_stall_order __read_mostly = (PAGE_ALLOC_COSTLY_ORDER + 1);
 
 static unsigned long nr_kernel_pages __meminitdata;
 static unsigned long nr_all_pages __meminitdata;
@@ -2130,9 +2131,10 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
 	return false;
 }
 
-static inline void boost_watermark(struct zone *zone)
+static inline void boost_watermark(struct zone *zone, bool fast_boost)
 {
 	unsigned long max_boost;
+	unsigned long nr;
 
 	if (!watermark_boost_factor)
 		return;
@@ -2140,9 +2142,36 @@ static inline void boost_watermark(struct zone *zone)
 	max_boost = mult_frac(wmark_pages(zone, WMARK_HIGH),
 			watermark_boost_factor, 10000);
 	max_boost = max(pageblock_nr_pages, max_boost);
+	nr = pageblock_nr_pages;
 
-	zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages,
-		max_boost);
+	/* Scale relative to the MIGRATE_PCPTYPES similar to min_free_kbytes */
+	if (fast_boost)
+		nr += pageblock_nr_pages * (MIGRATE_PCPTYPES << 1);
+
+	zone->watermark_boost = min(zone->watermark_boost + nr, max_boost);
+}
+
+static void stall_fragmentation(struct zone *pzone)
+{
+	DEFINE_WAIT(wait);
+	long remaining = 0;
+	long timeout = HZ/50;
+	pg_data_t *pgdat = pzone->zone_pgdat;
+
+	if (current->flags & PF_MEMALLOC)
+		return;
+
+	boost_watermark(pzone, true);
+	prepare_to_wait(&pgdat->pfmemalloc_wait, &wait, TASK_INTERRUPTIBLE);
+	if (waitqueue_active(&pgdat->kswapd_wait))
+		wake_up_interruptible(&pgdat->kswapd_wait);
+	remaining = schedule_timeout(timeout);
+	finish_wait(&pgdat->pfmemalloc_wait, &wait);
+	if (remaining != timeout) {
+		trace_mm_fragmentation_stall(pgdat->node_id,
+			jiffies_to_usecs(timeout - remaining));
+		count_vm_event(FRAGMENTSTALL);
+	}
 }
 
 /*
@@ -2153,8 +2182,9 @@ static inline void boost_watermark(struct zone *zone)
  * of pages are free or compatible, we can change migratetype of the pageblock
  * itself, so pages freed in the future will be put on the correct free list.
  */
-static void steal_suitable_fallback(struct zone *zone, struct page *page,
-		int start_type, bool whole_block)
+static bool steal_suitable_fallback(struct zone *zone, struct page *page,
+		int start_type, bool whole_block,
+		unsigned int alloc_flags)
 {
 	unsigned int current_order = page_order(page);
 	struct free_area *area;
@@ -2181,9 +2211,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 * likelihood of future fallbacks. Wake kswapd now as the node
 	 * may be balanced overall and kswapd will not wake naturally.
	 */
-	boost_watermark(zone);
+	boost_watermark(zone, false);
 	wakeup_kswapd(zone, 0, 0, zone_idx(zone));
 
+	if ((alloc_flags & ALLOC_FRAGMENT_STALL) &&
+	    current_order < fragment_stall_order) {
+		return false;
+	}
+
 	/* We are not allowed to try stealing from the whole block */
 	if (!whole_block)
 		goto single_page;
@@ -2224,11 +2259,12 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	    page_group_by_mobility_disabled)
 		set_pageblock_migratetype(page, start_type);
 
-	return;
+	return true;
 
 single_page:
 	area = &zone->free_area[current_order];
 	list_move(&page->lru, &area->free_list[start_type]);
+	return true;
 }
 
 /*
@@ -2467,13 +2503,14 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	page = list_first_entry(&area->free_list[fallback_mt],
 							struct page, lru);
 
-	steal_suitable_fallback(zone, page, start_migratetype, can_steal);
+	if (!steal_suitable_fallback(zone, page, start_migratetype, can_steal,
+				     alloc_flags))
+		return false;
 
 	trace_mm_page_alloc_extfrag(page, order, current_order,
 		start_migratetype, fallback_mt);
 
 	return true;
-
 }
 
 /*
@@ -3340,9 +3377,12 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 						const struct alloc_context *ac)
 {
 	struct zoneref *z = ac->preferred_zoneref;
+	struct zone *pzone = z->zone;
 	struct zone *zone;
 	struct pglist_data *last_pgdat_dirty_limit = NULL;
 	bool no_fallback;
+	bool fragment_stall;
+	int wmark_idx = alloc_flags & ALLOC_WMARK_MASK;
 
 retry:
 	/*
@@ -3350,6 +3390,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
 	 */
 	no_fallback = alloc_flags & ALLOC_NOFRAGMENT;
+	fragment_stall = alloc_flags & ALLOC_FRAGMENT_STALL;
+
 	for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx,
 								ac->nodemask) {
 		struct page *page;
@@ -3388,7 +3430,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			}
 		}
 
-		if (no_fallback) {
+		if (no_fallback || fragment_stall) {
 			int local_nid;
 
 			/*
@@ -3396,9 +3438,12 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			 * fragmenting fallbacks. Locality is more important
 			 * than fragmentation avoidance.
 			 */
-			local_nid = zone_to_nid(ac->preferred_zoneref->zone);
+			local_nid = zone_to_nid(pzone);
 			if (zone_to_nid(zone) != local_nid) {
+				if (fragment_stall)
+					stall_fragmentation(pzone);
 				alloc_flags &= ~ALLOC_NOFRAGMENT;
+				alloc_flags &= ~ALLOC_FRAGMENT_STALL;
 				goto retry;
 			}
 		}
@@ -3474,8 +3519,12 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 	 * It's possible on a UMA machine to get through all zones that are
 	 * fragmented. If avoiding fragmentation, reset and try again
 	 */
-	if (no_fallback) {
+	if (no_fallback || fragment_stall) {
+		if (fragment_stall)
+			stall_fragmentation(pzone);
+
 		alloc_flags &= ~ALLOC_NOFRAGMENT;
+		alloc_flags &= ~ALLOC_FRAGMENT_STALL;
 		goto retry;
 	}
 
@@ -4197,6 +4246,14 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 */
 	alloc_flags = gfp_to_alloc_flags(gfp_mask);
 
+	/*
+	 * Consider stalling for movable allocations in preference to
+	 * fragmenting unmovable/reclaimable pageblocks.
+	 */
+	if ((gfp_mask & (__GFP_MOVABLE|__GFP_DIRECT_RECLAIM)) ==
+			(__GFP_MOVABLE|__GFP_DIRECT_RECLAIM))
+		alloc_flags |= ALLOC_FRAGMENT_STALL;
+
 	/*
 	 * We need to recalculate the starting point for the zonelist iterator
 	 * because we might have used different nodemask in the fast path, or
@@ -4218,6 +4275,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
 	if (page)
 		goto got_pg;
+	alloc_flags &= ~ALLOC_FRAGMENT_STALL;
 
 	/*
 	 * For costly allocations, try direct compaction first, as it's likely
@@ -7585,6 +7643,18 @@ int watermark_boost_factor_sysctl_handler(struct ctl_table *table, int write,
 	return 0;
 }
 
+int fragment_stall_order_sysctl_handler(struct ctl_table *table, int write,
+	void __user *buffer, size_t *length, loff_t *ppos)
+{
+	int rc;
+
+	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (rc)
+		return rc;
+
+	return 0;
+}
+
 int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
 	void __user *buffer, size_t *length, loff_t *ppos)
 {
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 6038ce593ce3..9bb78adf4445 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1211,6 +1211,7 @@ const char * const vmstat_text[] = {
 	"drop_pagecache",
 	"drop_slab",
 	"oom_kill",
+	"fragment_stall",
 #ifdef CONFIG_NUMA_BALANCING
 	"numa_pte_updates",