From patchwork Fri Aug 14 17:31:25 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11715051
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, Joonsoo Kim, Vlastimil Babka, John Dias, Suren Baghdasaryan, pullip.cho@samsung.com, Minchan Kim
Subject: [RFC 1/7] mm: page_owner: split page by order
Date: Fri, 14 Aug 2020 10:31:25 -0700
Message-Id: <20200814173131.2803002-2-minchan@kernel.org>
In-Reply-To: <20200814173131.2803002-1-minchan@kernel.org>
References: <20200814173131.2803002-1-minchan@kernel.org>

split_page_owner has assumed that a high-order page allocation is always
split into order-0 allocations. This patch enables splitting a high-order
allocation into any smaller-order allocations.
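For illustration only (hypothetical call site, not part of this patch): with
the extra new_order argument, a caller that splits an order-4 allocation into
order-2 chunks can have page_owner record order 2 for each chunk instead of
order 0, e.g.:

	/*
	 * Hypothetical call site: an order-4 block is being split into
	 * order-2 chunks, so page_owner should report order 2 (not 0)
	 * for every resulting chunk.
	 */
	split_page_owner(page, 4, 2);	/* was: split_page_owner(page, 4) */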
Signed-off-by: Minchan Kim --- include/linux/page_owner.h | 10 ++++++---- mm/huge_memory.c | 2 +- mm/page_alloc.c | 2 +- mm/page_owner.c | 7 +++++-- 4 files changed, 13 insertions(+), 8 deletions(-) diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h index 8679ccd722e8..60231997edb7 100644 --- a/include/linux/page_owner.h +++ b/include/linux/page_owner.h @@ -11,7 +11,8 @@ extern struct page_ext_operations page_owner_ops; extern void __reset_page_owner(struct page *page, unsigned int order); extern void __set_page_owner(struct page *page, unsigned int order, gfp_t gfp_mask); -extern void __split_page_owner(struct page *page, unsigned int order); +extern void __split_page_owner(struct page *page, unsigned int order, + unsigned int new_order); extern void __copy_page_owner(struct page *oldpage, struct page *newpage); extern void __set_page_owner_migrate_reason(struct page *page, int reason); extern void __dump_page_owner(struct page *page); @@ -31,10 +32,11 @@ static inline void set_page_owner(struct page *page, __set_page_owner(page, order, gfp_mask); } -static inline void split_page_owner(struct page *page, unsigned int order) +static inline void split_page_owner(struct page *page, unsigned int order, + unsigned int new_order) { if (static_branch_unlikely(&page_owner_inited)) - __split_page_owner(page, order); + __split_page_owner(page, order, new_order); } static inline void copy_page_owner(struct page *oldpage, struct page *newpage) { @@ -60,7 +62,7 @@ static inline void set_page_owner(struct page *page, { } static inline void split_page_owner(struct page *page, - unsigned int order) + unsigned int order, unsigned int new_order) { } static inline void copy_page_owner(struct page *oldpage, struct page *newpage) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 07007a8b68fe..2858a342ce87 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2420,7 +2420,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, ClearPageCompound(head); - split_page_owner(head, HPAGE_PMD_ORDER); + split_page_owner(head, HPAGE_PMD_ORDER, 0); /* See comment in __split_huge_page_tail() */ if (PageAnon(head)) { diff --git a/mm/page_alloc.c b/mm/page_alloc.c index cf0b25161fea..8ce30cc50577 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3205,7 +3205,7 @@ void split_page(struct page *page, unsigned int order) for (i = 1; i < (1 << order); i++) set_page_refcounted(page + i); - split_page_owner(page, order); + split_page_owner(page, order, 0); } EXPORT_SYMBOL_GPL(split_page); diff --git a/mm/page_owner.c b/mm/page_owner.c index 360461509423..c7a07b53eb92 100644 --- a/mm/page_owner.c +++ b/mm/page_owner.c @@ -204,7 +204,8 @@ void __set_page_owner_migrate_reason(struct page *page, int reason) page_owner->last_migrate_reason = reason; } -void __split_page_owner(struct page *page, unsigned int order) +void __split_page_owner(struct page *page, unsigned int order, + unsigned int new_order) { int i; struct page_ext *page_ext = lookup_page_ext(page); @@ -213,9 +214,11 @@ void __split_page_owner(struct page *page, unsigned int order) if (unlikely(!page_ext)) return; + VM_BUG_ON_PAGE(order < new_order, page); + for (i = 0; i < (1 << order); i++) { page_owner = get_page_owner(page_ext); - page_owner->order = 0; + page_owner->order = new_order; page_ext = page_ext_next(page_ext); } } From patchwork Fri Aug 14 17:31:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Minchan Kim X-Patchwork-Id: 11715049 
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, Joonsoo Kim, Vlastimil Babka, John Dias, Suren Baghdasaryan, pullip.cho@samsung.com, Minchan Kim
Subject: [RFC 2/7] mm: introduce split_page_by_order
Date: Fri, 14 Aug 2020 10:31:26 -0700
Message-Id: <20200814173131.2803002-3-minchan@kernel.org>
In-Reply-To: <20200814173131.2803002-1-minchan@kernel.org>
References: <20200814173131.2803002-1-minchan@kernel.org>

This patch introduces split_page_by_order to support splitting a high-order
page into a group of smaller high-order pages, and uses it in split_map_pages
to support the upcoming high-order bulk allocation. This patch shouldn't
change any behavior.
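For illustration only (hypothetical caller, not part of the patch), a minimal
sketch of how split_page_by_order could be used, assuming a non-compound
order-4 page is split into four order-2 chunks:

	/*
	 * Hypothetical example: split one non-compound order-4 page
	 * (16 base pages) into 4 self-standing order-2 pages.
	 * Assumes the page came from alloc_pages(GFP_KERNEL, 4).
	 */
	struct page *page = alloc_pages(GFP_KERNEL, 4);

	if (page) {
		int i;

		split_page_by_order(page, 4, 2);
		/* now page + i * 4 (i = 0..3) are independent order-2 pages */
		for (i = 0; i < 4; i++)
			__free_pages(page + i * 4, 2);
	}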
Signed-off-by: Minchan Kim --- include/linux/mm.h | 2 ++ mm/compaction.c | 2 +- mm/page_alloc.c | 27 +++++++++++++++++++-------- 3 files changed, 22 insertions(+), 9 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 8ab941cf73f4..9a51abbe8625 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -849,6 +849,8 @@ void __put_page(struct page *page); void put_pages_list(struct list_head *pages); +void split_page_by_order(struct page *page, unsigned int order, + unsigned int new_order); void split_page(struct page *page, unsigned int order); /* diff --git a/mm/compaction.c b/mm/compaction.c index 176dcded298e..f31799a841f2 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -98,7 +98,7 @@ static void split_map_pages(struct list_head *list) post_alloc_hook(page, order, __GFP_MOVABLE); if (order) - split_page(page, order); + split_page_by_order(page, order, 0); for (i = 0; i < nr_pages; i++) { list_add(&page->lru, &tmp_list); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 8ce30cc50577..4caab47377a7 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3188,6 +3188,24 @@ void free_unref_page_list(struct list_head *list) local_irq_restore(flags); } +/* + * split_page_by_order takes a non-compound higher-order page, and splits + * it into n (1 << (order - new_order)) sub-order pages: page[0..n] + * Each sub-page must be freed individually. + */ +void split_page_by_order(struct page *page, unsigned int order, + unsigned int new_order) +{ + int i; + + VM_BUG_ON_PAGE(PageCompound(page), page); + VM_BUG_ON_PAGE(!page_count(page), page); + + for (i = 1; i < (1 << (order - new_order)); i++) + set_page_refcounted(page + i * (1 << new_order)); + split_page_owner(page, order, new_order); +} + /* * split_page takes a non-compound higher-order page, and splits it into * n (1<
X-Patchwork-Id: 11715053
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, Joonsoo Kim, Vlastimil Babka, John Dias, Suren Baghdasaryan, pullip.cho@samsung.com, Minchan Kim
Subject: [RFC 3/7] mm: compaction: deal with upcoming high-order page splitting
Date: Fri, 14 Aug 2020 10:31:27 -0700
Message-Id: <20200814173131.2803002-4-minchan@kernel.org>
In-Reply-To: <20200814173131.2803002-1-minchan@kernel.org>
References: <20200814173131.2803002-1-minchan@kernel.org>

When compaction isolates free pages, it needs to consider the freed pages'
order and sub-page splitting to support the upcoming high-order page bulk
allocation. Since we now have primitive functions to deal with high-order
page splitting, this patch introduces cc->isolate_order to indicate what
order of pages the API user wants to allocate. It isolates free pages whose
order is greater than or equal to cc->isolate_order and, after isolating
them, splits them into sub-pages of order cc->isolate_order.
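For illustration only (hypothetical internal caller, not part of this patch),
the intended semantics of cc->isolate_order, as shown in the hunks that follow:

	/*
	 * Hypothetical sketch: isolate free pages of order >= 4 from a PFN
	 * range and have them handed back as order-4 chunks.  Any free page
	 * of lower order in the range is skipped rather than isolated.
	 */
	struct compact_control cc = {
		.zone = zone,			/* assumed to be set up elsewhere */
		.alloc_contig = true,
		.isolate_order = 4,		/* minimum order isolated from buddy */
	};

	/* free pages found at order > 4 are split down to order-4 chunks */
	isolate_freepages_range(&cc, start_pfn, end_pfn);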
Signed-off-by: Minchan Kim --- mm/compaction.c | 42 ++++++++++++++++++++++++++++-------------- mm/internal.h | 1 + 2 files changed, 29 insertions(+), 14 deletions(-) diff --git a/mm/compaction.c b/mm/compaction.c index f31799a841f2..76f380cb801d 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -68,7 +68,8 @@ static const unsigned int HPAGE_FRAG_CHECK_INTERVAL_MSEC = 500; #define COMPACTION_HPAGE_ORDER (PMD_SHIFT - PAGE_SHIFT) #endif -static unsigned long release_freepages(struct list_head *freelist) +static unsigned long release_freepages(struct list_head *freelist, + unsigned int order) { struct page *page, *next; unsigned long high_pfn = 0; @@ -76,7 +77,7 @@ static unsigned long release_freepages(struct list_head *freelist) list_for_each_entry_safe(page, next, freelist, lru) { unsigned long pfn = page_to_pfn(page); list_del(&page->lru); - __free_page(page); + __free_pages(page, order); if (pfn > high_pfn) high_pfn = pfn; } @@ -84,7 +85,7 @@ static unsigned long release_freepages(struct list_head *freelist) return high_pfn; } -static void split_map_pages(struct list_head *list) +static void split_map_pages(struct list_head *list, unsigned int split_order) { unsigned int i, order, nr_pages; struct page *page, *next; @@ -94,15 +95,15 @@ static void split_map_pages(struct list_head *list) list_del(&page->lru); order = page_private(page); - nr_pages = 1 << order; + nr_pages = 1 << (order - split_order); post_alloc_hook(page, order, __GFP_MOVABLE); - if (order) - split_page_by_order(page, order, 0); + if (order > split_order) + split_page_by_order(page, order, split_order); for (i = 0; i < nr_pages; i++) { list_add(&page->lru, &tmp_list); - page++; + page += 1 << split_order; } } @@ -547,8 +548,10 @@ static bool compact_unlock_should_abort(spinlock_t *lock, } /* - * Isolate free pages onto a private freelist. If @strict is true, will abort - * returning 0 on any invalid PFNs or non-free pages inside of the pageblock + * Isolate free pages onto a private freelist if order of page is greater + * or equal to cc->isolate_order. If @strict is true, will abort + * returning 0 on any invalid PFNs, pages with order lower than + * cc->isolate_order or non-free pages inside of the pageblock * (even though it may still end up isolating some pages). */ static unsigned long isolate_freepages_block(struct compact_control *cc, @@ -625,8 +628,19 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, goto isolate_fail; } - /* Found a free page, will break it into order-0 pages */ + /* + * Found a free page. will isolate and possibly split the page + * into isolate_order sub pages if the page's order is greater + * than or equal to the isolate_order. Otherwise, it will keep + * going with further pages to isolate them unless strict is + * true. + */ order = page_order(page); + if (order < cc->isolate_order) { + blockpfn += (1UL << order) - 1; + cursor += (1UL << order) - 1; + goto isolate_fail; + } isolated = __isolate_free_page(page, order); if (!isolated) break; @@ -752,11 +766,11 @@ isolate_freepages_range(struct compact_control *cc, } /* __isolate_free_page() does not map the pages */ - split_map_pages(&freelist); + split_map_pages(&freelist, cc->isolate_order); if (pfn < end_pfn) { /* Loop terminated early, cleanup. */ - release_freepages(&freelist); + release_freepages(&freelist, cc->isolate_order); return 0; } @@ -1564,7 +1578,7 @@ static void isolate_freepages(struct compact_control *cc) splitmap: /* __isolate_free_page() does not map the pages */ - split_map_pages(freelist); + split_map_pages(freelist, 0); } /* @@ -2376,7 +2390,7 @@ compact_zone(struct compact_control *cc, struct capture_control *capc) * so we don't leave any returned pages behind in the next attempt. */ if (cc->nr_freepages > 0) { - unsigned long free_pfn = release_freepages(&cc->freepages); + unsigned long free_pfn = release_freepages(&cc->freepages, 0); cc->nr_freepages = 0; VM_BUG_ON(free_pfn == 0); diff --git a/mm/internal.h b/mm/internal.h index 10c677655912..5f1e9d76a623 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -244,6 +244,7 @@ struct compact_control { bool contended; /* Signal lock or sched contention */ bool rescan; /* Rescanning the same pageblock */ bool alloc_contig; /* alloc_contig_range allocation */ + int isolate_order; /* minimum order isolated from buddy */ }; /*
From patchwork Fri Aug 14 17:31:28 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11715059
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, Joonsoo Kim, Vlastimil Babka, John Dias, Suren Baghdasaryan, pullip.cho@samsung.com, Minchan Kim
Subject: [RFC 4/7] mm: factor __alloc_contig_range out
Date: Fri, 14 Aug 2020 10:31:28 -0700
Message-Id: <20200814173131.2803002-5-minchan@kernel.org>
In-Reply-To: <20200814173131.2803002-1-minchan@kernel.org>
References: <20200814173131.2803002-1-minchan@kernel.org>

To prepare for a new API which will reuse most of alloc_contig_range, this
patch factors out the common part as __alloc_contig_range.

Signed-off-by: Minchan Kim --- mm/page_alloc.c | 50 +++++++++++++++++++++++++++---------------------- 1 file changed, 28 insertions(+), 22 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 4caab47377a7..caf393d8b413 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -8401,28 +8401,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc, return 0; } -/** - * alloc_contig_range() -- tries to allocate given range of pages - * @start: start PFN to allocate - * @end: one-past-the-last PFN to allocate - * @migratetype: migratetype of the underlaying pageblocks (either - * #MIGRATE_MOVABLE or #MIGRATE_CMA). All pageblocks - * in range must have the same migratetype and it must - * be either of the two. - * @gfp_mask: GFP mask to use during compaction - * - * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES - * aligned. The PFN range must belong to a single zone. - * - * The first thing this routine does is attempt to MIGRATE_ISOLATE all - * pageblocks in the range. Once isolated, the pageblocks should not - * be modified by others. - * - * Return: zero on success or negative error code. On success all - * pages which PFN is in [start, end) are allocated for the caller and - * need to be freed with free_contig_range(). - */ -int alloc_contig_range(unsigned long start, unsigned long end, +static int __alloc_contig_range(unsigned long start, unsigned long end, unsigned migratetype, gfp_t gfp_mask) { unsigned long outer_start, outer_end; @@ -8555,6 +8534,33 @@ int alloc_contig_range(unsigned long start, unsigned long end, } EXPORT_SYMBOL(alloc_contig_range); +/** + * alloc_contig_range() -- tries to allocate given range of pages + * @start: start PFN to allocate + * @end: one-past-the-last PFN to allocate + * @migratetype: migratetype of the underlaying pageblocks (either + * #MIGRATE_MOVABLE or #MIGRATE_CMA). All pageblocks + * in range must have the same migratetype and it must + * be either of the two. + * @gfp_mask: GFP mask to use during compaction + * + * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES + * aligned. The PFN range must belong to a single zone. + * + * The first thing this routine does is attempt to MIGRATE_ISOLATE all + * pageblocks in the range. Once isolated, the pageblocks should not + * be modified by others. + * + * Return: zero on success or negative error code. On success all + * pages which PFN is in [start, end) are allocated for the caller and + * need to be freed with free_contig_range().
+ */ +int alloc_contig_range(unsigned long start, unsigned long end, + unsigned migratetype, gfp_t gfp_mask) +{ + return __alloc_contig_range(start, end, migratetype, gfp_mask); +} + static int __alloc_contig_pages(unsigned long start_pfn, unsigned long nr_pages, gfp_t gfp_mask) {
From patchwork Fri Aug 14 17:31:29 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11715063
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, Joonsoo Kim, Vlastimil Babka, John Dias, Suren Baghdasaryan, pullip.cho@samsung.com, Minchan Kim
Subject: [RFC 5/7] mm: introduce alloc_pages_bulk API
Date: Fri, 14 Aug 2020 10:31:29 -0700
Message-Id: <20200814173131.2803002-6-minchan@kernel.org>
In-Reply-To: <20200814173131.2803002-1-minchan@kernel.org>
References: <20200814173131.2803002-1-minchan@kernel.org>

Some special HW requires bulk allocation of high-order pages, for example
4800 order-4 pages. One option for meeting that requirement is a CMA area,
because the page allocator with compaction easily fails under memory
pressure and is too slow when repeated 4800 times. However, CMA has its own
drawback: 4800 order-4 cma_alloc calls are also too slow.

To avoid that slowness, we could try to allocate 300M of contiguous memory
at once and then split it into order-4 chunks. The problem with this
approach is that the CMA allocation fails if even a single page in the
range cannot be migrated out, which happens easily with fs writes under
memory pressure.

To solve these issues, this patch introduces alloc_pages_bulk.
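A hedged usage sketch from a hypothetical driver (names and error handling
invented for illustration; the prototype and precise semantics are quoted
just below):

	/*
	 * Hypothetical driver code: try to grab up to 4800 order-4 pages
	 * from a driver-owned movable/CMA PFN range [start_pfn, end_pfn).
	 */
	struct page **pages;
	int nr, i;

	pages = kcalloc(4800, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	nr = alloc_pages_bulk(start_pfn, end_pfn, MIGRATE_CMA, GFP_KERNEL,
			      4, 4800, pages);
	if (nr < 0) {			/* e.g. -EINVAL or -EBUSY */
		kfree(pages);
		return nr;
	}

	/* nr may be less than 4800; each entry is an independent order-4 page */
	for (i = 0; i < nr; i++)
		__free_pages(pages[i], 4);	/* free when the HW is done */
	kfree(pages);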
	int alloc_pages_bulk(unsigned long start, unsigned long end,
			     unsigned int migratetype, gfp_t gfp_mask,
			     unsigned int order, unsigned int nr_elem,
			     struct page **pages);

It investigates the range [start, end) and migrates movable pages out of it
on a best-effort basis (in upcoming patches) to create free pages of the
requested order. The allocated pages are returned via the pages parameter.
The return value is the number of requested-order pages we got, which could
be less than the nr_elem the user asked for.

/**
 * alloc_pages_bulk() -- tries to allocate high order pages
 * by batch from given range [start, end)
 * @start:	start PFN to allocate
 * @end:	one-past-the-last PFN to allocate
 * @migratetype:	migratetype of the underlaying pageblocks (either
 *			#MIGRATE_MOVABLE or #MIGRATE_CMA). All pageblocks
 *			in range must have the same migratetype and it must
 *			be either of the two.
 * @gfp_mask:	GFP mask to use during compaction
 * @order:	page order requested
 * @nr_elem:	the number of high-order pages to allocate
 * @pages:	page array pointer to store allocated pages (must
 *		have space for at least nr_elem elements)
 *
 * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
 * aligned. The PFN range must belong to a single zone.
 *
 * Return: the number of pages allocated on success or negative error code.
 * The allocated pages should be freed using __free_pages
 */

The test does the order-4 * 4800 allocation (i.e., 300MB in total) under a
kernel-build workload. System RAM size is 1.5GB and the CMA area is 500M.
Using CMA to allocate the 300M, 10 out of 10 trials failed, with large
latency (up to several seconds). With this alloc_pages_bulk API, 7 out of 10
trials allocated all 4800 pages; the remaining 3 allocated 4799, 4789 and
4799 pages. All of them completed within 300ms.

Signed-off-by: Minchan Kim --- include/linux/gfp.h | 5 +++ mm/compaction.c | 11 +++-- mm/internal.h | 3 +- mm/page_alloc.c | 97 +++++++++++++++++++++++++++++++++++++++++---- 4 files changed, 102 insertions(+), 14 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 67a0774e080b..79ff38f25def 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -625,6 +625,11 @@ static inline bool pm_suspended_storage(void) /* The below functions must be run on a range from a single zone.
*/ extern int alloc_contig_range(unsigned long start, unsigned long end, unsigned migratetype, gfp_t gfp_mask); +extern int alloc_pages_bulk(unsigned long start, unsigned long end, + unsigned int migratetype, gfp_t gfp_mask, + unsigned int order, unsigned int nr_elem, + struct page **pages); + extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask, int nid, nodemask_t *nodemask); #endif diff --git a/mm/compaction.c b/mm/compaction.c index 76f380cb801d..1e4392f6fec3 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -713,10 +713,10 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, */ unsigned long isolate_freepages_range(struct compact_control *cc, - unsigned long start_pfn, unsigned long end_pfn) + unsigned long start_pfn, unsigned long end_pfn, + struct list_head *freepage_list) { unsigned long isolated, pfn, block_start_pfn, block_end_pfn; - LIST_HEAD(freelist); pfn = start_pfn; block_start_pfn = pageblock_start_pfn(pfn); @@ -748,7 +748,7 @@ isolate_freepages_range(struct compact_control *cc, break; isolated = isolate_freepages_block(cc, &isolate_start_pfn, - block_end_pfn, &freelist, 0, true); + block_end_pfn, freepage_list, 0, true); /* * In strict mode, isolate_freepages_block() returns 0 if @@ -766,15 +766,14 @@ isolate_freepages_range(struct compact_control *cc, } /* __isolate_free_page() does not map the pages */ - split_map_pages(&freelist, cc->isolate_order); + split_map_pages(freepage_list, cc->isolate_order); if (pfn < end_pfn) { /* Loop terminated early, cleanup. */ - release_freepages(&freelist, cc->isolate_order); + release_freepages(freepage_list, cc->isolate_order); return 0; } - /* We don't use freelists for anything. */ return pfn; } diff --git a/mm/internal.h b/mm/internal.h index 5f1e9d76a623..f9b86257fae2 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -258,7 +258,8 @@ struct capture_control { unsigned long isolate_freepages_range(struct compact_control *cc, - unsigned long start_pfn, unsigned long end_pfn); + unsigned long start_pfn, unsigned long end_pfn, + struct list_head *freepage_list); unsigned long isolate_migratepages_range(struct compact_control *cc, unsigned long low_pfn, unsigned long end_pfn); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index caf393d8b413..cdf956feae80 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -8402,10 +8402,14 @@ static int __alloc_contig_migrate_range(struct compact_control *cc, } static int __alloc_contig_range(unsigned long start, unsigned long end, - unsigned migratetype, gfp_t gfp_mask) + unsigned int migratetype, gfp_t gfp_mask, + unsigned int alloc_order, + struct list_head *freepage_list) { unsigned long outer_start, outer_end; unsigned int order; + struct page *page, *page2; + unsigned long pfn; int ret = 0; struct compact_control cc = { @@ -8417,6 +8421,7 @@ static int __alloc_contig_range(unsigned long start, unsigned long end, .no_set_skip_hint = true, .gfp_mask = current_gfp_context(gfp_mask), .alloc_contig = true, + .isolate_order = alloc_order, }; INIT_LIST_HEAD(&cc.migratepages); @@ -8515,17 +8520,42 @@ static int __alloc_contig_range(unsigned long start, unsigned long end, } /* Grab isolated pages from freelists. 
*/ - outer_end = isolate_freepages_range(&cc, outer_start, end); + outer_end = isolate_freepages_range(&cc, outer_start, end, + freepage_list); if (!outer_end) { ret = -EBUSY; goto done; } /* Free head and tail (if any) */ - if (start != outer_start) - free_contig_range(outer_start, start - outer_start); - if (end != outer_end) - free_contig_range(end, outer_end - end); + if (start != outer_start) { + if (alloc_order == 0) + free_contig_range(outer_start, start - outer_start); + else { + list_for_each_entry_safe(page, page2, + freepage_list, lru) { + pfn = page_to_pfn(page); + if (pfn >= start) + break; + list_del(&page->lru); + __free_pages(page, alloc_order); + } + } + } + if (end != outer_end) { + if (alloc_order == 0) + free_contig_range(end, outer_end - end); + else { + list_for_each_entry_safe_reverse(page, page2, + freepage_list, lru) { + pfn = page_to_pfn(page); + if ((pfn + (1 << alloc_order)) <= end) + break; + list_del(&page->lru); + __free_pages(page, alloc_order); + } + } + } done: undo_isolate_page_range(pfn_max_align_down(start), @@ -8558,8 +8588,61 @@ EXPORT_SYMBOL(alloc_contig_range); int alloc_contig_range(unsigned long start, unsigned long end, unsigned migratetype, gfp_t gfp_mask) { - return __alloc_contig_range(start, end, migratetype, gfp_mask); + LIST_HEAD(freepage_list); + + return __alloc_contig_range(start, end, migratetype, + gfp_mask, 0, &freepage_list); +} + +/** + * alloc_pages_bulk() -- tries to allocate high order pages + * by batch from given range [start, end) + * @start: start PFN to allocate + * @end: one-past-the-last PFN to allocate + * @migratetype: migratetype of the underlaying pageblocks (either + * #MIGRATE_MOVABLE or #MIGRATE_CMA). All pageblocks + * in range must have the same migratetype and it must + * be either of the two. + * @gfp_mask: GFP mask to use during compaction + * @order: page order requested + * @nr_elem: the number of high-order pages to allocate + * @pages: page array pointer to store allocated pages (must + * have space for at least nr_elem elements) + * + * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES + * aligned. The PFN range must belong to a single zone. + * + * Return: the number of pages allocated on success or negative error code. 
+ * The allocated pages need to be free with __free_pages + */ +int alloc_pages_bulk(unsigned long start, unsigned long end, + unsigned int migratetype, gfp_t gfp_mask, + unsigned int order, unsigned int nr_elem, + struct page **pages) +{ + int ret; + struct page *page, *page2; + LIST_HEAD(freepage_list); + + if (order >= MAX_ORDER) + return -EINVAL; + + ret = __alloc_contig_range(start, end, migratetype, + gfp_mask, order, &freepage_list); + if (ret) + return ret; + + /* keep pfn ordering */ + list_for_each_entry_safe(page, page2, &freepage_list, lru) { + if (ret < nr_elem) + pages[ret++] = page; + else + __free_pages(page, order); + } + + return ret; } +EXPORT_SYMBOL(alloc_pages_bulk); static int __alloc_contig_pages(unsigned long start_pfn, unsigned long nr_pages, gfp_t gfp_mask)
From patchwork Fri Aug 14 17:31:30 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11715065
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, Joonsoo Kim, Vlastimil Babka, John Dias, Suren Baghdasaryan, pullip.cho@samsung.com, Minchan Kim
Subject: [RFC 6/7] mm: make alloc_pages_bulk best effort
Date: Fri, 14 Aug 2020 10:31:30 -0700
Message-Id: <20200814173131.2803002-7-minchan@kernel.org>
In-Reply-To: <20200814173131.2803002-1-minchan@kernel.org>
References: <20200814173131.2803002-1-minchan@kernel.org>

alloc_pages_bulk takes a best-effort approach to making high-order pages,
so it should keep going over the rest of the range even when it encounters
non-movable pages.

To achieve that, this patch introduces the ALLOW_ISOLATE_FAILURE flag for
start_isolate_page_range and an alloc_bulk field in compact_control, so the
allocation can proceed over the rest of the range even if some failures
happen during isolation, migration or free-page isolation. With the new
flag it will:

* skip the pageblock if its migratetype cannot be changed to MIGRATE_ISOLATE
* skip the pageblock if it couldn't migrate a page for some reason
* skip the pageblock if it couldn't isolate free pages for some reason
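For illustration only (hypothetical caller-side view, not part of the patch),
the difference this makes to a user of the bulk API:

	/*
	 * Hypothetical illustration: with ALLOW_ISOLATE_FAILURE the bulk
	 * path skips troublesome pageblocks instead of failing hard, so a
	 * caller can simply accept (or retry) a short allocation.
	 */
	nr = alloc_pages_bulk(start_pfn, end_pfn, MIGRATE_CMA, GFP_KERNEL,
			      4, nr_elem, pages);
	if (nr >= 0 && nr < nr_elem)
		pr_debug("bulk alloc short: got %d of %d chunks\n", nr, nr_elem);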
To achieve that, this patch introduces an ALLOW_ISOLATE_FAILURE flag
for start_isolate_page_range and an alloc_bulk field in compact_control
so that the allocation can proceed over the rest of the range even when
some failures happen during isolation, migration, or free-page
isolation. With the new flag, the allocator will:

 * skip the pageblock if it cannot be switched to MIGRATE_ISOLATE
 * skip the pageblock if a page in it could not be migrated
 * skip the pageblock if its free pages could not be isolated

Signed-off-by: Minchan Kim
---
 include/linux/page-isolation.h |  1 +
 mm/compaction.c                | 17 +++++++++++++----
 mm/internal.h                  |  1 +
 mm/page_alloc.c                | 32 +++++++++++++++++++++++---------
 mm/page_isolation.c            |  4 ++++
 5 files changed, 42 insertions(+), 13 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 572458016331..b8b6789d2bd9 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -32,6 +32,7 @@ static inline bool is_migrate_isolate(int migratetype)
 
 #define MEMORY_OFFLINE	0x1
 #define REPORT_FAILURE	0x2
+#define ALLOW_ISOLATE_FAILURE	0x4
 
 struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 				 int migratetype, int flags);
diff --git a/mm/compaction.c b/mm/compaction.c
index 1e4392f6fec3..94dee139ce0d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -748,15 +748,24 @@ isolate_freepages_range(struct compact_control *cc,
 			break;
 
 		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
-					block_end_pfn, freepage_list, 0, true);
+					block_end_pfn, freepage_list,
+					cc->alloc_bulk ? 1 : 0,
+					cc->alloc_bulk ? false : true);
 
 		/*
 		 * In strict mode, isolate_freepages_block() returns 0 if
 		 * there are any holes in the block (ie. invalid PFNs or
-		 * non-free pages).
+		 * non-free pages) so just stop the isolation in the case.
+		 * However, in alloc_bulk mode, we could check further range
+		 * to find affordable high order free pages so keep going
+		 * with next pageblock.
 		 */
-		if (!isolated)
-			break;
+		if (!isolated) {
+			if (!cc->alloc_bulk)
+				break;
+			pfn = block_end_pfn;
+			continue;
+		}
 
 		/*
 		 * If we managed to isolate pages, it is always (1 << n) *
diff --git a/mm/internal.h b/mm/internal.h
index f9b86257fae2..71f00284326e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -244,6 +244,7 @@ struct compact_control {
 	bool contended;			/* Signal lock or sched contention */
 	bool rescan;			/* Rescanning the same pageblock */
 	bool alloc_contig;		/* alloc_contig_range allocation */
+	bool alloc_bulk;		/* alloc_pages_bulk allocation */
 	int isolate_order;		/* minimum order isolated from buddy */
 };
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cdf956feae80..66cea47ae2b6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8359,8 +8359,8 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	/* This function is based on compact_zone() from compaction.c. */
 	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
-	unsigned int tries = 0;
-	int ret = 0;
+	unsigned int tries;
+	int ret;
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
@@ -8368,6 +8368,8 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 
 	migrate_prep();
 
+next:
+	tries = ret = 0;
 	while (pfn < end || !list_empty(&cc->migratepages)) {
 		if (fatal_signal_pending(current)) {
 			ret = -EINTR;
@@ -8396,15 +8398,25 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	}
 
 	if (ret < 0) {
 		putback_movable_pages(&cc->migratepages);
-		return ret;
+		if (cc->alloc_bulk && pfn < end) {
+			/*
+			 * -EINTR means current process has fatal signal.
+			 * -ENOMEM means there is no free memory.
+			 * In these cases, stop the effort to work with
+			 * next blocks.
+			 */
+			if (ret != -EINTR && ret != -ENOMEM)
+				goto next;
+		}
 	}
-	return 0;
+	return ret;
 }
 
 static int __alloc_contig_range(unsigned long start, unsigned long end,
 			unsigned int migratetype, gfp_t gfp_mask,
 			unsigned int alloc_order,
-			struct list_head *freepage_list)
+			struct list_head *freepage_list,
+			bool alloc_bulk)
 {
 	unsigned long outer_start, outer_end;
 	unsigned int order;
@@ -8422,6 +8434,7 @@ static int __alloc_contig_range(unsigned long start, unsigned long end,
 		.gfp_mask = current_gfp_context(gfp_mask),
 		.alloc_contig = true,
 		.isolate_order = alloc_order,
+		.alloc_bulk = alloc_bulk,
 	};
 	INIT_LIST_HEAD(&cc.migratepages);
 
@@ -8450,7 +8463,8 @@ static int __alloc_contig_range(unsigned long start, unsigned long end,
 	 */
 
 	ret = start_isolate_page_range(pfn_max_align_down(start),
-				       pfn_max_align_up(end), migratetype, 0);
+				       pfn_max_align_up(end), migratetype,
+				       alloc_bulk ? ALLOW_ISOLATE_FAILURE : 0);
 	if (ret < 0)
 		return ret;
 
@@ -8512,7 +8526,7 @@ static int __alloc_contig_range(unsigned long start, unsigned long end,
 	}
 
 	/* Make sure the range is really isolated. */
-	if (test_pages_isolated(outer_start, end, 0)) {
+	if (!alloc_bulk && test_pages_isolated(outer_start, end, 0)) {
 		pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
 				    __func__, outer_start, end);
 		ret = -EBUSY;
@@ -8591,7 +8605,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	LIST_HEAD(freepage_list);
 
 	return __alloc_contig_range(start, end, migratetype,
-				gfp_mask, 0, &freepage_list);
+				gfp_mask, 0, &freepage_list, false);
 }
 
 /**
@@ -8628,7 +8642,7 @@ int alloc_pages_bulk(unsigned long start, unsigned long end,
 		return -EINVAL;
 
 	ret = __alloc_contig_range(start, end, migratetype,
-			gfp_mask, order, &freepage_list);
+			gfp_mask, order, &freepage_list, true);
 	if (ret)
 		return ret;
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 242c03121d73..6208db89a31b 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -154,6 +154,8 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  *			and PageOffline() pages.
  *			REPORT_FAILURE - report details about the failure to
  *			isolate the range
+ *			ALLOW_ISOLATE_FAILURE - skip the pageblock of the range
+ *			whenever we fail to set MIGRATE_ISOLATE
  *
  * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
 * the range will never be allocated. Any free pages and pages freed in the
@@ -190,6 +192,8 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 		page = __first_valid_page(pfn, pageblock_nr_pages);
 		if (page) {
 			if (set_migratetype_isolate(page, migratetype, flags)) {
+				if (flags & ALLOW_ISOLATE_FAILURE)
+					continue;
 				undo_pfn = pfn;
 				goto undo;
 			}

From patchwork Fri Aug 14 17:31:31 2020
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 11715067
From: Minchan Kim
To: Andrew Morton
Cc: linux-mm, Joonsoo Kim, Vlastimil Babka, John Dias, Suren Baghdasaryan, pullip.cho@samsung.com, Minchan Kim
Subject: [RFC 7/7] mm/page_isolation: avoid drain_all_pages for alloc_pages_bulk
Date: Fri, 14 Aug 2020 10:31:31 -0700
Message-Id: <20200814173131.2803002-8-minchan@kernel.org>
In-Reply-To: <20200814173131.2803002-1-minchan@kernel.org>
References: <20200814173131.2803002-1-minchan@kernel.org>
MIME-Version: 1.0

Draining the per-CPU pages of every CPU each time a pageblock in a
large range is marked MIGRATE_ISOLATE is too expensive, given that
alloc_pages_bulk is only a best-effort interface. Thus, this patch
skips the flush when the caller passes ALLOW_ISOLATE_FAILURE.
Signed-off-by: Minchan Kim
---
 mm/page_isolation.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 6208db89a31b..e70bdded02e9 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -54,9 +54,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 out:
 	spin_unlock_irqrestore(&zone->lock, flags);
-	if (!ret) {
-		drain_all_pages(zone);
-	} else {
+	if (ret) {
 		WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
 
 		if ((isol_flags & REPORT_FAILURE) && unmovable)
@@ -197,6 +195,8 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 				undo_pfn = pfn;
 				goto undo;
 			}
+			if (!(flags & ALLOW_ISOLATE_FAILURE))
+				drain_all_pages(page_zone(page));
 			nr_isolate_pageblock++;
 		}
 	}
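Taken together, patches 6 and 7 change the per-pageblock isolation loop roughly as sketched below. This is an editor's simplified illustration of the resulting start_isolate_page_range() behaviour, not code from the series; locking, the undo path, and the rest of the function are omitted.

	/* Simplified per-pageblock loop after this series (illustration only). */
	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		page = __first_valid_page(pfn, pageblock_nr_pages);
		if (!page)
			continue;

		if (set_migratetype_isolate(page, migratetype, flags)) {
			/* Best-effort callers simply skip a block they cannot isolate. */
			if (flags & ALLOW_ISOLATE_FAILURE)
				continue;
			undo_pfn = pfn;
			goto undo;
		}

		/* Only callers that need the whole range pay for the PCP drain. */
		if (!(flags & ALLOW_ISOLATE_FAILURE))
			drain_all_pages(page_zone(page));

		nr_isolate_pageblock++;
	}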