From patchwork Thu Apr 30 20:11:23 2020
X-Patchwork-Submitter: Daniel Jordan <daniel.m.jordan@oracle.com>
X-Patchwork-Id: 11521451
From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: Andrew Morton, Herbert Xu, Steffen Klassert
Cc: Alex Williamson, Alexander Duyck, Dan Williams, Dave Hansen,
    David Hildenbrand, Jason Gunthorpe, Jonathan Corbet, Josh Triplett,
    Kirill Tkhai, Michal Hocko, Pavel Machek, Pavel Tatashin,
    Peter Zijlstra, Randy Dunlap, Shile Zhang, Tejun Heo, Zi Yan,
    linux-crypto@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Daniel Jordan
Subject: [PATCH 5/7] mm: move zone iterator outside of deferred_init_maxorder()
Date: Thu, 30 Apr 2020 16:11:23 -0400
Message-Id: <20200430201125.532129-6-daniel.m.jordan@oracle.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200430201125.532129-1-daniel.m.jordan@oracle.com>
References: <20200430201125.532129-1-daniel.m.jordan@oracle.com>

padata will soon divide up pfn ranges between threads when parallelizing
deferred init, and deferred_init_maxorder() complicates that by using an
opaque index in addition to start and end pfns.  Move the index outside
the function to make splitting the job easier, and simplify the code
while at it.

deferred_init_maxorder() now always iterates within a single pfn range
instead of potentially multiple ranges, and advances start_pfn to the
end of that range instead of the max-order block so partial pfn ranges
in the block aren't skipped in a later iteration.  The section alignment
check in deferred_grow_zone() is removed as well since this alignment
is no longer guaranteed.  It's not clear what value the alignment
provided originally.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
 mm/page_alloc.c | 88 +++++++++++++++----------------------------------
 1 file changed, 27 insertions(+), 61 deletions(-)
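As a note for reviewers, the chunking the reworked deferred_init_maxorder()
performs can be illustrated outside the kernel.  Below is a minimal
standalone userspace sketch of the same logic, not kernel code:
MAX_ORDER_NR_PAGES, ALIGN(), next_block(), and the example pfn ranges are
stand-ins defined here purely for illustration.

#include <stdio.h>

#define MAX_ORDER_NR_PAGES 1024UL	/* stand-in for the kernel constant */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

/*
 * Mirrors the reworked deferred_init_maxorder(): do one max-order block,
 * clamped to the end of the current pfn range.
 */
static void next_block(unsigned long *start_pfn, unsigned long end_pfn)
{
	unsigned long pfn = ALIGN(*start_pfn + 1, MAX_ORDER_NR_PAGES);

	if (pfn > end_pfn)		/* partial range: stop at its end */
		pfn = end_pfn;
	printf("init and free pfns [%lu, %lu)\n", *start_pfn, pfn);
	*start_pfn = pfn;		/* advance past the block just done */
}

int main(void)
{
	/* Two made-up free memory pfn ranges within a zone. */
	unsigned long ranges[][2] = { { 100, 3000 }, { 5000, 5100 } };

	for (int i = 0; i < 2; i++) {
		unsigned long spfn = ranges[i][0], epfn = ranges[i][1];

		while (spfn < epfn)	/* same shape as the new caller loops */
			next_block(&spfn, epfn);
	}
	return 0;
}

Running this shows the second, short range being chunked on its own rather
than folded into a neighboring max-order block, which is why partial pfn
ranges are no longer skipped.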
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 68669d3a5a665..990514d8f0d94 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1708,55 +1708,23 @@ deferred_init_mem_pfn_range_in_zone(u64 *i, struct zone *zone,
 }
 
 /*
- * Initialize and free pages. We do it in two loops: first we initialize
- * struct page, then free to buddy allocator, because while we are
- * freeing pages we can access pages that are ahead (computing buddy
- * page in __free_one_page()).
- *
- * In order to try and keep some memory in the cache we have the loop
- * broken along max page order boundaries. This way we will not cause
- * any issues with the buddy page computation.
+ * Initialize the struct pages and then free them to the buddy allocator at
+ * most a max order block at a time because while we are freeing pages we can
+ * access pages that are ahead (computing buddy page in __free_one_page()).
+ * It's also cache friendly.
  */
 static unsigned long __init
-deferred_init_maxorder(u64 *i, struct zone *zone, unsigned long *start_pfn,
-		       unsigned long *end_pfn)
+deferred_init_maxorder(struct zone *zone, unsigned long *start_pfn,
+		       unsigned long end_pfn)
 {
-	unsigned long mo_pfn = ALIGN(*start_pfn + 1, MAX_ORDER_NR_PAGES);
-	unsigned long spfn = *start_pfn, epfn = *end_pfn;
-	unsigned long nr_pages = 0;
-	u64 j = *i;
-
-	/* First we loop through and initialize the page values */
-	for_each_free_mem_pfn_range_in_zone_from(j, zone, start_pfn, end_pfn) {
-		unsigned long t;
-
-		if (mo_pfn <= *start_pfn)
-			break;
-
-		t = min(mo_pfn, *end_pfn);
-		nr_pages += deferred_init_pages(zone, *start_pfn, t);
-
-		if (mo_pfn < *end_pfn) {
-			*start_pfn = mo_pfn;
-			break;
-		}
-	}
-
-	/* Reset values and now loop through freeing pages as needed */
-	swap(j, *i);
-
-	for_each_free_mem_pfn_range_in_zone_from(j, zone, &spfn, &epfn) {
-		unsigned long t;
-
-		if (mo_pfn <= spfn)
-			break;
+	unsigned long nr_pages, pfn;
 
-		t = min(mo_pfn, epfn);
-		deferred_free_pages(spfn, t);
+	pfn = ALIGN(*start_pfn + 1, MAX_ORDER_NR_PAGES);
+	pfn = min(pfn, end_pfn);
 
-		if (mo_pfn <= epfn)
-			break;
-	}
+	nr_pages = deferred_init_pages(zone, *start_pfn, pfn);
+	deferred_free_pages(*start_pfn, pfn);
+	*start_pfn = pfn;
 
 	return nr_pages;
 }
@@ -1814,9 +1782,11 @@ static int __init deferred_init_memmap(void *data)
 	 * that we can avoid introducing any issues with the buddy
 	 * allocator.
 	 */
-	while (spfn < epfn) {
-		nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
-		cond_resched();
+	for_each_free_mem_pfn_range_in_zone_from(i, zone, &spfn, &epfn) {
+		while (spfn < epfn) {
+			nr_pages += deferred_init_maxorder(zone, &spfn, epfn);
+			cond_resched();
+		}
 	}
 zone_empty:
 	/* Sanity check that the next zone really is unpopulated */
@@ -1883,22 +1853,18 @@ deferred_grow_zone(struct zone *zone, unsigned int order)
 	 * that we can avoid introducing any issues with the buddy
 	 * allocator.
 	 */
-	while (spfn < epfn) {
-		/* update our first deferred PFN for this section */
-		first_deferred_pfn = spfn;
-
-		nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
-		touch_nmi_watchdog();
-
-		/* We should only stop along section boundaries */
-		if ((first_deferred_pfn ^ spfn) < PAGES_PER_SECTION)
-			continue;
-
-		/* If our quota has been met we can stop here */
-		if (nr_pages >= nr_pages_needed)
-			break;
+	for_each_free_mem_pfn_range_in_zone_from(i, zone, &spfn, &epfn) {
+		while (spfn < epfn) {
+			nr_pages += deferred_init_maxorder(zone, &spfn, epfn);
+			touch_nmi_watchdog();
+
+			/* If our quota has been met we can stop here */
+			if (nr_pages >= nr_pages_needed)
+				goto out;
+		}
 	}
+out:
 	pgdat->first_deferred_pfn = spfn;
 	pgdat_resize_unlock(pgdat, &flags);
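For completeness, the quota early exit in the new deferred_grow_zone()
loop can be sketched the same way.  Again a standalone userspace
illustration with made-up ranges and quota, not the kernel code itself;
it shows the goto unwinding both loops once nr_pages_needed is reached.

#include <stdio.h>

#define MAX_ORDER_NR_PAGES 1024UL	/* stand-in, as in the sketch above */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* Made-up pfn ranges and quota. */
	unsigned long ranges[][2] = { { 0, 4096 }, { 8192, 16384 } };
	unsigned long nr_pages = 0, nr_pages_needed = 3000;

	for (int i = 0; i < 2; i++) {
		unsigned long spfn = ranges[i][0], epfn = ranges[i][1];

		while (spfn < epfn) {
			unsigned long pfn = ALIGN(spfn + 1, MAX_ORDER_NR_PAGES);

			if (pfn > epfn)
				pfn = epfn;
			nr_pages += pfn - spfn;	/* pages "initialized" */
			spfn = pfn;

			/* If our quota has been met we can stop here */
			if (nr_pages >= nr_pages_needed)
				goto out;
		}
	}
out:
	printf("initialized %lu pages, needed %lu\n",
	       nr_pages, nr_pages_needed);
	return 0;
}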