From patchwork Thu Apr 1 21:42:59 2021
X-Patchwork-Submitter: Roman Gushchin
X-Patchwork-Id: 12179639
From: Roman Gushchin
To: Dennis Zhou
CC: Tejun Heo, Christoph Lameter, Andrew Morton, Roman Gushchin
Subject: [PATCH v1 3/5] percpu: generalize pcpu_balance_populated()
Date: Thu, 1 Apr 2021 14:42:59 -0700
Message-ID: <20210401214301.1689099-4-guro@fb.com>
In-Reply-To: <20210401214301.1689099-1-guro@fb.com>
References: <20210401214301.1689099-1-guro@fb.com>

To prepare for the depopulation of percpu chunks, split the populating
part of pcpu_balance_populated() out into a new function,
pcpu_grow_populated() (with the intention of adding pcpu_shrink_populated()
in a following commit). The goal of pcpu_balance_populated() is to
determine whether there is a shortage or an excess of empty percpu pages
and to call into the corresponding function.

pcpu_grow_populated() takes the desired number of pages as an argument
(nr_to_pop). If it creates a new chunk, nr_to_pop must be updated to
reflect that the new chunk can come back already populated; otherwise an
infinite loop can occur.

Signed-off-by: Roman Gushchin
---
 mm/percpu.c | 63 +++++++++++++++++++++++++++++++++--------------------
 1 file changed, 39 insertions(+), 24 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 0eeeb4e7a2f9..25a181328353 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1976,7 +1976,7 @@ static void pcpu_balance_free(enum pcpu_chunk_type type)
 }
 
 /**
- * pcpu_balance_populated - manage the amount of populated pages
+ * pcpu_grow_populated - populate chunk(s) to satisfy atomic allocations
  * @type: chunk type
  *
  * Maintain a certain amount of populated pages to satisfy atomic allocations.
@@ -1985,35 +1985,15 @@ static void pcpu_balance_free(enum pcpu_chunk_type type)
 * allocation causes the failure as it is possible that requests can be
 * serviced from already backed regions.
 */
-static void pcpu_balance_populated(enum pcpu_chunk_type type)
+static void pcpu_grow_populated(enum pcpu_chunk_type type, int nr_to_pop)
 {
	/* gfp flags passed to underlying allocators */
	const gfp_t gfp = GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN;
	struct list_head *pcpu_slot = pcpu_chunk_list(type);
	struct pcpu_chunk *chunk;
-	int slot, nr_to_pop, ret;
+	int slot, ret;
 
-	/*
-	 * Ensure there are certain number of free populated pages for
-	 * atomic allocs. Fill up from the most packed so that atomic
-	 * allocs don't increase fragmentation. If atomic allocation
-	 * failed previously, always populate the maximum amount. This
-	 * should prevent atomic allocs larger than PAGE_SIZE from keeping
-	 * failing indefinitely; however, large atomic allocs are not
-	 * something we support properly and can be highly unreliable and
-	 * inefficient.
-	 */
 retry_pop:
-	if (pcpu_atomic_alloc_failed) {
-		nr_to_pop = PCPU_EMPTY_POP_PAGES_HIGH;
-		/* best effort anyway, don't worry about synchronization */
-		pcpu_atomic_alloc_failed = false;
-	} else {
-		nr_to_pop = clamp(PCPU_EMPTY_POP_PAGES_HIGH -
-				  pcpu_nr_empty_pop_pages[type],
-				  0, PCPU_EMPTY_POP_PAGES_HIGH);
-	}
-
 	for (slot = pcpu_size_to_slot(PAGE_SIZE); slot < pcpu_nr_slots; slot++) {
 		unsigned int nr_unpop = 0, rs, re;
 
@@ -2057,12 +2037,47 @@ static void pcpu_balance_populated(enum pcpu_chunk_type type)
 		if (chunk) {
 			spin_lock_irq(&pcpu_lock);
 			pcpu_chunk_relocate(chunk, -1);
+			nr_to_pop = max_t(int, 0, nr_to_pop - chunk->nr_populated);
 			spin_unlock_irq(&pcpu_lock);
-			goto retry_pop;
+			if (nr_to_pop)
+				goto retry_pop;
 		}
 	}
 }
 
+/**
+ * pcpu_balance_populated - manage the amount of populated pages
+ * @type: chunk type
+ *
+ * Populate or depopulate chunks to maintain a certain amount
+ * of free pages to satisfy atomic allocations, but not waste
+ * large amounts of memory.
+ */
+static void pcpu_balance_populated(enum pcpu_chunk_type type)
+{
+	int nr_to_pop;
+
+	/*
+	 * Ensure there are certain number of free populated pages for
+	 * atomic allocs. Fill up from the most packed so that atomic
+	 * allocs don't increase fragmentation. If atomic allocation
+	 * failed previously, always populate the maximum amount. This
+	 * should prevent atomic allocs larger than PAGE_SIZE from keeping
+	 * failing indefinitely; however, large atomic allocs are not
+	 * something we support properly and can be highly unreliable and
+	 * inefficient.
+	 */
+	if (pcpu_atomic_alloc_failed) {
+		nr_to_pop = PCPU_EMPTY_POP_PAGES_HIGH;
+		/* best effort anyway, don't worry about synchronization */
+		pcpu_atomic_alloc_failed = false;
+		pcpu_grow_populated(type, nr_to_pop);
+	} else if (pcpu_nr_empty_pop_pages[type] < PCPU_EMPTY_POP_PAGES_HIGH) {
+		nr_to_pop = PCPU_EMPTY_POP_PAGES_HIGH - pcpu_nr_empty_pop_pages[type];
+		pcpu_grow_populated(type, nr_to_pop);
+	}
+}
+
 /**
  * pcpu_balance_workfn - manage the amount of free chunks and populated pages
  * @work: unused
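
[Editor's note] For readers less familiar with the percpu balancing path, below is a
small standalone user-space sketch (not part of the patch) of the nr_to_pop
bookkeeping described in the commit message: when a freshly created chunk comes
back already populated, nr_to_pop has to be reduced by the number of
pre-populated pages, or the retry loop never terminates. The helper
fake_populate() and the constant NEW_CHUNK_PREPOPULATED are hypothetical names
used only for illustration; the real logic is the chunk->nr_populated
adjustment in pcpu_grow_populated() above.

/*
 * Standalone sketch (not part of the patch): why nr_to_pop must be
 * decremented when a newly created chunk is already populated.
 * fake_populate() and NEW_CHUNK_PREPOPULATED are made-up names.
 */
#include <stdio.h>

#define NEW_CHUNK_PREPOPULATED	4	/* pages a fresh chunk already has */

/* Pretend to populate up to 'want' pages from existing chunks. */
static int fake_populate(int want)
{
	(void)want;
	return 0;	/* assume existing chunks have no room left */
}

static void grow_populated(int nr_to_pop)
{
	while (nr_to_pop > 0) {
		nr_to_pop -= fake_populate(nr_to_pop);
		if (nr_to_pop <= 0)
			break;

		/*
		 * No room in existing chunks: create a new one. It comes
		 * back already populated, so account for those pages here.
		 * Skipping this adjustment would loop forever, because
		 * fake_populate() keeps returning 0.
		 */
		printf("creating a new chunk, %d page(s) still wanted\n",
		       nr_to_pop);
		nr_to_pop -= NEW_CHUNK_PREPOPULATED;
		if (nr_to_pop < 0)
			nr_to_pop = 0;
	}
}

int main(void)
{
	grow_populated(8);	/* terminates after two simulated chunks */
	return 0;
}

With the adjustment in place, grow_populated(8) finishes after simulating two
new chunks; dropping the nr_to_pop decrement would make the loop spin forever,
which is exactly the failure mode the patch guards against.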