From patchwork Mon Oct 16 05:29:58 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 13422492
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Arjan Van De Ven, Huang Ying, Mel Gorman, Vlastimil Babka,
	David Hildenbrand, Johannes Weiner, Dave Hansen, Michal Hocko,
	Pavel Tatashin, Matthew Wilcox, Christoph Lameter
Subject: [PATCH -V3 5/9] mm, page_alloc: scale the number of pages that are
 batch allocated
Date: Mon, 16 Oct 2023 13:29:58 +0800
Message-Id: <20231016053002.756205-6-ying.huang@intel.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20231016053002.756205-1-ying.huang@intel.com>
References: <20231016053002.756205-1-ying.huang@intel.com>
MIME-Version: 1.0

When a task allocates a large number of order-0 pages, it may acquire
zone->lock multiple times, pulling pages from the buddy allocator in
batches.  This contends on the zone lock more than necessary when a very
large number of pages is allocated.  This patch adapts the batch size to
the recent allocation pattern: consecutive order-0 allocations without
intervening frees double the batch used for subsequent refills.
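(Not part of the patch: a minimal userspace sketch of the scaling
dynamic described above.  PCP_BATCH_SCALE_MAX, the struct layout, and
count == 0 on every refill are simplifying assumptions; the kernel uses
CONFIG_PCP_BATCH_SCALE_MAX and live per-CPU pages (PCP) state.)

	#include <stdio.h>

	#define PCP_BATCH_SCALE_MAX 5	/* stand-in for CONFIG_PCP_BATCH_SCALE_MAX */

	struct pcp_model {
		int high;		/* high watermark */
		int batch;		/* base chunk size */
		int count;		/* pages currently held on the PCP lists */
		int alloc_factor;	/* batch scaling factor during allocate */
	};

	/* Mirrors the order-0 branch of nr_pcp_alloc() introduced below. */
	static int model_nr_alloc(struct pcp_model *pcp)
	{
		int max_nr_alloc = pcp->high - pcp->count - pcp->batch;
		int batch;

		if (max_nr_alloc < pcp->batch)
			max_nr_alloc = pcp->batch;

		batch = pcp->batch << pcp->alloc_factor;
		if (batch <= max_nr_alloc && pcp->alloc_factor < PCP_BATCH_SCALE_MAX)
			pcp->alloc_factor++;	/* double the next request */

		return batch < max_nr_alloc ? batch : max_nr_alloc;
	}

	int main(void)
	{
		/* high == 367 matches the test below; batch == 63 is assumed */
		struct pcp_model pcp = { .high = 367, .batch = 63 };
		int i;

		/*
		 * Consecutive order-0 refills with no intervening frees
		 * print 63, 126, 252, 304, 304: doubling, then clamped so
		 * the PCP is never refilled past its high watermark.
		 */
		for (i = 0; i < 5; i++)
			printf("refill %d: %d pages\n", i, model_nr_alloc(&pcp));
		return 0;
	}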
On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`), one per cgroup.  This
simulates the kbuild servers used by the 0-Day kbuild service.  With the
patch, the cycles% spent on spinlock contention (mostly on the zone
lock) decreases from 12.6% to 11.0% (with PCP size == 367).

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Mel Gorman
Acked-by: Mel Gorman
Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: David Hildenbrand
Cc: Johannes Weiner
Cc: Dave Hansen
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Matthew Wilcox
Cc: Christoph Lameter
---
 include/linux/mmzone.h |  3 ++-
 mm/page_alloc.c        | 53 ++++++++++++++++++++++++++++++++++--------
 2 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index cdff247e8c6f..ba548ae20686 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -697,9 +697,10 @@ struct per_cpu_pages {
 	int high;		/* high watermark, emptying needed */
 	int batch;		/* chunk size for buddy add/remove */
 	u8 flags;		/* protected by pcp->lock */
+	u8 alloc_factor;	/* batch scaling factor during allocate */
 	u8 free_factor;		/* batch scaling factor during free */
 #ifdef CONFIG_NUMA
-	short expire;		/* When 0, remote pagesets are drained */
+	u8 expire;		/* When 0, remote pagesets are drained */
 #endif
 
 	/* Lists of pages, one per migrate type stored on the pcp-lists */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a5a5a4c3cd2b..eeef0ead1c2a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2373,6 +2373,12 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	int pindex;
 	bool free_high = false;
 
+	/*
+	 * On freeing, reduce the number of pages that are batch allocated.
+	 * See nr_pcp_alloc() where alloc_factor is increased for subsequent
+	 * allocations.
+	 */
+	pcp->alloc_factor >>= 1;
 	__count_vm_events(PGFREE, 1 << order);
 	pindex = order_to_pindex(migratetype, order);
 	list_add(&page->pcp_list, &pcp->lists[pindex]);
@@ -2679,6 +2685,42 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 	return page;
 }
 
+static int nr_pcp_alloc(struct per_cpu_pages *pcp, int order)
+{
+	int high, batch, max_nr_alloc;
+
+	high = READ_ONCE(pcp->high);
+	batch = READ_ONCE(pcp->batch);
+
+	/* Check for PCP disabled or boot pageset */
+	if (unlikely(high < batch))
+		return 1;
+
+	/*
+	 * Double the number of pages allocated each time there is subsequent
+	 * allocation of order-0 pages without any freeing.
+	 */
+	if (!order) {
+		max_nr_alloc = max(high - pcp->count - batch, batch);
+		batch <<= pcp->alloc_factor;
+		if (batch <= max_nr_alloc &&
+		    pcp->alloc_factor < CONFIG_PCP_BATCH_SCALE_MAX)
+			pcp->alloc_factor++;
+		batch = min(batch, max_nr_alloc);
+	}
+
+	/*
+	 * Scale batch relative to order if batch implies free pages
+	 * can be stored on the PCP. Batch can be 1 for small zones or
+	 * for boot pagesets which should never store free pages as
+	 * the pages may belong to arbitrary zones.
+	 */
+	if (batch > 1)
+		batch = max(batch >> order, 2);
+
+	return batch;
+}
+
 /* Remove page from the per-cpu list, caller must protect the list */
 static inline
 struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
@@ -2691,18 +2733,9 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 
 	do {
 		if (list_empty(list)) {
-			int batch = READ_ONCE(pcp->batch);
+			int batch = nr_pcp_alloc(pcp, order);
 			int alloced;
 
-			/*
-			 * Scale batch relative to order if batch implies
-			 * free pages can be stored on the PCP.  Batch can
-			 * be 1 for small zones or for boot pagesets which
-			 * should never store free pages as the pages may
-			 * belong to arbitrary zones.
-			 */
-			if (batch > 1)
-				batch = max(batch >> order, 2);
-
 			alloced = rmqueue_bulk(zone, order, batch,
 					list, migratetype, alloc_flags);
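
(Illustrative walk-through, not part of the patch; batch == 63 is an
assumed value, high == 367 is the PCP size from the test above.)  The
first order-0 refill returns batch == 63 and bumps alloc_factor to 1;
refills that follow without any intervening free request 126, then 252,
and are then clamped to max_nr_alloc == 367 - 0 - 63 == 304, so a refill
can never push the PCP past its high watermark.  Every
free_unref_page_commit() halves alloc_factor, so a mixed
allocation/freeing pattern quickly decays back to the base batch; only a
sustained run of pure allocations earns the larger, lock-amortizing
batches.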