From patchwork Tue Mar 22 21:43:38 2022
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12789140
Date: Tue, 22 Mar 2022 14:43:38 -0700
To: vbabka@suse.cz, mhocko@kernel.org, dave.hansen@linux.intel.com,
 brouer@redhat.com, aaron.lu@intel.com, mgorman@techsingularity.net,
 akpm@linux-foundation.org, patches@lists.linux.dev, linux-mm@kvack.org,
 mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
 akpm@linux-foundation.org
From: Andrew Morton
In-Reply-To: <20220322143803.04a5e59a07e48284f196a2f9@linux-foundation.org>
Subject: [patch 101/227] mm/page_alloc: drain the requested list first during bulk free
Message-Id: <20220322214339.8FFC5C340EC@smtp.kernel.org>

From: Mel Gorman
Subject: mm/page_alloc: drain the requested list first during bulk free

Prior to the series, pindex 0 (order-0 MIGRATE_UNMOVABLE) was always
skipped first and the precise reason is forgotten.
A potential reason may have been to artificially preserve MIGRATE_UNMOVABLE
but there is no reason why that would be optimal as it depends on the
workload.  The more likely reason is that it was less complicated to do a
pre-increment instead of a post-increment in terms of overall code flow.

As free_pcppages_bulk() now typically receives the pindex of the PCP list
that exceeded high, always start draining that list.

Link: https://lkml.kernel.org/r/20220217002227.5739-5-mgorman@techsingularity.net
Signed-off-by: Mel Gorman
Reviewed-by: Vlastimil Babka
Tested-by: Aaron Lu
Cc: Dave Hansen
Cc: Jesper Dangaard Brouer
Cc: Michal Hocko
Signed-off-by: Andrew Morton
---

 mm/page_alloc.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/mm/page_alloc.c~mm-page_alloc-drain-the-requested-list-first-during-bulk-free
+++ a/mm/page_alloc.c
@@ -1460,6 +1460,10 @@ static void free_pcppages_bulk(struct zo
 	 * below while (list_empty(list)) loop.
 	 */
 	count = min(pcp->count, count);
+
+	/* Ensure requested pindex is drained first. */
+	pindex = pindex - 1;
+
 	while (count > 0) {
 		struct list_head *list;
 		int nr_pages;
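
For context, the selection loop inside free_pcppages_bulk() pre-increments
pindex before picking a list, so decrementing the requested pindex up front
makes that list the first one drained.  The stand-alone sketch below is not
kernel code: NR_LISTS and show_drain_order() are illustrative stand-ins, and
the real function's empty-list skipping and min_pindex/max_pindex bookkeeping
are omitted.  It only demonstrates the drain order that results from the
"pindex - 1 then pre-increment" pattern.

#include <stdio.h>

/* Illustrative stand-in for the number of PCP lists; the real value
 * depends on MIGRATE_PCPTYPES and the number of PCP orders. */
#define NR_LISTS 4

/*
 * Mimic the round-robin selection in free_pcppages_bulk(): pindex is
 * pre-incremented before a list is chosen, so subtracting one from the
 * requested pindex beforehand makes that list the first one visited.
 */
static void show_drain_order(int requested_pindex)
{
	int pindex = requested_pindex - 1;	/* "drain the requested list first" */
	int visited;

	printf("requested pindex %d -> drain order:", requested_pindex);
	for (visited = 0; visited < NR_LISTS; visited++) {
		if (++pindex >= NR_LISTS)	/* pre-increment, then wrap */
			pindex = 0;
		printf(" %d", pindex);
	}
	printf("\n");
}

int main(void)
{
	show_drain_order(0);	/* prints: 0 1 2 3 */
	show_drain_order(2);	/* prints: 2 3 0 1 */
	return 0;
}

Built with any C compiler, show_drain_order(2) prints "2 3 0 1": the list
whose high watermark was exceeded is emptied first, before the walk wraps
around to the remaining lists.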