From patchwork Fri Jun 24 12:54:16 2022
From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka,
    Michal Hocko, Hugh Dickins, Yu Zhao, Marek Szyprowski,
    LKML, Linux-MM, Mel Gorman
Subject: [PATCH v5 00/7] Drain remote per-cpu directly
Date: Fri, 24 Jun 2022 13:54:16 +0100
Message-Id: <20220624125423.6126-1-mgorman@techsingularity.net>
This replaces the existing version on mm-unstable. While there are some
fixes, this is mostly a refactoring of patch 5 based on Vlastimil's
feedback to reduce churn in the later patches. The level of refactoring
made -fix patches excessively complicated.

Changelog since v4
o Fix lockdep issues in patch 7
o Refactor patch 5 to reduce churn in patches 6 and 7
o Rebase to 5.19-rc3

Some setups, notably NOHZ_FULL CPUs, may be running realtime or
latency-sensitive applications that cannot tolerate interference due to
per-cpu drain work queued by __drain_all_pages(). Introduce a new
mechanism to remotely drain the per-cpu lists. It is made possible by
remotely taking the new per-cpu spinlocks in 'struct per_cpu_pages'.
This has two advantages: the time to drain is more predictable and
other unrelated tasks are not interrupted.

This series has the same intent as Nicolas' series "mm/page_alloc:
Remote per-cpu lists drain support" -- avoid interfering with a
high-priority task due to a workqueue item draining per-cpu page lists.
While many workloads can tolerate a brief interruption, it may cause a
real-time task running on a NOHZ_FULL CPU to miss a deadline and, at
minimum, the draining is non-deterministic.

Currently an IRQ-safe local_lock protects the page allocator per-cpu
lists. The local_lock on its own prevents migration and the IRQ
disabling protects from corruption due to an interrupt arriving while a
page allocation is in progress.

This series adjusts the locking. A spinlock is added to struct
per_cpu_pages to protect the list contents while local_lock_irq is
ultimately replaced by just the spinlock in the final patch. This
allows a remote CPU to safely drain a remote per-cpu list.

Follow-on work should allow the spin_lock_irqsave to be converted to
spin_lock to avoid IRQs being disabled/enabled in most cases. The
follow-on patch will be one kernel release later as it is relatively
high risk and it will make bisection clearer if there are any problems.

Patch 1 is a cosmetic patch to clarify when page->lru is storing buddy
pages and when it is storing per-cpu pages.

Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly
speaking this is not necessary but it avoids per_cpu_pages consuming
another cache line.

Patch 3 is a preparation patch to avoid code duplication.

Patch 4 is a minor correction.

Patch 5 uses a spin_lock to protect the per_cpu_pages contents while
still relying on local_lock to prevent migration, stabilise the pcp
lookup and prevent IRQ reentrancy.

Patch 6 drains remote per-cpu pages directly instead of using a
workqueue.

Patch 7 uses a normal spinlock instead of local_lock for remote
draining.

Nicolas Saenz Julienne (1):
  mm/page_alloc: Remotely drain per-cpu lists

 include/linux/mm_types.h |   5 +
 include/linux/mmzone.h   |  12 +-
 mm/page_alloc.c          | 386 ++++++++++++++++++++++++---------------
 3 files changed, 250 insertions(+), 153 deletions(-)
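
For readers unfamiliar with the pre-series behaviour, below is a rough
userspace model of what __drain_all_pages() does before patch 6: queue
a work item on every CPU so that each CPU drains its own list,
interrupting whatever it was running. This is a sketch only; the
thread-per-CPU model and every identifier in it are invented for
illustration and none of it is kernel code.

/*
 * Model of the scheme the series replaces: every "CPU" must be
 * interrupted so it can drain its own per-cpu list itself.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4

struct cpu_model {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	bool drain_requested;
	bool done;
	int count;		/* pages held on this CPU's list */
};

static struct cpu_model cpus[NR_CPUS];

/* Each "CPU" waits to be told to drain its own list. */
static void *cpu_thread(void *arg)
{
	struct cpu_model *cpu = arg;

	pthread_mutex_lock(&cpu->lock);
	while (!cpu->drain_requested)
		pthread_cond_wait(&cpu->cond, &cpu->lock); /* interference */
	printf("cpu drains its own %d pages\n", cpu->count);
	cpu->count = 0;
	cpu->done = true;
	pthread_cond_signal(&cpu->cond);
	pthread_mutex_unlock(&cpu->lock);
	return NULL;
}

int main(void)
{
	pthread_t tids[NR_CPUS];

	for (int i = 0; i < NR_CPUS; i++) {
		pthread_mutex_init(&cpus[i].lock, NULL);
		pthread_cond_init(&cpus[i].cond, NULL);
		cpus[i].count = 10 * (i + 1);
		pthread_create(&tids[i], NULL, cpu_thread, &cpus[i]);
	}

	/* Model of __drain_all_pages(): ask every CPU, then wait. */
	for (int i = 0; i < NR_CPUS; i++) {
		pthread_mutex_lock(&cpus[i].lock);
		cpus[i].drain_requested = true;
		pthread_cond_signal(&cpus[i].cond);
		while (!cpus[i].done)
			pthread_cond_wait(&cpus[i].cond, &cpus[i].lock);
		pthread_mutex_unlock(&cpus[i].lock);
	}

	for (int i = 0; i < NR_CPUS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}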
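
By contrast, here is a minimal userspace model of the scheme this
series introduces: each per-cpu list carries its own spinlock, so any
CPU can take the lock and drain the list remotely without interrupting
its owner. Again, every identifier is invented for illustration; this
sketches the idea, not the kernel implementation.

/*
 * Model of patches 5-7: a per-list spinlock makes remote draining
 * safe, so no CPU has to be interrupted to drain its own list.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

struct pcp_model {
	pthread_spinlock_t lock;	/* models the new pcp spinlock */
	int count;			/* pages held on this CPU's list */
};

static struct pcp_model pcp[NR_CPUS];

/* Any caller may drain any CPU's list; the spinlock keeps it safe. */
static void drain_remote(int cpu)
{
	pthread_spin_lock(&pcp[cpu].lock);
	printf("drained %d pages from cpu %d remotely\n",
	       pcp[cpu].count, cpu);
	pcp[cpu].count = 0;
	pthread_spin_unlock(&pcp[cpu].lock);
}

int main(void)
{
	for (int i = 0; i < NR_CPUS; i++) {
		pthread_spin_init(&pcp[i].lock, PTHREAD_PROCESS_PRIVATE);
		pcp[i].count = 10 * (i + 1);
	}

	/* One caller drains every list; no other task is disturbed. */
	for (int i = 0; i < NR_CPUS; i++)
		drain_remote(i);

	return 0;
}

Both sketches build with 'cc -pthread'. The drain work becomes a plain
lock acquisition by the draining CPU, which is why the time to drain is
predictable and the list's owner is never interrupted.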