From patchwork Mon Jun 13 12:56:15 2022
X-Patchwork-Submitter: Mel Gorman
X-Patchwork-Id: 12879616
From: Mel Gorman
To: Andrew Morton
Cc: Nicolas Saenz Julienne, Marcelo Tosatti, Vlastimil Babka, Michal Hocko,
    Hugh Dickins, LKML, Linux-MM, Mel Gorman
Subject: [PATCH v4 00/7] Drain remote per-cpu directly
Date: Mon, 13 Jun 2022 13:56:15 +0100
Message-Id: <20220613125622.18628-1-mgorman@techsingularity.net>
This replaces the existing version on mm-unstable. The biggest difference
is that the last patch replaces local_lock entirely. The other changes are
minor fixes reported by Hugh and Vlastimil.

Changelog since v3
o Checkpatch fixes from mm-unstable (akpm)
o Replace local_lock with spinlock (akpm)
o Remove IRQ-disabled check in free_unref_page_list as it triggers a
  false positive (hughd)
o Take an unlikely check out of the rmqueue fast path (vbabka)

Some setups, notably NOHZ_FULL CPUs, may be running realtime or
latency-sensitive applications that cannot tolerate interference due to
per-cpu drain work queued by __drain_all_pages(). Introduce a new
mechanism to remotely drain the per-cpu lists. It is made possible by
remotely locking the new per-cpu spinlocks in 'struct per_cpu_pages'.
This has two advantages: the time to drain is more predictable and
unrelated tasks are not interrupted.

This series has the same intent as Nicolas' series "mm/page_alloc: Remote
per-cpu lists drain support" -- avoid interference with a high-priority
task due to a workqueue item draining per-cpu page lists. While many
workloads can tolerate a brief interruption, it may cause a real-time task
running on a NOHZ_FULL CPU to miss a deadline and, at minimum, the
draining is non-deterministic.

Currently an IRQ-safe local_lock protects the page allocator per-cpu
lists. The local_lock on its own prevents migration and the IRQ disabling
protects from corruption due to an interrupt arriving while a page
allocation is in progress.

This series adjusts the locking. A spinlock is added to struct
per_cpu_pages to protect the list contents while local_lock_irq is
ultimately replaced by just the spinlock in the final patch. This allows
a remote CPU to safely drain a remote per-cpu list (a simplified sketch
of the idea follows the diffstat). Follow-on work should allow the
spin_lock_irqsave to be converted to spin_lock to avoid IRQs being
disabled/enabled in most cases.

Patch 1 is a cosmetic patch to clarify when page->lru is storing buddy
pages and when it is storing per-cpu pages.

Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly
speaking this is not necessary but it avoids per_cpu_pages consuming
another cache line.

Patch 3 is a preparation patch to avoid code duplication.

Patch 4 is a simple micro-optimisation that improves code flow, which is
necessary for a later patch to avoid code duplication.

Patch 5 uses a spin_lock to protect the per_cpu_pages contents while
still relying on local_lock to prevent migration, stabilise the pcp
lookup and prevent IRQ reentrancy.

Patch 6 remote drains per-cpu pages directly instead of using a workqueue
(see the drain sketch after the diffstat).

Patch 7 uses a normal spinlock instead of local_lock for remote draining.

 include/linux/mm_types.h |   5 +
 include/linux/mmzone.h   |  12 +-
 mm/page_alloc.c          | 404 ++++++++++++++++++++++++---------------
 3 files changed, 266 insertions(+), 155 deletions(-)
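
For illustration, here is a minimal sketch of the locking idea in patches
5-7: a spinlock embedded in the per-cpu structure protects the list
contents, so the owning CPU and any remote CPU serialise on the same lock.
This is not the actual mm/page_alloc.c code; the structure layout and the
pcp_free_local() helper below are simplified stand-ins.

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/mm_types.h>	/* struct page */

/* Simplified stand-in for struct per_cpu_pages once the series is applied. */
struct pcp_lists {
	spinlock_t lock;		/* protects count and lists */
	int count;			/* number of pages on the lists */
	struct list_head lists[1];	/* real code has one list per order/migratetype */
};

/* Local free path: the owning CPU takes its own pcp lock. */
static void pcp_free_local(struct pcp_lists *pcp, struct page *page)
{
	unsigned long flags;

	spin_lock_irqsave(&pcp->lock, flags);
	list_add(&page->lru, &pcp->lists[0]);
	pcp->count++;
	spin_unlock_irqrestore(&pcp->lock, flags);
}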
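
Building on the same simplified structure, the remote drain in patch 6 can
be pictured as the draining CPU walking every other CPU's pcp and taking
that pcp's spinlock directly, instead of queueing a drain item for each
CPU to run on itself. per_cpu_ptr() and for_each_online_cpu() are real
kernel APIs, but pcp_release_pages() is a hypothetical stand-in for the
bulk-free helper that returns pages to the buddy allocator.

#include <linux/percpu.h>
#include <linux/cpumask.h>

/* Hypothetical helper: frees everything on the (locked) pcp back to the buddy lists. */
static void pcp_release_pages(struct pcp_lists *pcp);

/* Remote drain: no IPIs and no workqueue items, just the remote pcp lock. */
static void drain_all_pcp_lists(struct pcp_lists __percpu *pcpus)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct pcp_lists *pcp = per_cpu_ptr(pcpus, cpu);
		unsigned long flags;

		spin_lock_irqsave(&pcp->lock, flags);
		if (pcp->count)
			pcp_release_pages(pcp);
		spin_unlock_irqrestore(&pcp->lock, flags);
	}
}

Because the lock is taken remotely, the CPU being drained never has to
stop what it is doing, which is what makes the drain latency predictable
for NOHZ_FULL workloads.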