From patchwork Thu Dec 12 18:02:04 2024
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13905762
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, "Paul E. McKenney", Andrew Morton, Vlastimil Babka
Cc: RCU, LKML, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 1/5] rcu/kvfree: Initialize kvfree_rcu() separately
Date: Thu, 12 Dec 2024 19:02:04 +0100
Message-Id: <20241212180208.274813-2-urezki@gmail.com>
In-Reply-To: <20241212180208.274813-1-urezki@gmail.com>
References: <20241212180208.274813-1-urezki@gmail.com>

Introduce a separate initialization of the kvfree_rcu() functionality.
To that end, kfree_rcu_batch_init() is renamed to kvfree_rcu_init(), and
it is now invoked from main.c right after rcu_init() is done.
Signed-off-by: Uladzislau Rezki (Sony)
---
 include/linux/rcupdate.h | 1 +
 init/main.c              | 1 +
 kernel/rcu/tree.c        | 3 +--
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 48e5c03df1dd..acb0095b4dbe 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -118,6 +118,7 @@ static inline void call_rcu_hurry(struct rcu_head *head, rcu_callback_t func)
 
 /* Internal to kernel */
 void rcu_init(void);
+void __init kvfree_rcu_init(void);
 extern int rcu_scheduler_active;
 void rcu_sched_clock_irq(int user);
diff --git a/init/main.c b/init/main.c
index 00fac1170294..893cb77aef22 100644
--- a/init/main.c
+++ b/init/main.c
@@ -992,6 +992,7 @@ void start_kernel(void)
         workqueue_init_early();
 
         rcu_init();
+        kvfree_rcu_init();
 
         /* Trace events are available after this */
         trace_init();
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index ff98233d4aa5..e69b867de8ef 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -5648,7 +5648,7 @@ static void __init rcu_dump_rcu_node_tree(void)
 
 struct workqueue_struct *rcu_gp_wq;
 
-static void __init kfree_rcu_batch_init(void)
+void __init kvfree_rcu_init(void)
 {
         int cpu;
         int i, j;
@@ -5703,7 +5703,6 @@ void __init rcu_init(void)
 
         rcu_early_boot_tests();
-        kfree_rcu_batch_init();
         rcu_bootup_announce();
         sanitize_kthread_prio();
         rcu_init_geometry();
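
For readers following the series, here is a brief illustrative sketch, not
part of the patch itself, of the kvfree_rcu() API whose batching machinery
kvfree_rcu_init() sets up ("struct foo" and foo_release() are hypothetical):

struct foo {
        int data;
        struct rcu_head rcu;    /* required by the two-argument kvfree_rcu() form */
};

static void foo_release(struct foo *f)
{
        /*
         * Queue f for freeing after a grace period. The per-CPU batching
         * state initialized by kvfree_rcu_init() collects such requests
         * and frees them in bulk later.
         */
        kvfree_rcu(f, rcu);
}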
From patchwork Thu Dec 12 18:02:05 2024
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13905761

From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, "Paul E. McKenney", Andrew Morton, Vlastimil Babka
Cc: RCU, LKML, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 2/5] rcu/kvfree: Move some functions under CONFIG_TINY_RCU
Date: Thu, 12 Dec 2024 19:02:05 +0100
Message-Id: <20241212180208.274813-3-urezki@gmail.com>
In-Reply-To: <20241212180208.274813-1-urezki@gmail.com>
References: <20241212180208.274813-1-urezki@gmail.com>

Currently, when Tiny RCU is enabled, the tree.c file is not compiled, so
duplicate function names cannot conflict with each other. Because the
kvfree_rcu() functionality is being moved into SLAB, some functions have
to be reordered and placed together under a CONFIG_TINY_RCU guard, so
that their names do not conflict when a kernel is built for the
CONFIG_TINY_RCU flavor.
Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 90 +++++++++++++++++++++++++----------------------
 1 file changed, 47 insertions(+), 43 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index e69b867de8ef..b3853ae6e869 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3653,16 +3653,6 @@ static void kfree_rcu_monitor(struct work_struct *work)
                 schedule_delayed_monitor_work(krcp);
 }
 
-static enum hrtimer_restart
-schedule_page_work_fn(struct hrtimer *t)
-{
-        struct kfree_rcu_cpu *krcp =
-                container_of(t, struct kfree_rcu_cpu, hrtimer);
-
-        queue_delayed_work(system_highpri_wq, &krcp->page_cache_work, 0);
-        return HRTIMER_NORESTART;
-}
-
 static void fill_page_cache_func(struct work_struct *work)
 {
         struct kvfree_rcu_bulk_data *bnode;
@@ -3698,27 +3688,6 @@ static void fill_page_cache_func(struct work_struct *work)
         atomic_set(&krcp->backoff_page_cache_fill, 0);
 }
 
-static void
-run_page_cache_worker(struct kfree_rcu_cpu *krcp)
-{
-        // If cache disabled, bail out.
-        if (!rcu_min_cached_objs)
-                return;
-
-        if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
-                        !atomic_xchg(&krcp->work_in_progress, 1)) {
-                if (atomic_read(&krcp->backoff_page_cache_fill)) {
-                        queue_delayed_work(system_unbound_wq,
-                                &krcp->page_cache_work,
-                                        msecs_to_jiffies(rcu_delay_page_cache_fill_msec));
-                } else {
-                        hrtimer_init(&krcp->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-                        krcp->hrtimer.function = schedule_page_work_fn;
-                        hrtimer_start(&krcp->hrtimer, 0, HRTIMER_MODE_REL);
-                }
-        }
-}
-
 // Record ptr in a page managed by krcp, with the pre-krc_this_cpu_lock()
 // state specified by flags. If can_alloc is true, the caller must
 // be schedulable and not be holding any locks or mutexes that might be
@@ -3779,6 +3748,51 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
         return true;
 }
 
+#if !defined(CONFIG_TINY_RCU)
+
+static enum hrtimer_restart
+schedule_page_work_fn(struct hrtimer *t)
+{
+        struct kfree_rcu_cpu *krcp =
+                container_of(t, struct kfree_rcu_cpu, hrtimer);
+
+        queue_delayed_work(system_highpri_wq, &krcp->page_cache_work, 0);
+        return HRTIMER_NORESTART;
+}
+
+static void
+run_page_cache_worker(struct kfree_rcu_cpu *krcp)
+{
+        // If cache disabled, bail out.
+        if (!rcu_min_cached_objs)
+                return;
+
+        if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
+                        !atomic_xchg(&krcp->work_in_progress, 1)) {
+                if (atomic_read(&krcp->backoff_page_cache_fill)) {
+                        queue_delayed_work(system_unbound_wq,
+                                &krcp->page_cache_work,
+                                        msecs_to_jiffies(rcu_delay_page_cache_fill_msec));
+                } else {
+                        hrtimer_init(&krcp->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+                        krcp->hrtimer.function = schedule_page_work_fn;
+                        hrtimer_start(&krcp->hrtimer, 0, HRTIMER_MODE_REL);
+                }
+        }
+}
+
+void __init kfree_rcu_scheduler_running(void)
+{
+        int cpu;
+
+        for_each_possible_cpu(cpu) {
+                struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
+
+                if (need_offload_krc(krcp))
+                        schedule_delayed_monitor_work(krcp);
+        }
+}
+
 /*
  * Queue a request for lazy invocation of the appropriate free routine
  * after a grace period. Please note that three paths are maintained,
@@ -3944,6 +3958,8 @@ void kvfree_rcu_barrier(void)
 }
 EXPORT_SYMBOL_GPL(kvfree_rcu_barrier);
 
+#endif /* #if !defined(CONFIG_TINY_RCU) */
+
 static unsigned long
 kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 {
@@ -3985,18 +4001,6 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
         return freed == 0 ? SHRINK_STOP : freed;
 }
 
-void __init kfree_rcu_scheduler_running(void)
-{
-        int cpu;
-
-        for_each_possible_cpu(cpu) {
-                struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);
-
-                if (need_offload_krc(krcp))
-                        schedule_delayed_monitor_work(krcp);
-        }
-}
-
 /*
  * During early boot, any blocking grace-period wait automatically
  * implies a grace period.
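
As an orientation aid only (a condensed sketch, not additional code in the
patch), the region of tree.c that ends up inside the new guard looks like this:

#if !defined(CONFIG_TINY_RCU)

static enum hrtimer_restart schedule_page_work_fn(struct hrtimer *t);
static void run_page_cache_worker(struct kfree_rcu_cpu *krcp);
void __init kfree_rcu_scheduler_running(void);
/* ... kvfree_call_rcu(), kvfree_rcu_barrier() ... */

#endif /* #if !defined(CONFIG_TINY_RCU) */

Because this block is compiled only for the Tree-RCU flavor, the same symbol
names can later be provided for the CONFIG_TINY_RCU build without clashing.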
From patchwork Thu Dec 12 18:02:06 2024
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13905763

From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, "Paul E. McKenney", Andrew Morton, Vlastimil Babka
Cc: RCU, LKML, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 3/5] rcu/kvfree: Adjust names passed into trace functions
Date: Thu, 12 Dec 2024 19:02:06 +0100
Message-Id: <20241212180208.274813-4-urezki@gmail.com>
In-Reply-To: <20241212180208.274813-1-urezki@gmail.com>
References: <20241212180208.274813-1-urezki@gmail.com>

Currently the trace functions are supplied with the "rcu_state.name"
member, which lives inside the rcu_state structure. The problem is that
the "rcu_state" variable is file-local and cannot be accessed from
another place. To address this, this preparation patch passes a constant
"slab" string as the first argument instead.

Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b3853ae6e869..6ab21655c248 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3379,14 +3379,14 @@ kvfree_rcu_bulk(struct kfree_rcu_cpu *krcp,
         rcu_lock_acquire(&rcu_callback_map);
         if (idx == 0) { // kmalloc() / kfree().
                 trace_rcu_invoke_kfree_bulk_callback(
-                        rcu_state.name, bnode->nr_records,
+                        "slab", bnode->nr_records,
                         bnode->records);
 
                 kfree_bulk(bnode->nr_records, bnode->records);
         } else { // vmalloc() / vfree().
                 for (i = 0; i < bnode->nr_records; i++) {
                         trace_rcu_invoke_kvfree_callback(
-                                rcu_state.name, bnode->records[i], 0);
+                                "slab", bnode->records[i], 0);
 
                         vfree(bnode->records[i]);
                 }
@@ -3417,7 +3417,7 @@ kvfree_rcu_list(struct rcu_head *head)
                 next = head->next;
                 debug_rcu_head_unqueue((struct rcu_head *)ptr);
                 rcu_lock_acquire(&rcu_callback_map);
-                trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset);
+                trace_rcu_invoke_kvfree_callback("slab", head, offset);
 
                 if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset)))
                         kvfree(ptr);
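
For context, a simplified sketch (the initializer below is illustrative, not
a quote of the kernel source) of why a constant string is needed: rcu_state
is file-local to the RCU core, so code that later lives in mm/slab_common.c
has no way to reach rcu_state.name:

/* kernel/rcu/tree.c, simplified: not visible outside the RCU core. */
static struct rcu_state rcu_state = {
        .name = "rcu_preempt",  /* illustrative value for a preemptible build */
};

/* Hence the relocated callers pass a fixed identifier instead: */
trace_rcu_invoke_kvfree_callback("slab", head, offset);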
From patchwork Thu Dec 12 18:02:07 2024
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13905764

From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, "Paul E. McKenney", Andrew Morton, Vlastimil Babka
Cc: RCU, LKML, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 4/5] rcu/kvfree: Adjust a shrinker name
Date: Thu, 12 Dec 2024 19:02:07 +0100
Message-Id: <20241212180208.274813-5-urezki@gmail.com>
In-Reply-To: <20241212180208.274813-1-urezki@gmail.com>
References: <20241212180208.274813-1-urezki@gmail.com>

Rename the "rcu-kfree" shrinker to "slab-kvfree-rcu", since this code
moves to the slab_common.c file soon.
Signed-off-by: Uladzislau Rezki (Sony)
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 6ab21655c248..b7ec998f360e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -5689,7 +5689,7 @@ void __init kvfree_rcu_init(void)
                 krcp->initialized = true;
         }
 
-        kfree_rcu_shrinker = shrinker_alloc(0, "rcu-kfree");
+        kfree_rcu_shrinker = shrinker_alloc(0, "slab-kvfree-rcu");
         if (!kfree_rcu_shrinker) {
                 pr_err("Failed to allocate kfree_rcu() shrinker!\n");
                 return;
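
A short note on where the new name becomes visible: the string passed to
shrinker_alloc() identifies the shrinker, for example in the shrinker debugfs
interface when CONFIG_SHRINKER_DEBUG is enabled. A minimal sketch of the
registration pattern used here, assuming the count/scan callbacks defined
earlier in this file:

struct shrinker *s;

s = shrinker_alloc(0, "slab-kvfree-rcu");
if (s) {
        s->count_objects = kfree_rcu_shrink_count;
        s->scan_objects = kfree_rcu_shrink_scan;
        shrinker_register(s);
}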
From patchwork Thu Dec 12 18:02:08 2024
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13905765

From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, "Paul E. McKenney", Andrew Morton, Vlastimil Babka
Cc: RCU, LKML, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 5/5] mm/slab: Move kvfree_rcu() into SLAB
Date: Thu, 12 Dec 2024 19:02:08 +0100
Message-Id: <20241212180208.274813-6-urezki@gmail.com>
In-Reply-To: <20241212180208.274813-1-urezki@gmail.com>
References: <20241212180208.274813-1-urezki@gmail.com>

Move the kvfree_rcu() functionality to the slab_common.c file.
Signed-off-by: Uladzislau Rezki (Sony)
---
 include/linux/rcupdate.h |   1 -
 include/linux/slab.h     |   1 +
 kernel/rcu/tree.c        | 879 --------------------------------------
 mm/slab_common.c         | 880 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 881 insertions(+), 880 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index acb0095b4dbe..48e5c03df1dd 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -118,7 +118,6 @@ static inline void call_rcu_hurry(struct rcu_head *head, rcu_callback_t func)
 
 /* Internal to kernel */
 void rcu_init(void);
-void __init kvfree_rcu_init(void);
 extern int rcu_scheduler_active;
 void rcu_sched_clock_irq(int user);
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 10a971c2bde3..09eedaecf120 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -1099,5 +1099,6 @@ unsigned int kmem_cache_size(struct kmem_cache *s);
 size_t kmalloc_size_roundup(size_t size);
 
 void __init kmem_cache_init_late(void);
+void __init kvfree_rcu_init(void);
 
 #endif /* _LINUX_SLAB_H */
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b7ec998f360e..6af042cde972 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -186,26 +186,6 @@ static int rcu_unlock_delay;
 module_param(rcu_unlock_delay, int, 0444);
 #endif
 
-/*
- * This rcu parameter is runtime-read-only. It reflects
- * a minimum allowed number of objects which can be cached
- * per-CPU. Object size is equal to one page. This value
- * can be changed at boot time.
- */
-static int rcu_min_cached_objs = 5;
-module_param(rcu_min_cached_objs, int, 0444);
-
-// A page shrinker can ask for pages to be freed to make them
-// available for other parts of the system. This usually happens
-// under low memory conditions, and in that case we should also
-// defer page-cache filling for a short time period.
-//
-// The default value is 5 seconds, which is long enough to reduce
-// interference with the shrinker while it asks other systems to
-// drain their caches.
-static int rcu_delay_page_cache_fill_msec = 5000;
-module_param(rcu_delay_page_cache_fill_msec, int, 0444);
-
 /* Retrieve RCU kthreads priority for rcutorture */
 int rcu_get_gp_kthreads_prio(void)
 {
@@ -3191,816 +3171,6 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
 }
 EXPORT_SYMBOL_GPL(call_rcu);
 
-/* Maximum number of jiffies to wait before draining a batch. */
-#define KFREE_DRAIN_JIFFIES (5 * HZ)
-#define KFREE_N_BATCHES 2
-#define FREE_N_CHANNELS 2
-
-/**
- * struct kvfree_rcu_bulk_data - single block to store kvfree_rcu() pointers
- * @list: List node. All blocks are linked between each other
- * @gp_snap: Snapshot of RCU state for objects placed to this bulk
- * @nr_records: Number of active pointers in the array
- * @records: Array of the kvfree_rcu() pointers
- */
-struct kvfree_rcu_bulk_data {
-        struct list_head list;
-        struct rcu_gp_oldstate gp_snap;
-        unsigned long nr_records;
-        void *records[] __counted_by(nr_records);
-};
-
-/*
- * This macro defines how many entries the "records" array
- * will contain. It is based on the fact that the size of
- * kvfree_rcu_bulk_data structure becomes exactly one page.
- */
-#define KVFREE_BULK_MAX_ENTR \
-        ((PAGE_SIZE - sizeof(struct kvfree_rcu_bulk_data)) / sizeof(void *))
-
-/**
- * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests
- * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period
- * @head_free: List of kfree_rcu() objects waiting for a grace period
- * @head_free_gp_snap: Grace-period snapshot to check for attempted premature frees.
- * @bulk_head_free: Bulk-List of kvfree_rcu() objects waiting for a grace period
- * @krcp: Pointer to @kfree_rcu_cpu structure
- */
-
-struct kfree_rcu_cpu_work {
-        struct rcu_work rcu_work;
-        struct rcu_head *head_free;
-        struct rcu_gp_oldstate head_free_gp_snap;
-        struct list_head bulk_head_free[FREE_N_CHANNELS];
-        struct kfree_rcu_cpu *krcp;
-};
-
-/**
- * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period
- * @head: List of kfree_rcu() objects not yet waiting for a grace period
- * @head_gp_snap: Snapshot of RCU state for objects placed to "@head"
- * @bulk_head: Bulk-List of kvfree_rcu() objects not yet waiting for a grace period
- * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period
- * @lock: Synchronize access to this structure
- * @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES
- * @initialized: The @rcu_work fields have been initialized
- * @head_count: Number of objects in rcu_head singular list
- * @bulk_count: Number of objects in bulk-list
- * @bkvcache:
- *        A simple cache list that contains objects for reuse purpose.
- *        In order to save some per-cpu space the list is singular.
- *        Even though it is lockless an access has to be protected by the
- *        per-cpu lock.
- * @page_cache_work: A work to refill the cache when it is empty
- * @backoff_page_cache_fill: Delay cache refills
- * @work_in_progress: Indicates that page_cache_work is running
- * @hrtimer: A hrtimer for scheduling a page_cache_work
- * @nr_bkv_objs: number of allocated objects at @bkvcache.
- *
- * This is a per-CPU structure. The reason that it is not included in
- * the rcu_data structure is to permit this code to be extracted from
- * the RCU files. Such extraction could allow further optimization of
- * the interactions with the slab allocators.
- */
-struct kfree_rcu_cpu {
-        // Objects queued on a linked list
-        // through their rcu_head structures.
-        struct rcu_head *head;
-        unsigned long head_gp_snap;
-        atomic_t head_count;
-
-        // Objects queued on a bulk-list.
-        struct list_head bulk_head[FREE_N_CHANNELS];
-        atomic_t bulk_count[FREE_N_CHANNELS];
-
-        struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
-        raw_spinlock_t lock;
-        struct delayed_work monitor_work;
-        bool initialized;
-
-        struct delayed_work page_cache_work;
-        atomic_t backoff_page_cache_fill;
-        atomic_t work_in_progress;
-        struct hrtimer hrtimer;
-
-        struct llist_head bkvcache;
-        int nr_bkv_objs;
-};
-
-static DEFINE_PER_CPU(struct kfree_rcu_cpu, krc) = {
-        .lock = __RAW_SPIN_LOCK_UNLOCKED(krc.lock),
-};
-
-static __always_inline void
-debug_rcu_bhead_unqueue(struct kvfree_rcu_bulk_data *bhead)
-{
-#ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
-        int i;
-
-        for (i = 0; i < bhead->nr_records; i++)
-                debug_rcu_head_unqueue((struct rcu_head *)(bhead->records[i]));
-#endif
-}
-
-static inline struct kfree_rcu_cpu *
-krc_this_cpu_lock(unsigned long *flags)
-{
-        struct kfree_rcu_cpu *krcp;
-
-        local_irq_save(*flags); // For safely calling this_cpu_ptr().
-        krcp = this_cpu_ptr(&krc);
-        raw_spin_lock(&krcp->lock);
-
-        return krcp;
-}
-
-static inline void
-krc_this_cpu_unlock(struct kfree_rcu_cpu *krcp, unsigned long flags)
-{
-        raw_spin_unlock_irqrestore(&krcp->lock, flags);
-}
-
-static inline struct kvfree_rcu_bulk_data *
-get_cached_bnode(struct kfree_rcu_cpu *krcp)
-{
-        if (!krcp->nr_bkv_objs)
-                return NULL;
-
-        WRITE_ONCE(krcp->nr_bkv_objs, krcp->nr_bkv_objs - 1);
-        return (struct kvfree_rcu_bulk_data *)
-                llist_del_first(&krcp->bkvcache);
-}
-
-static inline bool
-put_cached_bnode(struct kfree_rcu_cpu *krcp,
-        struct kvfree_rcu_bulk_data *bnode)
-{
-        // Check the limit.
-        if (krcp->nr_bkv_objs >= rcu_min_cached_objs)
-                return false;
-
-        llist_add((struct llist_node *) bnode, &krcp->bkvcache);
-        WRITE_ONCE(krcp->nr_bkv_objs, krcp->nr_bkv_objs + 1);
-        return true;
-}
-
-static int
-drain_page_cache(struct kfree_rcu_cpu *krcp)
-{
-        unsigned long flags;
-        struct llist_node *page_list, *pos, *n;
-        int freed = 0;
-
-        if (!rcu_min_cached_objs)
-                return 0;
-
-        raw_spin_lock_irqsave(&krcp->lock, flags);
-        page_list = llist_del_all(&krcp->bkvcache);
-        WRITE_ONCE(krcp->nr_bkv_objs, 0);
-        raw_spin_unlock_irqrestore(&krcp->lock, flags);
-
-        llist_for_each_safe(pos, n, page_list) {
-                free_page((unsigned long)pos);
-                freed++;
-        }
-
-        return freed;
-}
-
-static void
-kvfree_rcu_bulk(struct kfree_rcu_cpu *krcp,
-        struct kvfree_rcu_bulk_data *bnode, int idx)
-{
-        unsigned long flags;
-        int i;
-
-        if (!WARN_ON_ONCE(!poll_state_synchronize_rcu_full(&bnode->gp_snap))) {
-                debug_rcu_bhead_unqueue(bnode);
-                rcu_lock_acquire(&rcu_callback_map);
-                if (idx == 0) { // kmalloc() / kfree().
-                        trace_rcu_invoke_kfree_bulk_callback(
-                                "slab", bnode->nr_records,
-                                bnode->records);
-
-                        kfree_bulk(bnode->nr_records, bnode->records);
-                } else { // vmalloc() / vfree().
-                        for (i = 0; i < bnode->nr_records; i++) {
-                                trace_rcu_invoke_kvfree_callback(
-                                        "slab", bnode->records[i], 0);
-
-                                vfree(bnode->records[i]);
-                        }
-                }
-                rcu_lock_release(&rcu_callback_map);
-        }
-
-        raw_spin_lock_irqsave(&krcp->lock, flags);
-        if (put_cached_bnode(krcp, bnode))
-                bnode = NULL;
-        raw_spin_unlock_irqrestore(&krcp->lock, flags);
-
-        if (bnode)
-                free_page((unsigned long) bnode);
-
-        cond_resched_tasks_rcu_qs();
-}
-
-static void
-kvfree_rcu_list(struct rcu_head *head)
-{
-        struct rcu_head *next;
-
-        for (; head; head = next) {
-                void *ptr = (void *) head->func;
-                unsigned long offset = (void *) head - ptr;
-
-                next = head->next;
-                debug_rcu_head_unqueue((struct rcu_head *)ptr);
-                rcu_lock_acquire(&rcu_callback_map);
-                trace_rcu_invoke_kvfree_callback("slab", head, offset);
-
-                if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset)))
-                        kvfree(ptr);
-
-                rcu_lock_release(&rcu_callback_map);
-                cond_resched_tasks_rcu_qs();
-        }
-}
-
-/*
- * This function is invoked in workqueue context after a grace period.
- * It frees all the objects queued on ->bulk_head_free or ->head_free.
- */
-static void kfree_rcu_work(struct work_struct *work)
-{
-        unsigned long flags;
-        struct kvfree_rcu_bulk_data *bnode, *n;
-        struct list_head bulk_head[FREE_N_CHANNELS];
-        struct rcu_head *head;
-        struct kfree_rcu_cpu *krcp;
-        struct kfree_rcu_cpu_work *krwp;
-        struct rcu_gp_oldstate head_gp_snap;
-        int i;
-
-        krwp = container_of(to_rcu_work(work),
-                struct kfree_rcu_cpu_work, rcu_work);
-        krcp = krwp->krcp;
-
-        raw_spin_lock_irqsave(&krcp->lock, flags);
-        // Channels 1 and 2.
-        for (i = 0; i < FREE_N_CHANNELS; i++)
-                list_replace_init(&krwp->bulk_head_free[i], &bulk_head[i]);
-
-        // Channel 3.
- head = krwp->head_free; - krwp->head_free = NULL; - head_gp_snap = krwp->head_free_gp_snap; - raw_spin_unlock_irqrestore(&krcp->lock, flags); - - // Handle the first two channels. - for (i = 0; i < FREE_N_CHANNELS; i++) { - // Start from the tail page, so a GP is likely passed for it. - list_for_each_entry_safe(bnode, n, &bulk_head[i], list) - kvfree_rcu_bulk(krcp, bnode, i); - } - - /* - * This is used when the "bulk" path can not be used for the - * double-argument of kvfree_rcu(). This happens when the - * page-cache is empty, which means that objects are instead - * queued on a linked list through their rcu_head structures. - * This list is named "Channel 3". - */ - if (head && !WARN_ON_ONCE(!poll_state_synchronize_rcu_full(&head_gp_snap))) - kvfree_rcu_list(head); -} - -static bool -need_offload_krc(struct kfree_rcu_cpu *krcp) -{ - int i; - - for (i = 0; i < FREE_N_CHANNELS; i++) - if (!list_empty(&krcp->bulk_head[i])) - return true; - - return !!READ_ONCE(krcp->head); -} - -static bool -need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp) -{ - int i; - - for (i = 0; i < FREE_N_CHANNELS; i++) - if (!list_empty(&krwp->bulk_head_free[i])) - return true; - - return !!krwp->head_free; -} - -static int krc_count(struct kfree_rcu_cpu *krcp) -{ - int sum = atomic_read(&krcp->head_count); - int i; - - for (i = 0; i < FREE_N_CHANNELS; i++) - sum += atomic_read(&krcp->bulk_count[i]); - - return sum; -} - -static void -__schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) -{ - long delay, delay_left; - - delay = krc_count(krcp) >= KVFREE_BULK_MAX_ENTR ? 1:KFREE_DRAIN_JIFFIES; - if (delayed_work_pending(&krcp->monitor_work)) { - delay_left = krcp->monitor_work.timer.expires - jiffies; - if (delay < delay_left) - mod_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); - return; - } - queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); -} - -static void -schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) -{ - unsigned long flags; - - raw_spin_lock_irqsave(&krcp->lock, flags); - __schedule_delayed_monitor_work(krcp); - raw_spin_unlock_irqrestore(&krcp->lock, flags); -} - -static void -kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp) -{ - struct list_head bulk_ready[FREE_N_CHANNELS]; - struct kvfree_rcu_bulk_data *bnode, *n; - struct rcu_head *head_ready = NULL; - unsigned long flags; - int i; - - raw_spin_lock_irqsave(&krcp->lock, flags); - for (i = 0; i < FREE_N_CHANNELS; i++) { - INIT_LIST_HEAD(&bulk_ready[i]); - - list_for_each_entry_safe_reverse(bnode, n, &krcp->bulk_head[i], list) { - if (!poll_state_synchronize_rcu_full(&bnode->gp_snap)) - break; - - atomic_sub(bnode->nr_records, &krcp->bulk_count[i]); - list_move(&bnode->list, &bulk_ready[i]); - } - } - - if (krcp->head && poll_state_synchronize_rcu(krcp->head_gp_snap)) { - head_ready = krcp->head; - atomic_set(&krcp->head_count, 0); - WRITE_ONCE(krcp->head, NULL); - } - raw_spin_unlock_irqrestore(&krcp->lock, flags); - - for (i = 0; i < FREE_N_CHANNELS; i++) { - list_for_each_entry_safe(bnode, n, &bulk_ready[i], list) - kvfree_rcu_bulk(krcp, bnode, i); - } - - if (head_ready) - kvfree_rcu_list(head_ready); -} - -/* - * Return: %true if a work is queued, %false otherwise. - */ -static bool -kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp) -{ - unsigned long flags; - bool queued = false; - int i, j; - - raw_spin_lock_irqsave(&krcp->lock, flags); - - // Attempt to start a new batch. 
- for (i = 0; i < KFREE_N_BATCHES; i++) { - struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]); - - // Try to detach bulk_head or head and attach it, only when - // all channels are free. Any channel is not free means at krwp - // there is on-going rcu work to handle krwp's free business. - if (need_wait_for_krwp_work(krwp)) - continue; - - // kvfree_rcu_drain_ready() might handle this krcp, if so give up. - if (need_offload_krc(krcp)) { - // Channel 1 corresponds to the SLAB-pointer bulk path. - // Channel 2 corresponds to vmalloc-pointer bulk path. - for (j = 0; j < FREE_N_CHANNELS; j++) { - if (list_empty(&krwp->bulk_head_free[j])) { - atomic_set(&krcp->bulk_count[j], 0); - list_replace_init(&krcp->bulk_head[j], - &krwp->bulk_head_free[j]); - } - } - - // Channel 3 corresponds to both SLAB and vmalloc - // objects queued on the linked list. - if (!krwp->head_free) { - krwp->head_free = krcp->head; - get_state_synchronize_rcu_full(&krwp->head_free_gp_snap); - atomic_set(&krcp->head_count, 0); - WRITE_ONCE(krcp->head, NULL); - } - - // One work is per one batch, so there are three - // "free channels", the batch can handle. Break - // the loop since it is done with this CPU thus - // queuing an RCU work is _always_ success here. - queued = queue_rcu_work(system_unbound_wq, &krwp->rcu_work); - WARN_ON_ONCE(!queued); - break; - } - } - - raw_spin_unlock_irqrestore(&krcp->lock, flags); - return queued; -} - -/* - * This function is invoked after the KFREE_DRAIN_JIFFIES timeout. - */ -static void kfree_rcu_monitor(struct work_struct *work) -{ - struct kfree_rcu_cpu *krcp = container_of(work, - struct kfree_rcu_cpu, monitor_work.work); - - // Drain ready for reclaim. - kvfree_rcu_drain_ready(krcp); - - // Queue a batch for a rest. - kvfree_rcu_queue_batch(krcp); - - // If there is nothing to detach, it means that our job is - // successfully done here. In case of having at least one - // of the channels that is still busy we should rearm the - // work to repeat an attempt. Because previous batches are - // still in progress. - if (need_offload_krc(krcp)) - schedule_delayed_monitor_work(krcp); -} - -static void fill_page_cache_func(struct work_struct *work) -{ - struct kvfree_rcu_bulk_data *bnode; - struct kfree_rcu_cpu *krcp = - container_of(work, struct kfree_rcu_cpu, - page_cache_work.work); - unsigned long flags; - int nr_pages; - bool pushed; - int i; - - nr_pages = atomic_read(&krcp->backoff_page_cache_fill) ? - 1 : rcu_min_cached_objs; - - for (i = READ_ONCE(krcp->nr_bkv_objs); i < nr_pages; i++) { - bnode = (struct kvfree_rcu_bulk_data *) - __get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN); - - if (!bnode) - break; - - raw_spin_lock_irqsave(&krcp->lock, flags); - pushed = put_cached_bnode(krcp, bnode); - raw_spin_unlock_irqrestore(&krcp->lock, flags); - - if (!pushed) { - free_page((unsigned long) bnode); - break; - } - } - - atomic_set(&krcp->work_in_progress, 0); - atomic_set(&krcp->backoff_page_cache_fill, 0); -} - -// Record ptr in a page managed by krcp, with the pre-krc_this_cpu_lock() -// state specified by flags. If can_alloc is true, the caller must -// be schedulable and not be holding any locks or mutexes that might be -// acquired by the memory allocator or anything that it might invoke. -// Returns true if ptr was successfully recorded, else the caller must -// use a fallback. 
-static inline bool -add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp, - unsigned long *flags, void *ptr, bool can_alloc) -{ - struct kvfree_rcu_bulk_data *bnode; - int idx; - - *krcp = krc_this_cpu_lock(flags); - if (unlikely(!(*krcp)->initialized)) - return false; - - idx = !!is_vmalloc_addr(ptr); - bnode = list_first_entry_or_null(&(*krcp)->bulk_head[idx], - struct kvfree_rcu_bulk_data, list); - - /* Check if a new block is required. */ - if (!bnode || bnode->nr_records == KVFREE_BULK_MAX_ENTR) { - bnode = get_cached_bnode(*krcp); - if (!bnode && can_alloc) { - krc_this_cpu_unlock(*krcp, *flags); - - // __GFP_NORETRY - allows a light-weight direct reclaim - // what is OK from minimizing of fallback hitting point of - // view. Apart of that it forbids any OOM invoking what is - // also beneficial since we are about to release memory soon. - // - // __GFP_NOMEMALLOC - prevents from consuming of all the - // memory reserves. Please note we have a fallback path. - // - // __GFP_NOWARN - it is supposed that an allocation can - // be failed under low memory or high memory pressure - // scenarios. - bnode = (struct kvfree_rcu_bulk_data *) - __get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN); - raw_spin_lock_irqsave(&(*krcp)->lock, *flags); - } - - if (!bnode) - return false; - - // Initialize the new block and attach it. - bnode->nr_records = 0; - list_add(&bnode->list, &(*krcp)->bulk_head[idx]); - } - - // Finally insert and update the GP for this page. - bnode->nr_records++; - bnode->records[bnode->nr_records - 1] = ptr; - get_state_synchronize_rcu_full(&bnode->gp_snap); - atomic_inc(&(*krcp)->bulk_count[idx]); - - return true; -} - -#if !defined(CONFIG_TINY_RCU) - -static enum hrtimer_restart -schedule_page_work_fn(struct hrtimer *t) -{ - struct kfree_rcu_cpu *krcp = - container_of(t, struct kfree_rcu_cpu, hrtimer); - - queue_delayed_work(system_highpri_wq, &krcp->page_cache_work, 0); - return HRTIMER_NORESTART; -} - -static void -run_page_cache_worker(struct kfree_rcu_cpu *krcp) -{ - // If cache disabled, bail out. - if (!rcu_min_cached_objs) - return; - - if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING && - !atomic_xchg(&krcp->work_in_progress, 1)) { - if (atomic_read(&krcp->backoff_page_cache_fill)) { - queue_delayed_work(system_unbound_wq, - &krcp->page_cache_work, - msecs_to_jiffies(rcu_delay_page_cache_fill_msec)); - } else { - hrtimer_init(&krcp->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); - krcp->hrtimer.function = schedule_page_work_fn; - hrtimer_start(&krcp->hrtimer, 0, HRTIMER_MODE_REL); - } - } -} - -void __init kfree_rcu_scheduler_running(void) -{ - int cpu; - - for_each_possible_cpu(cpu) { - struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); - - if (need_offload_krc(krcp)) - schedule_delayed_monitor_work(krcp); - } -} - -/* - * Queue a request for lazy invocation of the appropriate free routine - * after a grace period. Please note that three paths are maintained, - * two for the common case using arrays of pointers and a third one that - * is used only when the main paths cannot be used, for example, due to - * memory pressure. - * - * Each kvfree_call_rcu() request is added to a batch. The batch will be drained - * every KFREE_DRAIN_JIFFIES number of jiffies. All the objects in the batch will - * be free'd in workqueue context. This allows us to: batch requests together to - * reduce the number of grace periods during heavy kfree_rcu()/kvfree_rcu() load. 
- */ -void kvfree_call_rcu(struct rcu_head *head, void *ptr) -{ - unsigned long flags; - struct kfree_rcu_cpu *krcp; - bool success; - - /* - * Please note there is a limitation for the head-less - * variant, that is why there is a clear rule for such - * objects: it can be used from might_sleep() context - * only. For other places please embed an rcu_head to - * your data. - */ - if (!head) - might_sleep(); - - // Queue the object but don't yet schedule the batch. - if (debug_rcu_head_queue(ptr)) { - // Probable double kfree_rcu(), just leak. - WARN_ONCE(1, "%s(): Double-freed call. rcu_head %p\n", - __func__, head); - - // Mark as success and leave. - return; - } - - kasan_record_aux_stack_noalloc(ptr); - success = add_ptr_to_bulk_krc_lock(&krcp, &flags, ptr, !head); - if (!success) { - run_page_cache_worker(krcp); - - if (head == NULL) - // Inline if kvfree_rcu(one_arg) call. - goto unlock_return; - - head->func = ptr; - head->next = krcp->head; - WRITE_ONCE(krcp->head, head); - atomic_inc(&krcp->head_count); - - // Take a snapshot for this krcp. - krcp->head_gp_snap = get_state_synchronize_rcu(); - success = true; - } - - /* - * The kvfree_rcu() caller considers the pointer freed at this point - * and likely removes any references to it. Since the actual slab - * freeing (and kmemleak_free()) is deferred, tell kmemleak to ignore - * this object (no scanning or false positives reporting). - */ - kmemleak_ignore(ptr); - - // Set timer to drain after KFREE_DRAIN_JIFFIES. - if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING) - __schedule_delayed_monitor_work(krcp); - -unlock_return: - krc_this_cpu_unlock(krcp, flags); - - /* - * Inline kvfree() after synchronize_rcu(). We can do - * it from might_sleep() context only, so the current - * CPU can pass the QS state. - */ - if (!success) { - debug_rcu_head_unqueue((struct rcu_head *) ptr); - synchronize_rcu(); - kvfree(ptr); - } -} -EXPORT_SYMBOL_GPL(kvfree_call_rcu); - -/** - * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete. - * - * Note that a single argument of kvfree_rcu() call has a slow path that - * triggers synchronize_rcu() following by freeing a pointer. It is done - * before the return from the function. Therefore for any single-argument - * call that will result in a kfree() to a cache that is to be destroyed - * during module exit, it is developer's responsibility to ensure that all - * such calls have returned before the call to kmem_cache_destroy(). - */ -void kvfree_rcu_barrier(void) -{ - struct kfree_rcu_cpu_work *krwp; - struct kfree_rcu_cpu *krcp; - bool queued; - int i, cpu; - - /* - * Firstly we detach objects and queue them over an RCU-batch - * for all CPUs. Finally queued works are flushed for each CPU. - * - * Please note. If there are outstanding batches for a particular - * CPU, those have to be finished first following by queuing a new. - */ - for_each_possible_cpu(cpu) { - krcp = per_cpu_ptr(&krc, cpu); - - /* - * Check if this CPU has any objects which have been queued for a - * new GP completion. If not(means nothing to detach), we are done - * with it. If any batch is pending/running for this "krcp", below - * per-cpu flush_rcu_work() waits its completion(see last step). - */ - if (!need_offload_krc(krcp)) - continue; - - while (1) { - /* - * If we are not able to queue a new RCU work it means: - * - batches for this CPU are still in flight which should - * be flushed first and then repeat; - * - no objects to detach, because of concurrency. 
- */ - queued = kvfree_rcu_queue_batch(krcp); - - /* - * Bail out, if there is no need to offload this "krcp" - * anymore. As noted earlier it can run concurrently. - */ - if (queued || !need_offload_krc(krcp)) - break; - - /* There are ongoing batches. */ - for (i = 0; i < KFREE_N_BATCHES; i++) { - krwp = &(krcp->krw_arr[i]); - flush_rcu_work(&krwp->rcu_work); - } - } - } - - /* - * Now we guarantee that all objects are flushed. - */ - for_each_possible_cpu(cpu) { - krcp = per_cpu_ptr(&krc, cpu); - - /* - * A monitor work can drain ready to reclaim objects - * directly. Wait its completion if running or pending. - */ - cancel_delayed_work_sync(&krcp->monitor_work); - - for (i = 0; i < KFREE_N_BATCHES; i++) { - krwp = &(krcp->krw_arr[i]); - flush_rcu_work(&krwp->rcu_work); - } - } -} -EXPORT_SYMBOL_GPL(kvfree_rcu_barrier); - -#endif /* #if !defined(CONFIG_TINY_RCU) */ - -static unsigned long -kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc) -{ - int cpu; - unsigned long count = 0; - - /* Snapshot count of all CPUs */ - for_each_possible_cpu(cpu) { - struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); - - count += krc_count(krcp); - count += READ_ONCE(krcp->nr_bkv_objs); - atomic_set(&krcp->backoff_page_cache_fill, 1); - } - - return count == 0 ? SHRINK_EMPTY : count; -} - -static unsigned long -kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) -{ - int cpu, freed = 0; - - for_each_possible_cpu(cpu) { - int count; - struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); - - count = krc_count(krcp); - count += drain_page_cache(krcp); - kfree_rcu_monitor(&krcp->monitor_work.work); - - sc->nr_to_scan -= count; - freed += count; - - if (sc->nr_to_scan <= 0) - break; - } - - return freed == 0 ? SHRINK_STOP : freed; -} - /* * During early boot, any blocking grace-period wait automatically * implies a grace period. @@ -5652,55 +4822,6 @@ static void __init rcu_dump_rcu_node_tree(void) struct workqueue_struct *rcu_gp_wq; -void __init kvfree_rcu_init(void) -{ - int cpu; - int i, j; - struct shrinker *kfree_rcu_shrinker; - - /* Clamp it to [0:100] seconds interval. 
*/ - if (rcu_delay_page_cache_fill_msec < 0 || - rcu_delay_page_cache_fill_msec > 100 * MSEC_PER_SEC) { - - rcu_delay_page_cache_fill_msec = - clamp(rcu_delay_page_cache_fill_msec, 0, - (int) (100 * MSEC_PER_SEC)); - - pr_info("Adjusting rcutree.rcu_delay_page_cache_fill_msec to %d ms.\n", - rcu_delay_page_cache_fill_msec); - } - - for_each_possible_cpu(cpu) { - struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); - - for (i = 0; i < KFREE_N_BATCHES; i++) { - INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work); - krcp->krw_arr[i].krcp = krcp; - - for (j = 0; j < FREE_N_CHANNELS; j++) - INIT_LIST_HEAD(&krcp->krw_arr[i].bulk_head_free[j]); - } - - for (i = 0; i < FREE_N_CHANNELS; i++) - INIT_LIST_HEAD(&krcp->bulk_head[i]); - - INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor); - INIT_DELAYED_WORK(&krcp->page_cache_work, fill_page_cache_func); - krcp->initialized = true; - } - - kfree_rcu_shrinker = shrinker_alloc(0, "slab-kvfree-rcu"); - if (!kfree_rcu_shrinker) { - pr_err("Failed to allocate kfree_rcu() shrinker!\n"); - return; - } - - kfree_rcu_shrinker->count_objects = kfree_rcu_shrink_count; - kfree_rcu_shrinker->scan_objects = kfree_rcu_shrink_scan; - - shrinker_register(kfree_rcu_shrinker); -} - void __init rcu_init(void) { int cpu = smp_processor_id(); diff --git a/mm/slab_common.c b/mm/slab_common.c index a29457bef626..69f2d19010de 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -28,7 +28,9 @@ #include #include #include +#include +#include "../kernel/rcu/rcu.h" #include "internal.h" #include "slab.h" @@ -1282,3 +1284,881 @@ EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc); EXPORT_TRACEPOINT_SYMBOL(kfree); EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free); +/* + * This rcu parameter is runtime-read-only. It reflects + * a minimum allowed number of objects which can be cached + * per-CPU. Object size is equal to one page. This value + * can be changed at boot time. + */ +static int rcu_min_cached_objs = 5; +module_param(rcu_min_cached_objs, int, 0444); + +// A page shrinker can ask for pages to be freed to make them +// available for other parts of the system. This usually happens +// under low memory conditions, and in that case we should also +// defer page-cache filling for a short time period. +// +// The default value is 5 seconds, which is long enough to reduce +// interference with the shrinker while it asks other systems to +// drain their caches. +static int rcu_delay_page_cache_fill_msec = 5000; +module_param(rcu_delay_page_cache_fill_msec, int, 0444); + +/* Maximum number of jiffies to wait before draining a batch. */ +#define KFREE_DRAIN_JIFFIES (5 * HZ) +#define KFREE_N_BATCHES 2 +#define FREE_N_CHANNELS 2 + +/** + * struct kvfree_rcu_bulk_data - single block to store kvfree_rcu() pointers + * @list: List node. All blocks are linked between each other + * @gp_snap: Snapshot of RCU state for objects placed to this bulk + * @nr_records: Number of active pointers in the array + * @records: Array of the kvfree_rcu() pointers + */ +struct kvfree_rcu_bulk_data { + struct list_head list; + struct rcu_gp_oldstate gp_snap; + unsigned long nr_records; + void *records[] __counted_by(nr_records); +}; + +/* + * This macro defines how many entries the "records" array + * will contain. It is based on the fact that the size of + * kvfree_rcu_bulk_data structure becomes exactly one page. 
+ */ +#define KVFREE_BULK_MAX_ENTR \ + ((PAGE_SIZE - sizeof(struct kvfree_rcu_bulk_data)) / sizeof(void *)) + +/** + * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests + * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period + * @head_free: List of kfree_rcu() objects waiting for a grace period + * @head_free_gp_snap: Grace-period snapshot to check for attempted premature frees. + * @bulk_head_free: Bulk-List of kvfree_rcu() objects waiting for a grace period + * @krcp: Pointer to @kfree_rcu_cpu structure + */ + +struct kfree_rcu_cpu_work { + struct rcu_work rcu_work; + struct rcu_head *head_free; + struct rcu_gp_oldstate head_free_gp_snap; + struct list_head bulk_head_free[FREE_N_CHANNELS]; + struct kfree_rcu_cpu *krcp; +}; + +/** + * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period + * @head: List of kfree_rcu() objects not yet waiting for a grace period + * @head_gp_snap: Snapshot of RCU state for objects placed to "@head" + * @bulk_head: Bulk-List of kvfree_rcu() objects not yet waiting for a grace period + * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period + * @lock: Synchronize access to this structure + * @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES + * @initialized: The @rcu_work fields have been initialized + * @head_count: Number of objects in rcu_head singular list + * @bulk_count: Number of objects in bulk-list + * @bkvcache: + * A simple cache list that contains objects for reuse purpose. + * In order to save some per-cpu space the list is singular. + * Even though it is lockless an access has to be protected by the + * per-cpu lock. + * @page_cache_work: A work to refill the cache when it is empty + * @backoff_page_cache_fill: Delay cache refills + * @work_in_progress: Indicates that page_cache_work is running + * @hrtimer: A hrtimer for scheduling a page_cache_work + * @nr_bkv_objs: number of allocated objects at @bkvcache. + * + * This is a per-CPU structure. The reason that it is not included in + * the rcu_data structure is to permit this code to be extracted from + * the RCU files. Such extraction could allow further optimization of + * the interactions with the slab allocators. + */ +struct kfree_rcu_cpu { + // Objects queued on a linked list + // through their rcu_head structures. + struct rcu_head *head; + unsigned long head_gp_snap; + atomic_t head_count; + + // Objects queued on a bulk-list. + struct list_head bulk_head[FREE_N_CHANNELS]; + atomic_t bulk_count[FREE_N_CHANNELS]; + + struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES]; + raw_spinlock_t lock; + struct delayed_work monitor_work; + bool initialized; + + struct delayed_work page_cache_work; + atomic_t backoff_page_cache_fill; + atomic_t work_in_progress; + struct hrtimer hrtimer; + + struct llist_head bkvcache; + int nr_bkv_objs; +}; + +static DEFINE_PER_CPU(struct kfree_rcu_cpu, krc) = { + .lock = __RAW_SPIN_LOCK_UNLOCKED(krc.lock), +}; + +static __always_inline void +debug_rcu_bhead_unqueue(struct kvfree_rcu_bulk_data *bhead) +{ +#ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD + int i; + + for (i = 0; i < bhead->nr_records; i++) + debug_rcu_head_unqueue((struct rcu_head *)(bhead->records[i])); +#endif +} + +static inline struct kfree_rcu_cpu * +krc_this_cpu_lock(unsigned long *flags) +{ + struct kfree_rcu_cpu *krcp; + + local_irq_save(*flags); // For safely calling this_cpu_ptr(). 
+ krcp = this_cpu_ptr(&krc); + raw_spin_lock(&krcp->lock); + + return krcp; +} + +static inline void +krc_this_cpu_unlock(struct kfree_rcu_cpu *krcp, unsigned long flags) +{ + raw_spin_unlock_irqrestore(&krcp->lock, flags); +} + +static inline struct kvfree_rcu_bulk_data * +get_cached_bnode(struct kfree_rcu_cpu *krcp) +{ + if (!krcp->nr_bkv_objs) + return NULL; + + WRITE_ONCE(krcp->nr_bkv_objs, krcp->nr_bkv_objs - 1); + return (struct kvfree_rcu_bulk_data *) + llist_del_first(&krcp->bkvcache); +} + +static inline bool +put_cached_bnode(struct kfree_rcu_cpu *krcp, + struct kvfree_rcu_bulk_data *bnode) +{ + // Check the limit. + if (krcp->nr_bkv_objs >= rcu_min_cached_objs) + return false; + + llist_add((struct llist_node *) bnode, &krcp->bkvcache); + WRITE_ONCE(krcp->nr_bkv_objs, krcp->nr_bkv_objs + 1); + return true; +} + +static int +drain_page_cache(struct kfree_rcu_cpu *krcp) +{ + unsigned long flags; + struct llist_node *page_list, *pos, *n; + int freed = 0; + + if (!rcu_min_cached_objs) + return 0; + + raw_spin_lock_irqsave(&krcp->lock, flags); + page_list = llist_del_all(&krcp->bkvcache); + WRITE_ONCE(krcp->nr_bkv_objs, 0); + raw_spin_unlock_irqrestore(&krcp->lock, flags); + + llist_for_each_safe(pos, n, page_list) { + free_page((unsigned long)pos); + freed++; + } + + return freed; +} + +static void +kvfree_rcu_bulk(struct kfree_rcu_cpu *krcp, + struct kvfree_rcu_bulk_data *bnode, int idx) +{ + unsigned long flags; + int i; + + if (!WARN_ON_ONCE(!poll_state_synchronize_rcu_full(&bnode->gp_snap))) { + debug_rcu_bhead_unqueue(bnode); + rcu_lock_acquire(&rcu_callback_map); + if (idx == 0) { // kmalloc() / kfree(). + trace_rcu_invoke_kfree_bulk_callback( + "slab", bnode->nr_records, + bnode->records); + + kfree_bulk(bnode->nr_records, bnode->records); + } else { // vmalloc() / vfree(). + for (i = 0; i < bnode->nr_records; i++) { + trace_rcu_invoke_kvfree_callback( + "slab", bnode->records[i], 0); + + vfree(bnode->records[i]); + } + } + rcu_lock_release(&rcu_callback_map); + } + + raw_spin_lock_irqsave(&krcp->lock, flags); + if (put_cached_bnode(krcp, bnode)) + bnode = NULL; + raw_spin_unlock_irqrestore(&krcp->lock, flags); + + if (bnode) + free_page((unsigned long) bnode); + + cond_resched_tasks_rcu_qs(); +} + +static void +kvfree_rcu_list(struct rcu_head *head) +{ + struct rcu_head *next; + + for (; head; head = next) { + void *ptr = (void *) head->func; + unsigned long offset = (void *) head - ptr; + + next = head->next; + debug_rcu_head_unqueue((struct rcu_head *)ptr); + rcu_lock_acquire(&rcu_callback_map); + trace_rcu_invoke_kvfree_callback("slab", head, offset); + + if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset))) + kvfree(ptr); + + rcu_lock_release(&rcu_callback_map); + cond_resched_tasks_rcu_qs(); + } +} + +/* + * This function is invoked in workqueue context after a grace period. + * It frees all the objects queued on ->bulk_head_free or ->head_free. + */ +static void kfree_rcu_work(struct work_struct *work) +{ + unsigned long flags; + struct kvfree_rcu_bulk_data *bnode, *n; + struct list_head bulk_head[FREE_N_CHANNELS]; + struct rcu_head *head; + struct kfree_rcu_cpu *krcp; + struct kfree_rcu_cpu_work *krwp; + struct rcu_gp_oldstate head_gp_snap; + int i; + + krwp = container_of(to_rcu_work(work), + struct kfree_rcu_cpu_work, rcu_work); + krcp = krwp->krcp; + + raw_spin_lock_irqsave(&krcp->lock, flags); + // Channels 1 and 2. + for (i = 0; i < FREE_N_CHANNELS; i++) + list_replace_init(&krwp->bulk_head_free[i], &bulk_head[i]); + + // Channel 3. 
+ head = krwp->head_free; + krwp->head_free = NULL; + head_gp_snap = krwp->head_free_gp_snap; + raw_spin_unlock_irqrestore(&krcp->lock, flags); + + // Handle the first two channels. + for (i = 0; i < FREE_N_CHANNELS; i++) { + // Start from the tail page, so a GP is likely passed for it. + list_for_each_entry_safe(bnode, n, &bulk_head[i], list) + kvfree_rcu_bulk(krcp, bnode, i); + } + + /* + * This is used when the "bulk" path can not be used for the + * double-argument of kvfree_rcu(). This happens when the + * page-cache is empty, which means that objects are instead + * queued on a linked list through their rcu_head structures. + * This list is named "Channel 3". + */ + if (head && !WARN_ON_ONCE(!poll_state_synchronize_rcu_full(&head_gp_snap))) + kvfree_rcu_list(head); +} + +static bool +need_offload_krc(struct kfree_rcu_cpu *krcp) +{ + int i; + + for (i = 0; i < FREE_N_CHANNELS; i++) + if (!list_empty(&krcp->bulk_head[i])) + return true; + + return !!READ_ONCE(krcp->head); +} + +static bool +need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp) +{ + int i; + + for (i = 0; i < FREE_N_CHANNELS; i++) + if (!list_empty(&krwp->bulk_head_free[i])) + return true; + + return !!krwp->head_free; +} + +static int krc_count(struct kfree_rcu_cpu *krcp) +{ + int sum = atomic_read(&krcp->head_count); + int i; + + for (i = 0; i < FREE_N_CHANNELS; i++) + sum += atomic_read(&krcp->bulk_count[i]); + + return sum; +} + +static void +__schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) +{ + long delay, delay_left; + + delay = krc_count(krcp) >= KVFREE_BULK_MAX_ENTR ? 1:KFREE_DRAIN_JIFFIES; + if (delayed_work_pending(&krcp->monitor_work)) { + delay_left = krcp->monitor_work.timer.expires - jiffies; + if (delay < delay_left) + mod_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); + return; + } + queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); +} + +static void +schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&krcp->lock, flags); + __schedule_delayed_monitor_work(krcp); + raw_spin_unlock_irqrestore(&krcp->lock, flags); +} + +static void +kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp) +{ + struct list_head bulk_ready[FREE_N_CHANNELS]; + struct kvfree_rcu_bulk_data *bnode, *n; + struct rcu_head *head_ready = NULL; + unsigned long flags; + int i; + + raw_spin_lock_irqsave(&krcp->lock, flags); + for (i = 0; i < FREE_N_CHANNELS; i++) { + INIT_LIST_HEAD(&bulk_ready[i]); + + list_for_each_entry_safe_reverse(bnode, n, &krcp->bulk_head[i], list) { + if (!poll_state_synchronize_rcu_full(&bnode->gp_snap)) + break; + + atomic_sub(bnode->nr_records, &krcp->bulk_count[i]); + list_move(&bnode->list, &bulk_ready[i]); + } + } + + if (krcp->head && poll_state_synchronize_rcu(krcp->head_gp_snap)) { + head_ready = krcp->head; + atomic_set(&krcp->head_count, 0); + WRITE_ONCE(krcp->head, NULL); + } + raw_spin_unlock_irqrestore(&krcp->lock, flags); + + for (i = 0; i < FREE_N_CHANNELS; i++) { + list_for_each_entry_safe(bnode, n, &bulk_ready[i], list) + kvfree_rcu_bulk(krcp, bnode, i); + } + + if (head_ready) + kvfree_rcu_list(head_ready); +} + +/* + * Return: %true if a work is queued, %false otherwise. + */ +static bool +kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp) +{ + unsigned long flags; + bool queued = false; + int i, j; + + raw_spin_lock_irqsave(&krcp->lock, flags); + + // Attempt to start a new batch. 
+ for (i = 0; i < KFREE_N_BATCHES; i++) { + struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]); + + // Try to detach bulk_head or head and attach it, only when + // all channels are free. Any channel is not free means at krwp + // there is on-going rcu work to handle krwp's free business. + if (need_wait_for_krwp_work(krwp)) + continue; + + // kvfree_rcu_drain_ready() might handle this krcp, if so give up. + if (need_offload_krc(krcp)) { + // Channel 1 corresponds to the SLAB-pointer bulk path. + // Channel 2 corresponds to vmalloc-pointer bulk path. + for (j = 0; j < FREE_N_CHANNELS; j++) { + if (list_empty(&krwp->bulk_head_free[j])) { + atomic_set(&krcp->bulk_count[j], 0); + list_replace_init(&krcp->bulk_head[j], + &krwp->bulk_head_free[j]); + } + } + + // Channel 3 corresponds to both SLAB and vmalloc + // objects queued on the linked list. + if (!krwp->head_free) { + krwp->head_free = krcp->head; + get_state_synchronize_rcu_full(&krwp->head_free_gp_snap); + atomic_set(&krcp->head_count, 0); + WRITE_ONCE(krcp->head, NULL); + } + + // One work is per one batch, so there are three + // "free channels", the batch can handle. Break + // the loop since it is done with this CPU thus + // queuing an RCU work is _always_ success here. + queued = queue_rcu_work(system_unbound_wq, &krwp->rcu_work); + WARN_ON_ONCE(!queued); + break; + } + } + + raw_spin_unlock_irqrestore(&krcp->lock, flags); + return queued; +} + +/* + * This function is invoked after the KFREE_DRAIN_JIFFIES timeout. + */ +static void kfree_rcu_monitor(struct work_struct *work) +{ + struct kfree_rcu_cpu *krcp = container_of(work, + struct kfree_rcu_cpu, monitor_work.work); + + // Drain ready for reclaim. + kvfree_rcu_drain_ready(krcp); + + // Queue a batch for a rest. + kvfree_rcu_queue_batch(krcp); + + // If there is nothing to detach, it means that our job is + // successfully done here. In case of having at least one + // of the channels that is still busy we should rearm the + // work to repeat an attempt. Because previous batches are + // still in progress. + if (need_offload_krc(krcp)) + schedule_delayed_monitor_work(krcp); +} + +static void fill_page_cache_func(struct work_struct *work) +{ + struct kvfree_rcu_bulk_data *bnode; + struct kfree_rcu_cpu *krcp = + container_of(work, struct kfree_rcu_cpu, + page_cache_work.work); + unsigned long flags; + int nr_pages; + bool pushed; + int i; + + nr_pages = atomic_read(&krcp->backoff_page_cache_fill) ? + 1 : rcu_min_cached_objs; + + for (i = READ_ONCE(krcp->nr_bkv_objs); i < nr_pages; i++) { + bnode = (struct kvfree_rcu_bulk_data *) + __get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN); + + if (!bnode) + break; + + raw_spin_lock_irqsave(&krcp->lock, flags); + pushed = put_cached_bnode(krcp, bnode); + raw_spin_unlock_irqrestore(&krcp->lock, flags); + + if (!pushed) { + free_page((unsigned long) bnode); + break; + } + } + + atomic_set(&krcp->work_in_progress, 0); + atomic_set(&krcp->backoff_page_cache_fill, 0); +} + +// Record ptr in a page managed by krcp, with the pre-krc_this_cpu_lock() +// state specified by flags. If can_alloc is true, the caller must +// be schedulable and not be holding any locks or mutexes that might be +// acquired by the memory allocator or anything that it might invoke. +// Returns true if ptr was successfully recorded, else the caller must +// use a fallback. 
+static inline bool +add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp, + unsigned long *flags, void *ptr, bool can_alloc) +{ + struct kvfree_rcu_bulk_data *bnode; + int idx; + + *krcp = krc_this_cpu_lock(flags); + if (unlikely(!(*krcp)->initialized)) + return false; + + idx = !!is_vmalloc_addr(ptr); + bnode = list_first_entry_or_null(&(*krcp)->bulk_head[idx], + struct kvfree_rcu_bulk_data, list); + + /* Check if a new block is required. */ + if (!bnode || bnode->nr_records == KVFREE_BULK_MAX_ENTR) { + bnode = get_cached_bnode(*krcp); + if (!bnode && can_alloc) { + krc_this_cpu_unlock(*krcp, *flags); + + // __GFP_NORETRY - allows a light-weight direct reclaim + // what is OK from minimizing of fallback hitting point of + // view. Apart of that it forbids any OOM invoking what is + // also beneficial since we are about to release memory soon. + // + // __GFP_NOMEMALLOC - prevents from consuming of all the + // memory reserves. Please note we have a fallback path. + // + // __GFP_NOWARN - it is supposed that an allocation can + // be failed under low memory or high memory pressure + // scenarios. + bnode = (struct kvfree_rcu_bulk_data *) + __get_free_page(GFP_KERNEL | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN); + raw_spin_lock_irqsave(&(*krcp)->lock, *flags); + } + + if (!bnode) + return false; + + // Initialize the new block and attach it. + bnode->nr_records = 0; + list_add(&bnode->list, &(*krcp)->bulk_head[idx]); + } + + // Finally insert and update the GP for this page. + bnode->nr_records++; + bnode->records[bnode->nr_records - 1] = ptr; + get_state_synchronize_rcu_full(&bnode->gp_snap); + atomic_inc(&(*krcp)->bulk_count[idx]); + + return true; +} + +#if !defined(CONFIG_TINY_RCU) + +static enum hrtimer_restart +schedule_page_work_fn(struct hrtimer *t) +{ + struct kfree_rcu_cpu *krcp = + container_of(t, struct kfree_rcu_cpu, hrtimer); + + queue_delayed_work(system_highpri_wq, &krcp->page_cache_work, 0); + return HRTIMER_NORESTART; +} + +static void +run_page_cache_worker(struct kfree_rcu_cpu *krcp) +{ + // If cache disabled, bail out. + if (!rcu_min_cached_objs) + return; + + if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING && + !atomic_xchg(&krcp->work_in_progress, 1)) { + if (atomic_read(&krcp->backoff_page_cache_fill)) { + queue_delayed_work(system_unbound_wq, + &krcp->page_cache_work, + msecs_to_jiffies(rcu_delay_page_cache_fill_msec)); + } else { + hrtimer_init(&krcp->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); + krcp->hrtimer.function = schedule_page_work_fn; + hrtimer_start(&krcp->hrtimer, 0, HRTIMER_MODE_REL); + } + } +} + +void __init kfree_rcu_scheduler_running(void) +{ + int cpu; + + for_each_possible_cpu(cpu) { + struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); + + if (need_offload_krc(krcp)) + schedule_delayed_monitor_work(krcp); + } +} + +/* + * Queue a request for lazy invocation of the appropriate free routine + * after a grace period. Please note that three paths are maintained, + * two for the common case using arrays of pointers and a third one that + * is used only when the main paths cannot be used, for example, due to + * memory pressure. + * + * Each kvfree_call_rcu() request is added to a batch. The batch will be drained + * every KFREE_DRAIN_JIFFIES number of jiffies. All the objects in the batch will + * be free'd in workqueue context. This allows us to: batch requests together to + * reduce the number of grace periods during heavy kfree_rcu()/kvfree_rcu() load. 
+ */ +void kvfree_call_rcu(struct rcu_head *head, void *ptr) +{ + unsigned long flags; + struct kfree_rcu_cpu *krcp; + bool success; + + /* + * Please note there is a limitation for the head-less + * variant, that is why there is a clear rule for such + * objects: it can be used from might_sleep() context + * only. For other places please embed an rcu_head to + * your data. + */ + if (!head) + might_sleep(); + + // Queue the object but don't yet schedule the batch. + if (debug_rcu_head_queue(ptr)) { + // Probable double kfree_rcu(), just leak. + WARN_ONCE(1, "%s(): Double-freed call. rcu_head %p\n", + __func__, head); + + // Mark as success and leave. + return; + } + + kasan_record_aux_stack_noalloc(ptr); + success = add_ptr_to_bulk_krc_lock(&krcp, &flags, ptr, !head); + if (!success) { + run_page_cache_worker(krcp); + + if (head == NULL) + // Inline if kvfree_rcu(one_arg) call. + goto unlock_return; + + head->func = ptr; + head->next = krcp->head; + WRITE_ONCE(krcp->head, head); + atomic_inc(&krcp->head_count); + + // Take a snapshot for this krcp. + krcp->head_gp_snap = get_state_synchronize_rcu(); + success = true; + } + + /* + * The kvfree_rcu() caller considers the pointer freed at this point + * and likely removes any references to it. Since the actual slab + * freeing (and kmemleak_free()) is deferred, tell kmemleak to ignore + * this object (no scanning or false positives reporting). + */ + kmemleak_ignore(ptr); + + // Set timer to drain after KFREE_DRAIN_JIFFIES. + if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING) + __schedule_delayed_monitor_work(krcp); + +unlock_return: + krc_this_cpu_unlock(krcp, flags); + + /* + * Inline kvfree() after synchronize_rcu(). We can do + * it from might_sleep() context only, so the current + * CPU can pass the QS state. + */ + if (!success) { + debug_rcu_head_unqueue((struct rcu_head *) ptr); + synchronize_rcu(); + kvfree(ptr); + } +} +EXPORT_SYMBOL_GPL(kvfree_call_rcu); + +/** + * kvfree_rcu_barrier - Wait until all in-flight kvfree_rcu() complete. + * + * Note that a single argument of kvfree_rcu() call has a slow path that + * triggers synchronize_rcu() following by freeing a pointer. It is done + * before the return from the function. Therefore for any single-argument + * call that will result in a kfree() to a cache that is to be destroyed + * during module exit, it is developer's responsibility to ensure that all + * such calls have returned before the call to kmem_cache_destroy(). + */ +void kvfree_rcu_barrier(void) +{ + struct kfree_rcu_cpu_work *krwp; + struct kfree_rcu_cpu *krcp; + bool queued; + int i, cpu; + + /* + * Firstly we detach objects and queue them over an RCU-batch + * for all CPUs. Finally queued works are flushed for each CPU. + * + * Please note. If there are outstanding batches for a particular + * CPU, those have to be finished first following by queuing a new. + */ + for_each_possible_cpu(cpu) { + krcp = per_cpu_ptr(&krc, cpu); + + /* + * Check if this CPU has any objects which have been queued for a + * new GP completion. If not(means nothing to detach), we are done + * with it. If any batch is pending/running for this "krcp", below + * per-cpu flush_rcu_work() waits its completion(see last step). + */ + if (!need_offload_krc(krcp)) + continue; + + while (1) { + /* + * If we are not able to queue a new RCU work it means: + * - batches for this CPU are still in flight which should + * be flushed first and then repeat; + * - no objects to detach, because of concurrency. 
+ */ + queued = kvfree_rcu_queue_batch(krcp); + + /* + * Bail out, if there is no need to offload this "krcp" + * anymore. As noted earlier it can run concurrently. + */ + if (queued || !need_offload_krc(krcp)) + break; + + /* There are ongoing batches. */ + for (i = 0; i < KFREE_N_BATCHES; i++) { + krwp = &(krcp->krw_arr[i]); + flush_rcu_work(&krwp->rcu_work); + } + } + } + + /* + * Now we guarantee that all objects are flushed. + */ + for_each_possible_cpu(cpu) { + krcp = per_cpu_ptr(&krc, cpu); + + /* + * A monitor work can drain ready to reclaim objects + * directly. Wait its completion if running or pending. + */ + cancel_delayed_work_sync(&krcp->monitor_work); + + for (i = 0; i < KFREE_N_BATCHES; i++) { + krwp = &(krcp->krw_arr[i]); + flush_rcu_work(&krwp->rcu_work); + } + } +} +EXPORT_SYMBOL_GPL(kvfree_rcu_barrier); + +#endif /* #if !defined(CONFIG_TINY_RCU) */ + +static unsigned long +kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc) +{ + int cpu; + unsigned long count = 0; + + /* Snapshot count of all CPUs */ + for_each_possible_cpu(cpu) { + struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); + + count += krc_count(krcp); + count += READ_ONCE(krcp->nr_bkv_objs); + atomic_set(&krcp->backoff_page_cache_fill, 1); + } + + return count == 0 ? SHRINK_EMPTY : count; +} + +static unsigned long +kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) +{ + int cpu, freed = 0; + + for_each_possible_cpu(cpu) { + int count; + struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); + + count = krc_count(krcp); + count += drain_page_cache(krcp); + kfree_rcu_monitor(&krcp->monitor_work.work); + + sc->nr_to_scan -= count; + freed += count; + + if (sc->nr_to_scan <= 0) + break; + } + + return freed == 0 ? SHRINK_STOP : freed; +} + +void __init kvfree_rcu_init(void) +{ + int cpu; + int i, j; + struct shrinker *kfree_rcu_shrinker; + + /* Clamp it to [0:100] seconds interval. */ + if (rcu_delay_page_cache_fill_msec < 0 || + rcu_delay_page_cache_fill_msec > 100 * MSEC_PER_SEC) { + + rcu_delay_page_cache_fill_msec = + clamp(rcu_delay_page_cache_fill_msec, 0, + (int) (100 * MSEC_PER_SEC)); + + pr_info("Adjusting rcutree.rcu_delay_page_cache_fill_msec to %d ms.\n", + rcu_delay_page_cache_fill_msec); + } + + for_each_possible_cpu(cpu) { + struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); + + for (i = 0; i < KFREE_N_BATCHES; i++) { + INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work); + krcp->krw_arr[i].krcp = krcp; + + for (j = 0; j < FREE_N_CHANNELS; j++) + INIT_LIST_HEAD(&krcp->krw_arr[i].bulk_head_free[j]); + } + + for (i = 0; i < FREE_N_CHANNELS; i++) + INIT_LIST_HEAD(&krcp->bulk_head[i]); + + INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor); + INIT_DELAYED_WORK(&krcp->page_cache_work, fill_page_cache_func); + krcp->initialized = true; + } + + kfree_rcu_shrinker = shrinker_alloc(0, "slab-kvfree-rcu"); + if (!kfree_rcu_shrinker) { + pr_err("Failed to allocate kfree_rcu() shrinker!\n"); + return; + } + + kfree_rcu_shrinker->count_objects = kfree_rcu_shrink_count; + kfree_rcu_shrinker->scan_objects = kfree_rcu_shrink_scan; + + shrinker_register(kfree_rcu_shrinker); +}
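For reference, a short caller-side sketch of the paths the moved code maintains (illustrative only, not part of the patch; "struct foo" and "foo_release_examples" are made-up names, while kvfree_rcu(), kvfree_rcu_mightsleep() and kvfree_rcu_barrier() are the existing public entry points that feed into kvfree_call_rcu() above):

#include <linux/rcupdate.h>	/* kvfree_rcu(), kvfree_rcu_mightsleep() */
#include <linux/slab.h>		/* kvfree(), kmem_cache helpers */

/* Hypothetical object; only the embedded rcu_head matters here. */
struct foo {
	int a;
	struct rcu_head rcu;	/* needed for the double-argument form */
};

static void foo_release_examples(struct foo *p, struct foo *q)
{
	/*
	 * Double-argument form: callable from atomic context. The pointer
	 * is recorded on a per-CPU bulk page (channel 1 for slab, channel
	 * 2 for vmalloc) or, if no page can be obtained, queued through
	 * p->rcu on the "channel 3" linked list.
	 */
	kvfree_rcu(p, rcu);

	/*
	 * Single-argument ("headless") form: sleepable context only,
	 * because under memory pressure it falls back to
	 * synchronize_rcu() followed by an inline kvfree().
	 */
	kvfree_rcu_mightsleep(q);
}

A module issuing such calls against its own kmem_cache can invoke kvfree_rcu_barrier() before kmem_cache_destroy() on its exit path, so that all in-flight deferred frees have completed; as the kvfree_rcu_barrier() comment above notes, the single-argument slow path already completes before kvfree_call_rcu() returns.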