From patchwork Mon Sep 16 05:08:06 2024
X-Patchwork-Submitter: Neeraj Upadhyay
X-Patchwork-Id: 13805008
From: Neeraj Upadhyay
Subject: [RFC 1/6] percpu-refcount: Add managed mode for RCU released objects
Date: Mon, 16 Sep 2024 10:38:06 +0530
Message-ID: <20240916050811.473556-2-Neeraj.Upadhyay@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
References: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
MIME-Version: 1.0
Add a new "managed mode" to percpu refcounts, to track the initial
reference drop for refs whose objects are reclaimed after an RCU grace
period. The typical usage pattern for such refs is:

	// Called with elevated refcount
	get()
		p = get_ptr();
		kref_get(&p->count);
		return p;

	get()
		rcu_read_lock();
		p = get_ptr();
		if (p && !kref_get_unless_zero(&p->count))
			p = NULL;
		rcu_read_unlock();
		return p;

	release()
		remove_ptr(p);
		call_rcu(&p->rcu, freep);

	release()
		remove_ptr(p);
		kfree_rcu(p, rcu);

Currently, percpu ref requires users to call percpu_ref_kill() when
object usage enters its shutdown phase. After the kill operation, ref
increments and decrements are performed on an atomic counter. For cases
where the ref is still actively acquired and released after
percpu_ref_kill(), percpu ref does not provide any performance benefit
over a plain atomic reference counter.

Managed mode offloads tracking of the ref kill to a manager thread, so
users are not required to call percpu_ref_kill() explicitly. This avoids
the suboptimal performance of a percpu ref that is actively acquired and
released after the percpu_ref_kill() operation.

A percpu ref can be initialized as managed either during
percpu_ref_init(), by passing the PERCPU_REF_REL_MANAGED flag, or, for a
reinitable ref, by switching it to managed mode with
percpu_ref_switch_to_managed() after its initialization. A deferred
switch to managed mode can be used for cases like module initialization
errors, where an initialized percpu ref's initial reference is dropped
before the object becomes active and is referenced by other contexts.
One such case is AppArmor labels which are not yet associated with a
namespace. These labels are freed without waiting for an RCU grace
period, so managed mode cannot be used for them until their
initialization has completed.
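For concreteness, below is a minimal sketch (illustration only, not part
of this patch) of how a user of managed mode might look. It relies only
on the PERCPU_REF_REL_MANAGED flag added here and on existing percpu_ref
calls; struct my_obj, my_obj_alloc(), my_obj_get(), my_obj_put() and
my_obj_release() are hypothetical names:

	/*
	 * Minimal sketch of a user of managed mode (hypothetical names).
	 */
	#include <linux/percpu-refcount.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct my_obj {
		struct percpu_ref ref;
		struct rcu_head rcu;
	};

	/* Release callback invoked once the last reference is dropped. */
	static void my_obj_release(struct percpu_ref *ref)
	{
		struct my_obj *obj = container_of(ref, struct my_obj, ref);

		/* Managed refs require the object to be RCU-freed. */
		kfree_rcu(obj, rcu);
	}

	static struct my_obj *my_obj_alloc(gfp_t gfp)
	{
		struct my_obj *obj = kzalloc(sizeof(*obj), gfp);

		if (!obj)
			return NULL;
		/*
		 * Managed ref: no percpu_ref_kill() is needed later; the
		 * manager worker detects when all references, including
		 * the initial one, have been dropped and the release
		 * callback runs.
		 */
		if (percpu_ref_init(&obj->ref, my_obj_release,
				    PERCPU_REF_REL_MANAGED, gfp)) {
			kfree(obj);
			return NULL;
		}
		return obj;
	}

	/* Reader: look up the object under RCU and take a reference. */
	static struct my_obj *my_obj_get(struct my_obj *obj)
	{
		rcu_read_lock();
		if (obj && !percpu_ref_tryget(&obj->ref))
			obj = NULL;
		rcu_read_unlock();
		return obj;
	}

	static void my_obj_put(struct my_obj *obj)
	{
		percpu_ref_put(&obj->ref);
	}

For the deferred case described above, the ref could instead be
initialized without PERCPU_REF_REL_MANAGED (but with
PERCPU_REF_ALLOW_REINIT) and later moved to managed mode with
percpu_ref_switch_to_managed(&obj->ref) once it is known that the object
will be freed via an RCU grace period.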
Following are the allowed initialization modes for managed ref:

                 Atomic   Percpu   Dead   Reinit   Managed
   Managed-ref     Y        N       Y       Y        Y

Following are the allowed transitions for managed ref:

   To -->    A   P   P(RI)   M   D   D(RI)   D(RI/M)   KLL   REI   RES
   A         y   n     y     y   n     y        y       y     y     y
   P         n   n     n     n   y     n        n       y     n     n
   M         n   n     n     y   n     n        y       n     y     y
   P(RI)     y   n     y     y   n     y        y       y     y     y
   D(RI)     y   n     y     y   n     y        y       -     y     y
   D(RI/M)   n   n     n     y   n     n        y       -     y     y

Modes:
   A       - Atomic
   P       - PerCPU
   M       - Managed
   P(RI)   - PerCPU with ReInit
   D(RI)   - Dead with ReInit
   D(RI/M) - Dead with ReInit and Managed

PerCPU Ref Ops:
   KLL - Kill
   REI - Reinit
   RES - Resurrect

Once a percpu ref is switched to managed mode, it cannot be switched to
any other active mode. On reinit/resurrect, managed ref is reinitialized
in managed mode.

Signed-off-by: Neeraj Upadhyay
---
 .../admin-guide/kernel-parameters.txt |  12 +
 include/linux/percpu-refcount.h       |  13 +
 lib/percpu-refcount.c                 | 358 +++++++++++++++++-
 3 files changed, 364 insertions(+), 19 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 09126bb8cc9f..0f02a1b04fe9 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4665,6 +4665,18 @@
 			allocator. This parameter is primarily for debugging
 			and performance comparison.
 
+	percpu_refcount.max_scan_count= [KNL]
+			Specifies the maximum number of percpu ref nodes which
+			are processed in one run of percpu ref manager thread.
+
+			Default: 100
+
+	percpu_refcount.scan_interval= [KNL]
+			Specifies the duration (ms) between two runs of manager
+			thread.
+
+			Default: 5000 ms
+
 	pirq=		[SMP,APIC] Manual mp-table setup
 			See Documentation/arch/x86/i386/IO-APIC.rst.
 
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index d73a1c08c3e3..e6aea81b3d01 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -68,6 +68,11 @@ enum {
 	__PERCPU_REF_FLAG_BITS	= 2,
 };
 
+/* Auxiliary flags */
+enum {
+	__PERCPU_REL_MANAGED	= 1LU << 0,	/* operating in managed mode */
+};
+
 /* @flags for percpu_ref_init() */
 enum {
 	/*
@@ -90,6 +95,10 @@ enum {
 	 * Allow switching from atomic mode to percpu mode.
 	 */
 	PERCPU_REF_ALLOW_REINIT	= 1 << 2,
+	/*
+	 * Manage release of the percpu ref.
+ */ + PERCPU_REF_REL_MANAGED = 1 << 3, }; struct percpu_ref_data { @@ -100,6 +109,9 @@ struct percpu_ref_data { bool allow_reinit:1; struct rcu_head rcu; struct percpu_ref *ref; + unsigned int aux_flags; + struct llist_node node; + }; struct percpu_ref { @@ -126,6 +138,7 @@ void percpu_ref_switch_to_atomic(struct percpu_ref *ref, percpu_ref_func_t *confirm_switch); void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref); void percpu_ref_switch_to_percpu(struct percpu_ref *ref); +int percpu_ref_switch_to_managed(struct percpu_ref *ref); void percpu_ref_kill_and_confirm(struct percpu_ref *ref, percpu_ref_func_t *confirm_kill); void percpu_ref_resurrect(struct percpu_ref *ref); diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c index 668f6aa6a75d..7b97f9728c5b 100644 --- a/lib/percpu-refcount.c +++ b/lib/percpu-refcount.c @@ -5,6 +5,9 @@ #include #include #include +#include +#include +#include #include #include @@ -38,6 +41,7 @@ static DEFINE_SPINLOCK(percpu_ref_switch_lock); static DECLARE_WAIT_QUEUE_HEAD(percpu_ref_switch_waitq); +static LLIST_HEAD(percpu_ref_manage_head); static unsigned long __percpu *percpu_count_ptr(struct percpu_ref *ref) { @@ -45,6 +49,8 @@ static unsigned long __percpu *percpu_count_ptr(struct percpu_ref *ref) (ref->percpu_count_ptr & ~__PERCPU_REF_ATOMIC_DEAD); } +int percpu_ref_switch_to_managed(struct percpu_ref *ref); + /** * percpu_ref_init - initialize a percpu refcount * @ref: percpu_ref to initialize @@ -80,6 +86,9 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release, return -ENOMEM; } + if (flags & PERCPU_REF_REL_MANAGED) + flags |= PERCPU_REF_ALLOW_REINIT; + data->force_atomic = flags & PERCPU_REF_INIT_ATOMIC; data->allow_reinit = flags & PERCPU_REF_ALLOW_REINIT; @@ -101,10 +110,73 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release, data->confirm_switch = NULL; data->ref = ref; ref->data = data; + init_llist_node(&data->node); + + if (flags & PERCPU_REF_REL_MANAGED) + percpu_ref_switch_to_managed(ref); + return 0; } EXPORT_SYMBOL_GPL(percpu_ref_init); +static bool percpu_ref_is_managed(struct percpu_ref *ref) +{ + return (ref->data->aux_flags & __PERCPU_REL_MANAGED) != 0; +} + +static void __percpu_ref_switch_mode(struct percpu_ref *ref, + percpu_ref_func_t *confirm_switch); + +static int __percpu_ref_switch_to_managed(struct percpu_ref *ref) +{ + unsigned long __percpu *percpu_count; + struct percpu_ref_data *data; + int ret = -1; + + data = ref->data; + + if (WARN_ONCE(!percpu_ref_tryget(ref), "Percpu ref is not active")) + return ret; + + if (WARN_ONCE(!data->allow_reinit, "Percpu ref does not allow switch")) + goto err_switch_managed; + + if (WARN_ONCE(percpu_ref_is_managed(ref), "Percpu ref is already managed")) + goto err_switch_managed; + + data->aux_flags |= __PERCPU_REL_MANAGED; + data->force_atomic = false; + if (!__ref_is_percpu(ref, &percpu_count)) + __percpu_ref_switch_mode(ref, NULL); + /* Ensure ordering of percpu mode switch and node scan */ + smp_mb(); + llist_add(&data->node, &percpu_ref_manage_head); + + return 0; + +err_switch_managed: + percpu_ref_put(ref); + return ret; +} + +/** + * percpu_ref_switch_to_managed - Switch an unmanaged ref to percpu mode. 
+ * + * @ref: percpu_ref to switch to managed mode + * + */ +int percpu_ref_switch_to_managed(struct percpu_ref *ref) +{ + unsigned long flags; + int ret; + + spin_lock_irqsave(&percpu_ref_switch_lock, flags); + ret = __percpu_ref_switch_to_managed(ref); + spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); + return ret; +} +EXPORT_SYMBOL_GPL(percpu_ref_switch_to_managed); + static void __percpu_ref_exit(struct percpu_ref *ref) { unsigned long __percpu *percpu_count = percpu_count_ptr(ref); @@ -283,6 +355,27 @@ static void __percpu_ref_switch_mode(struct percpu_ref *ref, __percpu_ref_switch_to_percpu(ref); } +static bool __percpu_ref_switch_to_atomic_checked(struct percpu_ref *ref, + percpu_ref_func_t *confirm_switch, + bool check_managed) +{ + unsigned long flags; + + spin_lock_irqsave(&percpu_ref_switch_lock, flags); + if (check_managed && WARN_ONCE(percpu_ref_is_managed(ref), + "Percpu ref is managed, cannot switch to atomic mode")) { + spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); + return false; + } + + ref->data->force_atomic = true; + __percpu_ref_switch_mode(ref, confirm_switch); + + spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); + + return true; +} + /** * percpu_ref_switch_to_atomic - switch a percpu_ref to atomic mode * @ref: percpu_ref to switch to atomic mode @@ -306,17 +399,16 @@ static void __percpu_ref_switch_mode(struct percpu_ref *ref, void percpu_ref_switch_to_atomic(struct percpu_ref *ref, percpu_ref_func_t *confirm_switch) { - unsigned long flags; - - spin_lock_irqsave(&percpu_ref_switch_lock, flags); - - ref->data->force_atomic = true; - __percpu_ref_switch_mode(ref, confirm_switch); - - spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); + (void)__percpu_ref_switch_to_atomic_checked(ref, confirm_switch, true); } EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic); +static void __percpu_ref_switch_to_atomic_sync_checked(struct percpu_ref *ref, bool check_managed) +{ + if (!__percpu_ref_switch_to_atomic_checked(ref, NULL, check_managed)) + return; + wait_event(percpu_ref_switch_waitq, !ref->data->confirm_switch); +} /** * percpu_ref_switch_to_atomic_sync - switch a percpu_ref to atomic mode * @ref: percpu_ref to switch to atomic mode @@ -327,11 +419,28 @@ EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic); */ void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref) { - percpu_ref_switch_to_atomic(ref, NULL); - wait_event(percpu_ref_switch_waitq, !ref->data->confirm_switch); + __percpu_ref_switch_to_atomic_sync_checked(ref, true); } EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic_sync); +static void __percpu_ref_switch_to_percpu_checked(struct percpu_ref *ref, bool check_managed) +{ + unsigned long flags; + + spin_lock_irqsave(&percpu_ref_switch_lock, flags); + + if (check_managed && WARN_ONCE(percpu_ref_is_managed(ref), + "Percpu ref is managed, cannot switch to percpu mode")) { + spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); + return; + } + + ref->data->force_atomic = false; + __percpu_ref_switch_mode(ref, NULL); + + spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); +} + /** * percpu_ref_switch_to_percpu - switch a percpu_ref to percpu mode * @ref: percpu_ref to switch to percpu mode @@ -352,14 +461,7 @@ EXPORT_SYMBOL_GPL(percpu_ref_switch_to_atomic_sync); */ void percpu_ref_switch_to_percpu(struct percpu_ref *ref) { - unsigned long flags; - - spin_lock_irqsave(&percpu_ref_switch_lock, flags); - - ref->data->force_atomic = false; - __percpu_ref_switch_mode(ref, NULL); - - spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); + 
__percpu_ref_switch_to_percpu_checked(ref, true); } EXPORT_SYMBOL_GPL(percpu_ref_switch_to_percpu); @@ -472,8 +574,226 @@ void percpu_ref_resurrect(struct percpu_ref *ref) ref->percpu_count_ptr &= ~__PERCPU_REF_DEAD; percpu_ref_get(ref); - __percpu_ref_switch_mode(ref, NULL); + if (percpu_ref_is_managed(ref)) { + ref->data->aux_flags &= ~__PERCPU_REL_MANAGED; + __percpu_ref_switch_to_managed(ref); + } else { + __percpu_ref_switch_mode(ref, NULL); + } spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); } EXPORT_SYMBOL_GPL(percpu_ref_resurrect); + +#define DEFAULT_SCAN_INTERVAL_MS 5000 +/* Interval duration between two ref scans. */ +static ulong scan_interval = DEFAULT_SCAN_INTERVAL_MS; +module_param(scan_interval, ulong, 0444); + +#define DEFAULT_MAX_SCAN_COUNT 100 +/* Number of percpu refs scanned in one iteration of worker execution. */ +static int max_scan_count = DEFAULT_MAX_SCAN_COUNT; +module_param(max_scan_count, int, 0444); + +static void percpu_ref_release_work_fn(struct work_struct *work); + +/* + * Sentinel llist nodes for lockless list traveral and deletions by + * the pcpu ref release worker, while nodes are added from + * percpu_ref_init() and percpu_ref_switch_to_managed(). + * + * Sentinel node marks the head of list traversal for the current + * iteration of kworker execution. + */ +struct percpu_ref_sen_node { + bool inuse; + struct llist_node node; +}; + +/* + * We need two sentinel nodes for lockless list manipulations from release + * worker - first node will be used in current reclaim iteration. The second + * node will be used in next iteration. Next iteration marks the first node + * as free, for use in subsequent iteration. + */ +#define PERCPU_REF_SEN_NODES_COUNT 2 + +/* Track last processed percpu ref node */ +static struct llist_node *last_percpu_ref_node; + +static struct percpu_ref_sen_node + percpu_ref_sen_nodes[PERCPU_REF_SEN_NODES_COUNT]; + +static DECLARE_DELAYED_WORK(percpu_ref_release_work, percpu_ref_release_work_fn); + +static bool percpu_ref_is_sen_node(struct llist_node *node) +{ + return &percpu_ref_sen_nodes[0].node <= node && + node <= &percpu_ref_sen_nodes[PERCPU_REF_SEN_NODES_COUNT - 1].node; +} + +static struct llist_node *percpu_ref_get_sen_node(void) +{ + int i; + struct percpu_ref_sen_node *sn; + + for (i = 0; i < PERCPU_REF_SEN_NODES_COUNT; i++) { + sn = &percpu_ref_sen_nodes[i]; + if (!sn->inuse) { + sn->inuse = true; + return &sn->node; + } + } + + return NULL; +} + +static void percpu_ref_put_sen_node(struct llist_node *node) +{ + struct percpu_ref_sen_node *sn = container_of(node, struct percpu_ref_sen_node, node); + + sn->inuse = false; + init_llist_node(node); +} + +static void percpu_ref_put_all_sen_nodes_except(struct llist_node *node) +{ + int i; + + for (i = 0; i < PERCPU_REF_SEN_NODES_COUNT; i++) { + if (&percpu_ref_sen_nodes[i].node == node) + continue; + percpu_ref_sen_nodes[i].inuse = false; + init_llist_node(&percpu_ref_sen_nodes[i].node); + } +} + +static struct workqueue_struct *percpu_ref_release_wq; + +static void percpu_ref_release_work_fn(struct work_struct *work) +{ + struct llist_node *pos, *first, *head, *prev, *next; + struct llist_node *sen_node; + struct percpu_ref *ref; + int count = 0; + bool held; + + first = READ_ONCE(percpu_ref_manage_head.first); + if (!first) + goto queue_release_work; + + /* + * Enqueue a dummy node to mark the start of scan. 
This dummy + * node is used as start point of scan and ensures that + * there is no additional synchronization required with new + * label node additions to the llist. Any new labels will + * be processed in next run of the kworker. + * + * SCAN START PTR + * | + * v + * +----------+ +------+ +------+ +------+ + * | | | | | | | | + * | head ------> dummy|--->|label |--->| label|--->NULL + * | | | node | | | | | + * +----------+ +------+ +------+ +------+ + * + * + * New label addition: + * + * SCAN START PTR + * | + * v + * +----------+ +------+ +------+ +------+ +------+ + * | | | | | | | | | | + * | head |--> label|--> dummy|--->|label |--->| label|--->NULL + * | | | | | node | | | | | + * +----------+ +------+ +------+ +------+ +------+ + * + */ + if (last_percpu_ref_node == NULL || last_percpu_ref_node->next == NULL) { +retry_sentinel_get: + sen_node = percpu_ref_get_sen_node(); + /* + * All sentinel nodes are in use? This should not happen, as we + * require only one sentinel for the start of list traversal and + * other sentinel node is freed during the traversal. + */ + if (WARN_ONCE(!sen_node, "All sentinel nodes are in use")) { + /* Use first node as the sentinel node */ + head = first->next; + if (!head) { + struct llist_node *ign_node = NULL; + /* + * We exhausted sentinel nodes. However, there aren't + * enough nodes in the llist. So, we have leaked + * sentinel nodes. Reclaim sentinels and retry. + */ + if (percpu_ref_is_sen_node(first)) + ign_node = first; + percpu_ref_put_all_sen_nodes_except(ign_node); + goto retry_sentinel_get; + } + prev = first; + } else { + llist_add(sen_node, &percpu_ref_manage_head); + prev = sen_node; + head = prev->next; + } + } else { + prev = last_percpu_ref_node; + head = prev->next; + } + + last_percpu_ref_node = NULL; + llist_for_each_safe(pos, next, head) { + /* Free sentinel node which is present in the list */ + if (percpu_ref_is_sen_node(pos)) { + prev->next = pos->next; + percpu_ref_put_sen_node(pos); + continue; + } + + ref = container_of(pos, struct percpu_ref_data, node)->ref; + __percpu_ref_switch_to_atomic_sync_checked(ref, false); + /* + * Drop the ref while in RCU read critical section to + * prevent obj free while we manipulating node. 
+ */ + rcu_read_lock(); + percpu_ref_put(ref); + held = percpu_ref_tryget(ref); + if (!held) { + prev->next = pos->next; + init_llist_node(pos); + ref->percpu_count_ptr |= __PERCPU_REF_DEAD; + } + rcu_read_unlock(); + if (!held) + continue; + __percpu_ref_switch_to_percpu_checked(ref, false); + count++; + if (count == max_scan_count) { + last_percpu_ref_node = pos; + break; + } + prev = pos; + } + +queue_release_work: + queue_delayed_work(percpu_ref_release_wq, &percpu_ref_release_work, + scan_interval); +} + +static __init int percpu_ref_setup(void) +{ + percpu_ref_release_wq = alloc_workqueue("percpu_ref_release_wq", + WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_FREEZABLE, 0); + if (!percpu_ref_release_wq) + return -ENOMEM; + + queue_delayed_work(percpu_ref_release_wq, &percpu_ref_release_work, + scan_interval); + return 0; +} +early_initcall(percpu_ref_setup); From patchwork Mon Sep 16 05:08:07 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neeraj Upadhyay X-Patchwork-Id: 13805009 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F03C8C3ABB2 for ; Mon, 16 Sep 2024 05:09:23 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8C8D66B0093; Mon, 16 Sep 2024 01:09:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8780C6B0095; Mon, 16 Sep 2024 01:09:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6F1D66B0096; Mon, 16 Sep 2024 01:09:23 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 4B4926B0093 for ; Mon, 16 Sep 2024 01:09:23 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id EE2B480108 for ; Mon, 16 Sep 2024 05:09:22 +0000 (UTC) X-FDA: 82569422964.12.19639D8 Received: from NAM12-DM6-obe.outbound.protection.outlook.com (mail-dm6nam12on2061.outbound.protection.outlook.com [40.107.243.61]) by imf05.hostedemail.com (Postfix) with ESMTP id 03A7610000A for ; Mon, 16 Sep 2024 05:09:19 +0000 (UTC) Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=amd.com header.s=selector1 header.b=X7MRNBtn; dmarc=pass (policy=quarantine) header.from=amd.com; spf=pass (imf05.hostedemail.com: domain of Neeraj.Upadhyay@amd.com designates 40.107.243.61 as permitted sender) smtp.mailfrom=Neeraj.Upadhyay@amd.com; arc=pass ("microsoft.com:s=arcselector10001:i=1") ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1726463238; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=UAYTEu/r5ZwyE4gU0EfeKdUTT2JqfoHk4L5udSk6Kng=; b=cQ9vAR6+upWuDaeUDFu/N17VmWbPHd8JpJm25OBRvMkLs00XuRDzMfRJV/aAaH5OXqf/fM BTU8HJNi70UXdt74PIipK8bSyAlsrRtFUgeW/vaS+xtb3UHr601U98Bt7gDWJByvhoNxfA qxPjo0YgyHBFK6q2LHO5LCzS5A15LXM= ARC-Seal: i=2; s=arc-20220608; d=hostedemail.com; t=1726463238; a=rsa-sha256; cv=pass; b=q8xWozxyaD+X9pFISKDPcz4G1AIAuI0xVn1tmLldBUCqsIac11VN3+qn60SusVPgFxZdvI 
From: Neeraj Upadhyay
Subject: [RFC 2/6] percpu-refcount: Add torture test for percpu refcount
Date: Mon, 16 Sep 2024 10:38:07 +0530
Message-ID: <20240916050811.473556-3-Neeraj.Upadhyay@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
References: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
MIME-Version: 1.0
Add a torture test to verify percpu ref managed mode operations. The
test checks that a percpu ref drops to a zero count once all users have
released their references, and that the ref is not released early while
users still hold references to it.

Signed-off-by: Neeraj Upadhyay
---
 .../admin-guide/kernel-parameters.txt |  57 +++
 lib/Kconfig.debug                     |   9 +
 lib/Makefile                          |   1 +
 lib/percpu-refcount-torture.c         | 367 ++++++++++++++++++
 lib/percpu-refcount.c                 |  49 ++-
 lib/percpu-refcount.h                 |   6 +
 6 files changed, 483 insertions(+), 6 deletions(-)
 create mode 100644 lib/percpu-refcount-torture.c
 create mode 100644 lib/percpu-refcount.h

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 0f02a1b04fe9..225f2dac294d 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4677,6 +4677,63 @@
 
 			Default: 5000 ms
 
+	percpu_refcount_torture.busted_early_ref_release= [KNL]
+			Enable testing buggy release of percpu ref while
+			there are active users. Used for testing failure
+			scenarios in the test.
+
+			Default: 0 (disabled)
+
+	percpu_refcount_torture.busted_late_ref_release= [KNL]
+			Enable testing buggy non-zero reference count after
+			all ref users have dropped their reference. Used for
+			testing failure scenarios in the test.
+
+			Default: 0 (disabled)
+
+	percpu_refcount_torture.delay_us= [KNL]
+			Delay (in us) between reference increment and decrement
+			operations of ref users.
+
+			Default: 10
+
+	percpu_refcount_torture.niterations= [KNL]
+			Number of iterations of ref increment and decrement by
+			ref users.
+
+			Default: 100
+
+	percpu_refcount_torture.nrefs= [KNL]
+			Number of percpu ref instances.
+ + Default: 2 + + percpu_refcount_torture.nusers= [KNL] + Number of percpu ref user threads which increment and + decrement a percpu ref. + + Default: 2 + + percpu_refcount_torture.onoff_holdoff= [KNL] + Set time (s) after boot for CPU-hotplug testing. + + Default: 0 + + percpu_refcount_torture.onoff_interval= [KNL] + Set time (jiffies) between CPU-hotplug operations, + or zero to disable CPU-hotplug testing. + + percpu_refcount_torture.stutter= [KNL] + Set wait time (jiffies) between two iterations of + percpu ref operations. + + Default: 0 + + percpu_refcount_torture.verbose= [KNL] + Enable additional printk() statements. + + Default: 0 (Disabled) + pirq= [SMP,APIC] Manual mp-table setup See Documentation/arch/x86/i386/IO-APIC.rst. diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index a30c03a66172..7e0117e01f05 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1611,6 +1611,15 @@ config SCF_TORTURE_TEST module may be built after the fact on the running kernel to be tested, if desired. +config PERCPU_REFCOUNT_TORTURE_TEST + tristate "torture tests for percpu refcount" + select TORTURE_TEST + help + This options provides a kernel module that runs percpu + refcount torture tests for managed percpu refs. The kernel + module may be built after the fact on the running kernel + to be tested, if desired. + config CSD_LOCK_WAIT_DEBUG bool "Debugging for csd_lock_wait(), called from smp_call_function*()" depends on DEBUG_KERNEL diff --git a/lib/Makefile b/lib/Makefile index 322bb127b4dc..d0286f7dfb37 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -50,6 +50,7 @@ obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \ once.o refcount.o rcuref.o usercopy.o errseq.o bucket_locks.o \ generic-radix-tree.o bitmap-str.o obj-$(CONFIG_STRING_KUNIT_TEST) += string_kunit.o +obj-$(CONFIG_PERCPU_REFCOUNT_TORTURE_TEST) += percpu-refcount-torture.o obj-y += string_helpers.o obj-$(CONFIG_STRING_HELPERS_KUNIT_TEST) += string_helpers_kunit.o obj-y += hexdump.o diff --git a/lib/percpu-refcount-torture.c b/lib/percpu-refcount-torture.c new file mode 100644 index 000000000000..686f5a228b40 --- /dev/null +++ b/lib/percpu-refcount-torture.c @@ -0,0 +1,367 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include +#include +#include + +#include "percpu-refcount.h" + +static int busted_early_ref_release; +module_param(busted_early_ref_release, int, 0444); +MODULE_PARM_DESC(busted_early_ref_release, + "Enable busted premature release of ref (default = 0), 0 = disable"); + +static int busted_late_ref_release; +module_param(busted_late_ref_release, int, 0444); +MODULE_PARM_DESC(busted_late_ref_release, + "Enable busted late release of ref (default = 0), 0 = disable"); + +static long delay_us = 10; +module_param(delay_us, long, 0444); +MODULE_PARM_DESC(delay_us, + "delay between reader refcount operations in microseconds (default = 10)"); + +static long nrefs = 2; +module_param(nrefs, long, 0444); +MODULE_PARM_DESC(nrefs, "Number of percpu refs (default = 2)"); + +static long niterations = 100; +module_param(niterations, long, 0444); +MODULE_PARM_DESC(niterations, + "Number of iterations of ref increment and decrement (default = 100)"); + +static long nusers = 2; +module_param(nusers, long, 0444); +MODULE_PARM_DESC(nusers, "Number of refcount users (default = 2)"); + +static int onoff_holdoff; +module_param(onoff_holdoff, int, 0444); +MODULE_PARM_DESC(onoff_holdoff, "Time after boot before CPU hotplugs (seconds)"); + +static int onoff_interval; +module_param(onoff_interval, int, 0444); 
+MODULE_PARM_DESC(onoff_interval, "Time between CPU hotplugs (jiffies), 0=disable"); + +static int stutter; +module_param(stutter, int, 0444); +MODULE_PARM_DESC(stutter, "Stutter period in jiffies (default = 0), 0 = disable"); + +static int verbose = 1; +module_param(verbose, int, 0444); +MODULE_PARM_DESC(verbose, "Enable verbose debugging printk()s"); + +static struct task_struct **ref_user_tasks; +static struct task_struct *ref_manager_task; +static struct task_struct **busted_early_release_tasks; +static struct task_struct **busted_late_release_tasks; + +static struct percpu_ref *refs; +static long *num_per_ref_users; + +static atomic_t running; +static atomic_t *ref_running; + +static char *torture_type = "percpu-refcount"; + +static int percpu_ref_manager_thread(void *data) +{ + int i; + + while (atomic_read(&running) != 0) { + percpu_ref_test_flush_release_work(); + stutter_wait("percpu_ref_manager_thread"); + } + /* Ensure ordering with ref users */ + smp_mb(); + + percpu_ref_test_flush_release_work(); + + for (i = 0; i < nrefs; i++) { + WARN(percpu_ref_test_is_percpu(&refs[i]), + "!!! released ref %d should be in atomic mode", i); + WARN(!percpu_ref_is_zero(&refs[i]), + "!!! released ref %d should have 0 refcount", i); + } + + do { + stutter_wait("percpu_ref_manager_thread"); + } while (!torture_must_stop()); + + torture_kthread_stopping("percpu_ref_manager_thread"); + + return 0; +} + +static int percpu_ref_test_thread(void *data) +{ + struct percpu_ref *ref = (struct percpu_ref *)data; + int i = 0; + + percpu_ref_get(ref); + + do { + percpu_ref_get(ref); + udelay(delay_us); + percpu_ref_put(ref); + stutter_wait("percpu_ref_test_thread"); + i++; + } while (i < niterations); + + atomic_dec(&ref_running[ref - refs]); + /* Order ref release with ref_running[ref_idx] == 0 */ + smp_mb(); + percpu_ref_put(ref); + /* Order ref decrement with running == 0 */ + smp_mb(); + atomic_dec(&running); + + do { + stutter_wait("percpu_ref_test_thread"); + } while (!torture_must_stop()); + + torture_kthread_stopping("percpu_ref_test_thread"); + + return 0; +} + +static int percpu_ref_busted_early_thread(void *data) +{ + struct percpu_ref *ref = (struct percpu_ref *)data; + int ref_idx = ref - refs; + int i = 0, j; + + do { + /* Extra ref put momemtarily */ + for (j = 0; j < num_per_ref_users[ref_idx]; j++) + percpu_ref_put(ref); + stutter_wait("percpu_ref_busted_early_thread"); + for (j = 0; j < num_per_ref_users[ref_idx]; j++) + percpu_ref_get(ref); + i++; + stutter_wait("percpu_ref_busted_early_thread"); + } while (i < niterations * 10); + + do { + stutter_wait("percpu_ref_busted_early_thread"); + } while (!torture_must_stop()); + + torture_kthread_stopping("percpu_ref_busted_early_thread"); + + return 0; +} + +static int percpu_ref_busted_late_thread(void *data) +{ + struct percpu_ref *ref = (struct percpu_ref *)data; + int i = 0; + + do { + /* Extra ref get momemtarily */ + percpu_ref_get(ref); + stutter_wait("percpu_ref_busted_late_thread"); + percpu_ref_put(ref); + i++; + } while (i < niterations); + + do { + stutter_wait("percpu_ref_busted_late_thread"); + } while (!torture_must_stop()); + + torture_kthread_stopping("percpu_ref_busted_late_thread"); + + return 0; +} + +static void percpu_ref_test_cleanup(void) +{ + int i; + + if (torture_cleanup_begin()) + return; + + if (busted_late_release_tasks) { + for (i = 0; i < nrefs; i++) + torture_stop_kthread(busted_late_task, busted_late_release_tasks[i]); + kfree(busted_late_release_tasks); + busted_late_release_tasks = NULL; + } + + if 
(busted_early_release_tasks) { + for (i = 0; i < nrefs; i++) + torture_stop_kthread(busted_early_task, busted_early_release_tasks[i]); + kfree(busted_early_release_tasks); + busted_early_release_tasks = NULL; + } + + if (ref_manager_task) { + torture_stop_kthread(ref_manager, ref_manager_task); + ref_manager_task = NULL; + } + + if (ref_user_tasks) { + for (i = 0; i < nusers; i++) + torture_stop_kthread(ref_user, ref_user_tasks[i]); + kfree(ref_user_tasks); + ref_user_tasks = NULL; + } + + kfree(ref_running); + ref_running = NULL; + + kfree(num_per_ref_users); + num_per_ref_users = NULL; + + if (refs) { + for (i = 0; i < nrefs; i++) + percpu_ref_exit(&refs[i]); + kfree(refs); + refs = NULL; + } + + torture_cleanup_end(); +} + +static void percpu_ref_test_release(struct percpu_ref *ref) +{ + WARN(!!atomic_add_return(0, &ref_running[ref-refs]), "!!! Premature ref release"); +} + +static int __init percpu_ref_torture_init(void) +{ + DEFINE_TORTURE_RANDOM(rand); + struct torture_random_state *trsp = &rand; + int flags; + int err; + int ref_idx; + int i; + + if (!torture_init_begin("percpu-refcount", verbose)) + return -EBUSY; + + atomic_set(&running, nusers); + /* Order @running with later increment and decrement operations */ + smp_mb(); + + refs = kcalloc(nrefs, sizeof(refs[0]), GFP_KERNEL); + if (!refs) { + TOROUT_ERRSTRING("out of memory"); + err = -ENOMEM; + goto init_err; + } + for (i = 0; i < nrefs; i++) { + flags = torture_random(trsp) & 1 ? PERCPU_REF_INIT_ATOMIC : PERCPU_REF_REL_MANAGED; + err = percpu_ref_init(&refs[i], percpu_ref_test_release, + flags, GFP_KERNEL); + if (err) + goto init_err; + if (!(flags & PERCPU_REF_REL_MANAGED)) + percpu_ref_switch_to_managed(&refs[i]); + } + + num_per_ref_users = kcalloc(nrefs, sizeof(num_per_ref_users[0]), GFP_KERNEL); + if (!num_per_ref_users) { + TOROUT_ERRSTRING("out of memory"); + err = -ENOMEM; + goto init_err; + } + for (i = 0; i < nrefs; i++) + num_per_ref_users[i] = 0; + + ref_user_tasks = kcalloc(nusers, sizeof(ref_user_tasks[0]), GFP_KERNEL); + if (!ref_user_tasks) { + TOROUT_ERRSTRING("out of memory"); + err = -ENOMEM; + goto init_err; + } + + ref_running = kcalloc(nrefs, sizeof(ref_running[0]), GFP_KERNEL); + if (!ref_running) { + TOROUT_ERRSTRING("out of memory"); + err = -ENOMEM; + goto init_err; + } + + for (i = 0; i < nusers; i++) { + ref_idx = torture_random(trsp) % nrefs; + atomic_inc(&ref_running[ref_idx]); + num_per_ref_users[ref_idx]++; + /* Order increments with subquent reads */ + smp_mb(); + err = torture_create_kthread(percpu_ref_test_thread, + &refs[ref_idx], ref_user_tasks[i]); + if (torture_init_error(err)) + goto init_err; + } + + err = torture_create_kthread(percpu_ref_manager_thread, NULL, ref_manager_task); + if (torture_init_error(err)) + goto init_err; + + /* Drop initial reference, after test threads have started running */ + udelay(1); + for (i = 0; i < nrefs; i++) + percpu_ref_put(&refs[i]); + + + if (busted_early_ref_release) { + busted_early_release_tasks = kcalloc(nrefs, + sizeof(busted_early_release_tasks[0]), + GFP_KERNEL); + if (!busted_early_release_tasks) { + TOROUT_ERRSTRING("out of memory"); + err = -ENOMEM; + goto init_err; + } + for (i = 0; i < nrefs; i++) { + err = torture_create_kthread(percpu_ref_busted_early_thread, + &refs[i], busted_early_release_tasks[i]); + if (torture_init_error(err)) + goto init_err; + } + } + + if (busted_late_ref_release) { + busted_late_release_tasks = kcalloc(nrefs, sizeof(busted_late_release_tasks[0]), + GFP_KERNEL); + if (!busted_late_release_tasks) { + 
TOROUT_ERRSTRING("out of memory"); + err = -ENOMEM; + goto init_err; + } + for (i = 0; i < nrefs; i++) { + err = torture_create_kthread(percpu_ref_busted_late_thread, + &refs[i], busted_late_release_tasks[i]); + if (torture_init_error(err)) + goto init_err; + } + } + if (stutter) { + err = torture_stutter_init(stutter, stutter); + if (torture_init_error(err)) + goto init_err; + } + + err = torture_onoff_init(onoff_holdoff * HZ, onoff_interval, NULL); + if (torture_init_error(err)) + goto init_err; + + torture_init_end(); + return 0; +init_err: + torture_init_end(); + percpu_ref_test_cleanup(); + return err; +} + +static void __exit percpu_ref_torture_exit(void) +{ + percpu_ref_test_cleanup(); +} + +module_init(percpu_ref_torture_init); +module_exit(percpu_ref_torture_exit); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("percpu refcount torture test"); diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c index 7b97f9728c5b..7d0c85c7ce57 100644 --- a/lib/percpu-refcount.c +++ b/lib/percpu-refcount.c @@ -11,6 +11,8 @@ #include #include +#include "percpu-refcount.h" + /* * Initially, a percpu refcount is just a set of percpu counters. Initially, we * don't try to detect the ref hitting 0 - which means that get/put can just @@ -677,6 +679,7 @@ static void percpu_ref_release_work_fn(struct work_struct *work) struct percpu_ref *ref; int count = 0; bool held; + struct llist_node *last_node = READ_ONCE(last_percpu_ref_node); first = READ_ONCE(percpu_ref_manage_head.first); if (!first) @@ -711,7 +714,7 @@ static void percpu_ref_release_work_fn(struct work_struct *work) * +----------+ +------+ +------+ +------+ +------+ * */ - if (last_percpu_ref_node == NULL || last_percpu_ref_node->next == NULL) { + if (last_node == NULL || last_node->next == NULL) { retry_sentinel_get: sen_node = percpu_ref_get_sen_node(); /* @@ -741,11 +744,10 @@ static void percpu_ref_release_work_fn(struct work_struct *work) head = prev->next; } } else { - prev = last_percpu_ref_node; + prev = last_node; head = prev->next; } - last_percpu_ref_node = NULL; llist_for_each_safe(pos, next, head) { /* Free sentinel node which is present in the list */ if (percpu_ref_is_sen_node(pos)) { @@ -773,18 +775,53 @@ static void percpu_ref_release_work_fn(struct work_struct *work) continue; __percpu_ref_switch_to_percpu_checked(ref, false); count++; - if (count == max_scan_count) { - last_percpu_ref_node = pos; - break; + if (count == READ_ONCE(max_scan_count)) { + WRITE_ONCE(last_percpu_ref_node, pos); + goto queue_release_work; } prev = pos; } + WRITE_ONCE(last_percpu_ref_node, NULL); queue_release_work: queue_delayed_work(percpu_ref_release_wq, &percpu_ref_release_work, scan_interval); } +bool percpu_ref_test_is_percpu(struct percpu_ref *ref) +{ + unsigned long __percpu *percpu_count; + + return __ref_is_percpu(ref, &percpu_count); +} +EXPORT_SYMBOL_GPL(percpu_ref_test_is_percpu); + +void percpu_ref_test_flush_release_work(void) +{ + int max_flush = READ_ONCE(max_scan_count); + int max_count = 1000; + + /* Complete any executing release work */ + flush_delayed_work(&percpu_ref_release_work); + /* Scan till the end of the llist */ + WRITE_ONCE(max_scan_count, INT_MAX); + /* max scan count update visible to release work */ + smp_mb(); + flush_delayed_work(&percpu_ref_release_work); + /* max scan count update visible to release work */ + smp_mb(); + WRITE_ONCE(max_scan_count, 1); + /* max scan count update visible to work */ + smp_mb(); + flush_delayed_work(&percpu_ref_release_work); + while (READ_ONCE(last_percpu_ref_node) != NULL && 
max_count--) + flush_delayed_work(&percpu_ref_release_work); + /* max scan count update visible to work */ + smp_mb(); + WRITE_ONCE(max_scan_count, max_flush); +} +EXPORT_SYMBOL_GPL(percpu_ref_test_flush_release_work); + static __init int percpu_ref_setup(void) { percpu_ref_release_wq = alloc_workqueue("percpu_ref_release_wq", diff --git a/lib/percpu-refcount.h b/lib/percpu-refcount.h new file mode 100644 index 000000000000..be2ac0411194 --- /dev/null +++ b/lib/percpu-refcount.h @@ -0,0 +1,6 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +#ifndef __LIB_REFCOUNT_H +#define __LIB_REFCOUNT_H +bool percpu_ref_test_is_percpu(struct percpu_ref *ref); +void percpu_ref_test_flush_release_work(void); +#endif
From patchwork Mon Sep 16 05:08:08 2024
X-Patchwork-Submitter: Neeraj Upadhyay
X-Patchwork-Id: 13805010
From: Neeraj Upadhyay
Subject: [RFC 3/6] percpu-refcount: Extend managed mode to allow runtime switching
Date: Mon, 16 Sep 2024 10:38:08 +0530
Message-ID: <20240916050811.473556-4-Neeraj.Upadhyay@amd.com>
In-Reply-To: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
References: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>

Provide more flexibility in terms of runtime mode switching for a managed
percpu ref. This can be useful when a managed ref's object enters its
shutdown phase: instead of waiting for the manager thread to process the
ref, the user can directly invoke percpu_ref_kill() for the ref.

The init modes are the same as in the existing code. Runtime mode switching
allows switching a managed ref back to unmanaged mode, which in turn allows
transitions to all reinit modes from managed mode.

  To -->   A    P    P(RI)  M    D    D(RI)  D(RI/M)  EX   REI  RES
  A        y    n    y      y    n    y      y        y    y    y
  P        n    n    n      n    y    n      n        y    n    n
  M        y*   n    y*     y    n    y*     y        y*   y    y
  P(RI)    y    n    y      y    n    y      y        y    y    y
  D(RI)    y    n    y      y    n    y      y        -    y    y
  D(RI/M)  y*   n    y*     y    n    y*     y        -    y    y

Modes:
  A       - Atomic
  P       - PerCPU
  M       - Managed
  P(RI)   - PerCPU with ReInit
  D(RI)   - Dead with ReInit
  D(RI/M) - Dead with ReInit and Managed

PerCPU Ref Ops:
  KLL - Kill
  REI - Reinit
  RES - Resurrect

(RI) is for modes which are initialized with PERCPU_REF_ALLOW_REINIT. The
transitions shown above are the allowed transitions and can be indirect
transitions. For example, a managed ref switches to P(RI) mode when
percpu_ref_switch_to_unmanaged() is called for it. P(RI) mode can then be
directly switched to A mode using percpu_ref_switch_to_atomic().
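For illustration only (not part of this patch): a rough sketch of the
runtime-switching call pattern, assuming the interfaces added by this series.
The struct my_obj and my_obj_unmanaged_phase() are hypothetical, and the
caller is assumed to hold an elevated reference across the switch, as
required by percpu_ref_switch_to_unmanaged():

  #include <linux/percpu-refcount.h>

  struct my_obj {                       /* hypothetical example object */
          struct percpu_ref ref;        /* initialized with PERCPU_REF_REL_MANAGED */
  };

  static void my_obj_unmanaged_phase(struct my_obj *obj)
  {
          percpu_ref_get(&obj->ref);                      /* keep the ref elevated */

          percpu_ref_switch_to_unmanaged(&obj->ref);      /* M -> P(RI) */
          percpu_ref_switch_to_atomic_sync(&obj->ref);    /* P(RI) -> A */

          /* ... operate while the ref counts atomically ... */

          percpu_ref_switch_to_managed(&obj->ref);        /* A -> M */
          percpu_ref_put(&obj->ref);
  }

This mirrors the switching sequence exercised by the torture test in a later
patch of this series.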
Signed-off-by: Neeraj Upadhyay --- include/linux/percpu-refcount.h | 3 +- lib/percpu-refcount.c | 248 +++++++++++--------------------- 2 files changed, 88 insertions(+), 163 deletions(-) diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h index e6aea81b3d01..fe967db431a6 100644 --- a/include/linux/percpu-refcount.h +++ b/include/linux/percpu-refcount.h @@ -110,7 +110,7 @@ struct percpu_ref_data { struct rcu_head rcu; struct percpu_ref *ref; unsigned int aux_flags; - struct llist_node node; + struct list_head node; }; @@ -139,6 +139,7 @@ void percpu_ref_switch_to_atomic(struct percpu_ref *ref, void percpu_ref_switch_to_atomic_sync(struct percpu_ref *ref); void percpu_ref_switch_to_percpu(struct percpu_ref *ref); int percpu_ref_switch_to_managed(struct percpu_ref *ref); +void percpu_ref_switch_to_unmanaged(struct percpu_ref *ref); void percpu_ref_kill_and_confirm(struct percpu_ref *ref, percpu_ref_func_t *confirm_kill); void percpu_ref_resurrect(struct percpu_ref *ref); diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c index 7d0c85c7ce57..b79e36905aa4 100644 --- a/lib/percpu-refcount.c +++ b/lib/percpu-refcount.c @@ -5,7 +5,7 @@ #include #include #include -#include +#include #include #include #include @@ -43,7 +43,12 @@ static DEFINE_SPINLOCK(percpu_ref_switch_lock); static DECLARE_WAIT_QUEUE_HEAD(percpu_ref_switch_waitq); -static LLIST_HEAD(percpu_ref_manage_head); +static struct list_head percpu_ref_manage_head = LIST_HEAD_INIT(percpu_ref_manage_head); +/* Spinlock protects node additions/deletions */ +static DEFINE_SPINLOCK(percpu_ref_manage_lock); +/* Mutex synchronizes node deletions with the node being scanned */ +static DEFINE_MUTEX(percpu_ref_active_switch_mutex); +static struct list_head *next_percpu_ref_node = &percpu_ref_manage_head; static unsigned long __percpu *percpu_count_ptr(struct percpu_ref *ref) { @@ -112,7 +117,7 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release, data->confirm_switch = NULL; data->ref = ref; ref->data = data; - init_llist_node(&data->node); + INIT_LIST_HEAD(&data->node); if (flags & PERCPU_REF_REL_MANAGED) percpu_ref_switch_to_managed(ref); @@ -150,9 +155,9 @@ static int __percpu_ref_switch_to_managed(struct percpu_ref *ref) data->force_atomic = false; if (!__ref_is_percpu(ref, &percpu_count)) __percpu_ref_switch_mode(ref, NULL); - /* Ensure ordering of percpu mode switch and node scan */ - smp_mb(); - llist_add(&data->node, &percpu_ref_manage_head); + spin_lock(&percpu_ref_manage_lock); + list_add(&data->node, &percpu_ref_manage_head); + spin_unlock(&percpu_ref_manage_lock); return 0; @@ -162,7 +167,7 @@ static int __percpu_ref_switch_to_managed(struct percpu_ref *ref) } /** - * percpu_ref_switch_to_managed - Switch an unmanaged ref to percpu mode. + * percpu_ref_switch_to_managed - Switch an unmanaged ref to percpu managed mode. * * @ref: percpu_ref to switch to managed mode * @@ -179,6 +184,47 @@ int percpu_ref_switch_to_managed(struct percpu_ref *ref) } EXPORT_SYMBOL_GPL(percpu_ref_switch_to_managed); +/** + * percpu_ref_switch_to_unmanaged - Switch a managed ref to percpu mode. + * + * @ref: percpu_ref to switch back to unmanaged percpu mode + * + * Must only be called with elevated refcount. 
+ */ +void percpu_ref_switch_to_unmanaged(struct percpu_ref *ref) +{ + bool mutex_taken = false; + struct list_head *node; + unsigned long flags; + + might_sleep(); + + WARN_ONCE(!percpu_ref_is_managed(ref), "Percpu ref is not managed"); + + node = &ref->data->node; + spin_lock(&percpu_ref_manage_lock); + if (list_empty(node)) { + spin_unlock(&percpu_ref_manage_lock); + mutex_taken = true; + mutex_lock(&percpu_ref_active_switch_mutex); + spin_lock(&percpu_ref_manage_lock); + } + + if (next_percpu_ref_node == node) + next_percpu_ref_node = next_percpu_ref_node->next; + list_del_init(node); + spin_unlock(&percpu_ref_manage_lock); + if (mutex_taken) + mutex_unlock(&percpu_ref_active_switch_mutex); + + /* Drop the pseudo-init reference */ + percpu_ref_put(ref); + spin_lock_irqsave(&percpu_ref_switch_lock, flags); + ref->data->aux_flags &= ~__PERCPU_REL_MANAGED; + spin_unlock_irqrestore(&percpu_ref_switch_lock, flags); +} +EXPORT_SYMBOL_GPL(percpu_ref_switch_to_unmanaged); + static void __percpu_ref_exit(struct percpu_ref *ref) { unsigned long __percpu *percpu_count = percpu_count_ptr(ref); @@ -599,164 +645,35 @@ module_param(max_scan_count, int, 0444); static void percpu_ref_release_work_fn(struct work_struct *work); -/* - * Sentinel llist nodes for lockless list traveral and deletions by - * the pcpu ref release worker, while nodes are added from - * percpu_ref_init() and percpu_ref_switch_to_managed(). - * - * Sentinel node marks the head of list traversal for the current - * iteration of kworker execution. - */ -struct percpu_ref_sen_node { - bool inuse; - struct llist_node node; -}; - -/* - * We need two sentinel nodes for lockless list manipulations from release - * worker - first node will be used in current reclaim iteration. The second - * node will be used in next iteration. Next iteration marks the first node - * as free, for use in subsequent iteration. 
- */ -#define PERCPU_REF_SEN_NODES_COUNT 2 - -/* Track last processed percpu ref node */ -static struct llist_node *last_percpu_ref_node; - -static struct percpu_ref_sen_node - percpu_ref_sen_nodes[PERCPU_REF_SEN_NODES_COUNT]; - static DECLARE_DELAYED_WORK(percpu_ref_release_work, percpu_ref_release_work_fn); -static bool percpu_ref_is_sen_node(struct llist_node *node) -{ - return &percpu_ref_sen_nodes[0].node <= node && - node <= &percpu_ref_sen_nodes[PERCPU_REF_SEN_NODES_COUNT - 1].node; -} - -static struct llist_node *percpu_ref_get_sen_node(void) -{ - int i; - struct percpu_ref_sen_node *sn; - - for (i = 0; i < PERCPU_REF_SEN_NODES_COUNT; i++) { - sn = &percpu_ref_sen_nodes[i]; - if (!sn->inuse) { - sn->inuse = true; - return &sn->node; - } - } - - return NULL; -} - -static void percpu_ref_put_sen_node(struct llist_node *node) -{ - struct percpu_ref_sen_node *sn = container_of(node, struct percpu_ref_sen_node, node); - - sn->inuse = false; - init_llist_node(node); -} - -static void percpu_ref_put_all_sen_nodes_except(struct llist_node *node) -{ - int i; - - for (i = 0; i < PERCPU_REF_SEN_NODES_COUNT; i++) { - if (&percpu_ref_sen_nodes[i].node == node) - continue; - percpu_ref_sen_nodes[i].inuse = false; - init_llist_node(&percpu_ref_sen_nodes[i].node); - } -} - static struct workqueue_struct *percpu_ref_release_wq; static void percpu_ref_release_work_fn(struct work_struct *work) { - struct llist_node *pos, *first, *head, *prev, *next; - struct llist_node *sen_node; + struct list_head *node; struct percpu_ref *ref; int count = 0; bool held; - struct llist_node *last_node = READ_ONCE(last_percpu_ref_node); - first = READ_ONCE(percpu_ref_manage_head.first); - if (!first) + mutex_lock(&percpu_ref_active_switch_mutex); + spin_lock(&percpu_ref_manage_lock); + if (list_empty(&percpu_ref_manage_head)) { + next_percpu_ref_node = &percpu_ref_manage_head; + spin_unlock(&percpu_ref_manage_lock); + mutex_unlock(&percpu_ref_active_switch_mutex); goto queue_release_work; - - /* - * Enqueue a dummy node to mark the start of scan. This dummy - * node is used as start point of scan and ensures that - * there is no additional synchronization required with new - * label node additions to the llist. Any new labels will - * be processed in next run of the kworker. - * - * SCAN START PTR - * | - * v - * +----------+ +------+ +------+ +------+ - * | | | | | | | | - * | head ------> dummy|--->|label |--->| label|--->NULL - * | | | node | | | | | - * +----------+ +------+ +------+ +------+ - * - * - * New label addition: - * - * SCAN START PTR - * | - * v - * +----------+ +------+ +------+ +------+ +------+ - * | | | | | | | | | | - * | head |--> label|--> dummy|--->|label |--->| label|--->NULL - * | | | | | node | | | | | - * +----------+ +------+ +------+ +------+ +------+ - * - */ - if (last_node == NULL || last_node->next == NULL) { -retry_sentinel_get: - sen_node = percpu_ref_get_sen_node(); - /* - * All sentinel nodes are in use? This should not happen, as we - * require only one sentinel for the start of list traversal and - * other sentinel node is freed during the traversal. - */ - if (WARN_ONCE(!sen_node, "All sentinel nodes are in use")) { - /* Use first node as the sentinel node */ - head = first->next; - if (!head) { - struct llist_node *ign_node = NULL; - /* - * We exhausted sentinel nodes. However, there aren't - * enough nodes in the llist. So, we have leaked - * sentinel nodes. Reclaim sentinels and retry. 
- */ - if (percpu_ref_is_sen_node(first)) - ign_node = first; - percpu_ref_put_all_sen_nodes_except(ign_node); - goto retry_sentinel_get; - } - prev = first; - } else { - llist_add(sen_node, &percpu_ref_manage_head); - prev = sen_node; - head = prev->next; - } - } else { - prev = last_node; - head = prev->next; } + if (next_percpu_ref_node == &percpu_ref_manage_head) + node = percpu_ref_manage_head.next; + else + node = next_percpu_ref_node; + next_percpu_ref_node = node->next; + list_del_init(node); + spin_unlock(&percpu_ref_manage_lock); - llist_for_each_safe(pos, next, head) { - /* Free sentinel node which is present in the list */ - if (percpu_ref_is_sen_node(pos)) { - prev->next = pos->next; - percpu_ref_put_sen_node(pos); - continue; - } - - ref = container_of(pos, struct percpu_ref_data, node)->ref; + while (!list_is_head(node, &percpu_ref_manage_head)) { + ref = container_of(node, struct percpu_ref_data, node)->ref; __percpu_ref_switch_to_atomic_sync_checked(ref, false); /* * Drop the ref while in RCU read critical section to @@ -765,24 +682,31 @@ static void percpu_ref_release_work_fn(struct work_struct *work) rcu_read_lock(); percpu_ref_put(ref); held = percpu_ref_tryget(ref); - if (!held) { - prev->next = pos->next; - init_llist_node(pos); + if (held) { + spin_lock(&percpu_ref_manage_lock); + list_add(node, &percpu_ref_manage_head); + spin_unlock(&percpu_ref_manage_lock); + __percpu_ref_switch_to_percpu_checked(ref, false); + } else { ref->percpu_count_ptr |= __PERCPU_REF_DEAD; } rcu_read_unlock(); - if (!held) - continue; - __percpu_ref_switch_to_percpu_checked(ref, false); + mutex_unlock(&percpu_ref_active_switch_mutex); count++; - if (count == READ_ONCE(max_scan_count)) { - WRITE_ONCE(last_percpu_ref_node, pos); + if (count == READ_ONCE(max_scan_count)) goto queue_release_work; + mutex_lock(&percpu_ref_active_switch_mutex); + spin_lock(&percpu_ref_manage_lock); + node = next_percpu_ref_node; + if (!list_is_head(next_percpu_ref_node, &percpu_ref_manage_head)) { + next_percpu_ref_node = next_percpu_ref_node->next; + list_del_init(node); } - prev = pos; + spin_unlock(&percpu_ref_manage_lock); } - WRITE_ONCE(last_percpu_ref_node, NULL); + mutex_unlock(&percpu_ref_active_switch_mutex); + queue_release_work: queue_delayed_work(percpu_ref_release_wq, &percpu_ref_release_work, scan_interval); From patchwork Mon Sep 16 05:08:09 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neeraj Upadhyay X-Patchwork-Id: 13805011 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8B617C3ABB2 for ; Mon, 16 Sep 2024 05:10:03 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 100926B0099; Mon, 16 Sep 2024 01:10:03 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0B0C06B009A; Mon, 16 Sep 2024 01:10:03 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E6C826B009B; Mon, 16 Sep 2024 01:10:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id C7DBF6B0099 for ; Mon, 16 Sep 2024 01:10:02 -0400 (EDT) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 41AB61A004B for ; Mon, 16 
Sep 2024 05:10:02 +0000 (UTC)
From: Neeraj Upadhyay
Subject: [RFC 4/6] percpu-refcount-torture: Extend test with runtime mode switches
Date: Mon, 16 Sep 2024 10:38:09 +0530
Message-ID: <20240916050811.473556-5-Neeraj.Upadhyay@amd.com>
In-Reply-To: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
References: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
uL8k1HHf37u1MGfFDy+p7Ia6xwv1jnfGY5CP5sJrmw46sMVlHlEg2dyMGgc5+ZbBDYRcKilCqC8uMNj60GKiDwmfb2AnKmFxmEet11RKtVrOoJqijJ/sCHEqXVSZYaA4cv9jFO/DGWfQf53vRu241ZY/eNCRk5Jg4CdIEtDvKaNCOZpis97nWNzb4ahTOZ+7mN9cM5YyabuSP0iT07CJQbbloM5CGk9FoRkXSameag3OqrEXQjsWexAmGz1Hemsk71br7TQ4pF6kMwXoPsu27Th3K8X7DM7Tp5+JHhO84s6XWYlZj738gg8EXqgy1yGaI7e//mBGJU9mZ8yJ/MIkugdm2gEfejUloLDxZvcjpOSj+X6st92ntrjy4yV+Yf6B2UapQBwcTZfiUjOItV3ylVfIE2Q4dVdtLlzyUUiC7/JdkNAHpbwunhi1HqrIh9JKQhriFN4pk+g4tjriwH/Vmld1iGNKDF9ndRFV2 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Extend the test to exercise runtime switching from managed mode to other reinitable active modes. Signed-off-by: Neeraj Upadhyay --- lib/percpu-refcount-torture.c | 41 +++++++++++++++++++++++++++++++++-- lib/percpu-refcount.c | 12 +++++++++- 2 files changed, 50 insertions(+), 3 deletions(-) diff --git a/lib/percpu-refcount-torture.c b/lib/percpu-refcount-torture.c index 686f5a228b40..cb2700b16517 100644 --- a/lib/percpu-refcount-torture.c +++ b/lib/percpu-refcount-torture.c @@ -3,6 +3,7 @@ #include #include #include +#include #include #include @@ -59,6 +60,7 @@ static struct task_struct **busted_late_release_tasks; static struct percpu_ref *refs; static long *num_per_ref_users; +static struct mutex *ref_switch_mutexes; static atomic_t running; static atomic_t *ref_running; @@ -97,19 +99,36 @@ static int percpu_ref_manager_thread(void *data) static int percpu_ref_test_thread(void *data) { struct percpu_ref *ref = (struct percpu_ref *)data; + DEFINE_TORTURE_RANDOM(rand); + int ref_idx = ref - refs; + int do_switch; int i = 0; percpu_ref_get(ref); do { percpu_ref_get(ref); + /* Perform checks once per 256 iterations */ + do_switch = (torture_random(&rand) & 0xff); udelay(delay_us); + if (do_switch) { + mutex_lock(&ref_switch_mutexes[ref_idx]); + percpu_ref_switch_to_unmanaged(ref); + udelay(delay_us); + percpu_ref_switch_to_atomic_sync(ref); + if (do_switch & 1) + percpu_ref_switch_to_percpu(ref); + udelay(delay_us); + percpu_ref_switch_to_managed(ref); + mutex_unlock(&ref_switch_mutexes[ref_idx]); + udelay(delay_us); + } percpu_ref_put(ref); stutter_wait("percpu_ref_test_thread"); i++; } while (i < niterations); - atomic_dec(&ref_running[ref - refs]); + atomic_dec(&ref_running[ref_idx]); /* Order ref release with ref_running[ref_idx] == 0 */ smp_mb(); percpu_ref_put(ref); @@ -213,6 +232,13 @@ static void percpu_ref_test_cleanup(void) kfree(num_per_ref_users); num_per_ref_users = NULL; + if (ref_switch_mutexes) { + for (i = 0; i < nrefs; i++) + mutex_destroy(&ref_switch_mutexes[i]); + kfree(ref_switch_mutexes); + ref_switch_mutexes = NULL; + } + if (refs) { for (i = 0; i < nrefs; i++) percpu_ref_exit(&refs[i]); @@ -251,7 +277,8 @@ static int __init percpu_ref_torture_init(void) goto init_err; } for (i = 0; i < nrefs; i++) { - flags = torture_random(trsp) & 1 ? PERCPU_REF_INIT_ATOMIC : PERCPU_REF_REL_MANAGED; + flags = (torture_random(trsp) & 1) ? 
PERCPU_REF_INIT_ATOMIC : + PERCPU_REF_REL_MANAGED; err = percpu_ref_init(&refs[i], percpu_ref_test_release, flags, GFP_KERNEL); if (err) @@ -269,6 +296,16 @@ static int __init percpu_ref_torture_init(void) for (i = 0; i < nrefs; i++) num_per_ref_users[i] = 0; + ref_switch_mutexes = kcalloc(nrefs, sizeof(ref_switch_mutexes[0]), GFP_KERNEL); + if (!ref_switch_mutexes) { + TOROUT_ERRSTRING("out of memory"); + err = -ENOMEM; + goto init_err; + } + + for (i = 0; i < nrefs; i++) + mutex_init(&ref_switch_mutexes[i]); + ref_user_tasks = kcalloc(nusers, sizeof(ref_user_tasks[0]), GFP_KERNEL); if (!ref_user_tasks) { TOROUT_ERRSTRING("out of memory"); diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c index b79e36905aa4..4e0a453bd51f 100644 --- a/lib/percpu-refcount.c +++ b/lib/percpu-refcount.c @@ -723,6 +723,7 @@ EXPORT_SYMBOL_GPL(percpu_ref_test_is_percpu); void percpu_ref_test_flush_release_work(void) { int max_flush = READ_ONCE(max_scan_count); + struct list_head *next; int max_count = 1000; /* Complete any executing release work */ @@ -738,8 +739,17 @@ void percpu_ref_test_flush_release_work(void) /* max scan count update visible to work */ smp_mb(); flush_delayed_work(&percpu_ref_release_work); - while (READ_ONCE(last_percpu_ref_node) != NULL && max_count--) + + while (true) { + if (!max_count--) + break; + spin_lock(&percpu_ref_manage_lock); + next = next_percpu_ref_node; + spin_unlock(&percpu_ref_manage_lock); + if (list_is_head(next, &percpu_ref_manage_head)) + break; flush_delayed_work(&percpu_ref_release_work); + } /* max scan count update visible to work */ smp_mb(); WRITE_ONCE(max_scan_count, max_flush); From patchwork Mon Sep 16 05:08:10 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neeraj Upadhyay X-Patchwork-Id: 13805012 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A3EBFC3ABCB for ; Mon, 16 Sep 2024 05:10:21 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3E6716B009B; Mon, 16 Sep 2024 01:10:21 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 396EA6B009C; Mon, 16 Sep 2024 01:10:21 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 210596B009D; Mon, 16 Sep 2024 01:10:21 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id F1B126B009B for ; Mon, 16 Sep 2024 01:10:20 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id A84281A004B for ; Mon, 16 Sep 2024 05:10:20 +0000 (UTC) X-FDA: 82569425400.15.A2DB3B5 Received: from NAM02-DM3-obe.outbound.protection.outlook.com (mail-dm3nam02on2074.outbound.protection.outlook.com [40.107.95.74]) by imf17.hostedemail.com (Postfix) with ESMTP id BEFED4000C for ; Mon, 16 Sep 2024 05:10:17 +0000 (UTC) Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=amd.com header.s=selector1 header.b=KSONu474; arc=pass ("microsoft.com:s=arcselector10001:i=1"); spf=pass (imf17.hostedemail.com: domain of Neeraj.Upadhyay@amd.com designates 40.107.95.74 as permitted sender) smtp.mailfrom=Neeraj.Upadhyay@amd.com; dmarc=pass (policy=quarantine) header.from=amd.com ARC-Message-Signature: i=2; 
From: Neeraj Upadhyay
Subject: [RFC 5/6] apparmor: Switch labels to percpu refcount in atomic mode
Date: Mon, 16 Sep 2024 10:38:10 +0530
Message-ID: <20240916050811.473556-6-Neeraj.Upadhyay@amd.com>
In-Reply-To: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
References: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>

In preparation for using percpu refcount for labels, replace the label kref
with a percpu ref. The percpu ref is initialized in atomic mode, as using
percpu mode requires tracking ref kill points.

As the atomic counter now sits in a different cacheline, rearrange some of
the fields in struct aa_label - flags, proxy - to optimize some of the fast
paths for unconfined labels.

In addition to the new requirement to clean up the percpu ref using
percpu_ref_exit() in the label destruction path, other potential impact from
this patch could be:

- Increased memory requirement (for the per-CPU counters) for each label.
- Displacement of aa_label struct members to different cachelines, as a
  percpu ref takes the space of two pointers.
- Moving of the atomic counter outside of the cacheline of the aa_label
  struct.
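For illustration only (not part of this patch): a minimal sketch of the
kref-to-percpu_ref lifecycle this conversion follows, using a hypothetical
my_label type. The release callback and the explicit exit call mirror what
aa_label_percpu_ref() and aa_label_destroy() do in the diff below:

  #include <linux/percpu-refcount.h>
  #include <linux/slab.h>

  struct my_label {                     /* hypothetical stand-in for aa_label */
          struct percpu_ref count;
  };

  static void my_label_release(struct percpu_ref *ref)
  {
          struct my_label *l = container_of(ref, struct my_label, count);

          /* hypothetical: assumes @l was kmalloc()ed; AppArmor defers the free via RCU */
          kfree(l);
  }

  static int my_label_init(struct my_label *l, gfp_t gfp)
  {
          /* Atomic mode: counts like a kref until explicitly switched */
          return percpu_ref_init(&l->count, my_label_release,
                                 PERCPU_REF_INIT_ATOMIC, gfp);
  }

  static void my_label_destroy(struct my_label *l)
  {
          /* Unlike a kref, a percpu ref must be explicitly torn down */
          percpu_ref_exit(&l->count);
  }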
Signed-off-by: Neeraj Upadhyay --- security/apparmor/include/label.h | 16 ++++++++-------- security/apparmor/include/policy.h | 8 ++++---- security/apparmor/label.c | 11 ++++++++--- 3 files changed, 20 insertions(+), 15 deletions(-) diff --git a/security/apparmor/include/label.h b/security/apparmor/include/label.h index 2a72e6b17d68..4b29a4679c74 100644 --- a/security/apparmor/include/label.h +++ b/security/apparmor/include/label.h @@ -121,12 +121,12 @@ struct label_it { * @ent: set of profiles for label, actual size determined by @size */ struct aa_label { - struct kref count; + struct percpu_ref count; + long flags; + struct aa_proxy *proxy; struct rb_node node; struct rcu_head rcu; - struct aa_proxy *proxy; __counted char *hname; - long flags; u32 secid; int size; struct aa_profile *vec[]; @@ -276,7 +276,7 @@ void __aa_labelset_update_subtree(struct aa_ns *ns); void aa_label_destroy(struct aa_label *label); void aa_label_free(struct aa_label *label); -void aa_label_kref(struct kref *kref); +void aa_label_percpu_ref(struct percpu_ref *ref); bool aa_label_init(struct aa_label *label, int size, gfp_t gfp); struct aa_label *aa_label_alloc(int size, struct aa_proxy *proxy, gfp_t gfp); @@ -373,7 +373,7 @@ int aa_label_match(struct aa_profile *profile, struct aa_ruleset *rules, */ static inline struct aa_label *__aa_get_label(struct aa_label *l) { - if (l && kref_get_unless_zero(&l->count)) + if (l && percpu_ref_tryget(&l->count)) return l; return NULL; @@ -382,7 +382,7 @@ static inline struct aa_label *__aa_get_label(struct aa_label *l) static inline struct aa_label *aa_get_label(struct aa_label *l) { if (l) - kref_get(&(l->count)); + percpu_ref_get(&(l->count)); return l; } @@ -402,7 +402,7 @@ static inline struct aa_label *aa_get_label_rcu(struct aa_label __rcu **l) rcu_read_lock(); do { c = rcu_dereference(*l); - } while (c && !kref_get_unless_zero(&c->count)); + } while (c && !percpu_ref_tryget(&c->count)); rcu_read_unlock(); return c; @@ -442,7 +442,7 @@ static inline struct aa_label *aa_get_newest_label(struct aa_label *l) static inline void aa_put_label(struct aa_label *l) { if (l) - kref_put(&l->count, aa_label_kref); + percpu_ref_put(&l->count); } diff --git a/security/apparmor/include/policy.h b/security/apparmor/include/policy.h index 75088cc310b6..5849b6b94cea 100644 --- a/security/apparmor/include/policy.h +++ b/security/apparmor/include/policy.h @@ -329,7 +329,7 @@ static inline aa_state_t ANY_RULE_MEDIATES(struct list_head *head, static inline struct aa_profile *aa_get_profile(struct aa_profile *p) { if (p) - kref_get(&(p->label.count)); + percpu_ref_get(&(p->label.count)); return p; } @@ -343,7 +343,7 @@ static inline struct aa_profile *aa_get_profile(struct aa_profile *p) */ static inline struct aa_profile *aa_get_profile_not0(struct aa_profile *p) { - if (p && kref_get_unless_zero(&p->label.count)) + if (p && percpu_ref_tryget(&p->label.count)) return p; return NULL; @@ -363,7 +363,7 @@ static inline struct aa_profile *aa_get_profile_rcu(struct aa_profile __rcu **p) rcu_read_lock(); do { c = rcu_dereference(*p); - } while (c && !kref_get_unless_zero(&c->label.count)); + } while (c && !percpu_ref_tryget(&c->label.count)); rcu_read_unlock(); return c; @@ -376,7 +376,7 @@ static inline struct aa_profile *aa_get_profile_rcu(struct aa_profile __rcu **p) static inline void aa_put_profile(struct aa_profile *p) { if (p) - kref_put(&p->label.count, aa_label_kref); + percpu_ref_put(&p->label.count); } static inline int AUDIT_MODE(struct aa_profile *profile) diff --git 
a/security/apparmor/label.c b/security/apparmor/label.c index c71e4615dd46..aa9e6eac3ecc 100644 --- a/security/apparmor/label.c +++ b/security/apparmor/label.c @@ -336,6 +336,7 @@ void aa_label_destroy(struct aa_label *label) rcu_assign_pointer(label->proxy->label, NULL); aa_put_proxy(label->proxy); } + percpu_ref_exit(&label->count); aa_free_secid(label->secid); label->proxy = (struct aa_proxy *) PROXY_POISON + 1; @@ -369,9 +370,9 @@ static void label_free_rcu(struct rcu_head *head) label_free_switch(label); } -void aa_label_kref(struct kref *kref) +void aa_label_percpu_ref(struct percpu_ref *ref) { - struct aa_label *label = container_of(kref, struct aa_label, count); + struct aa_label *label = container_of(ref, struct aa_label, count); struct aa_ns *ns = labels_ns(label); if (!ns) { @@ -408,7 +409,11 @@ bool aa_label_init(struct aa_label *label, int size, gfp_t gfp) label->size = size; /* doesn't include null */ label->vec[size] = NULL; /* null terminate */ - kref_init(&label->count); + if (percpu_ref_init(&label->count, aa_label_percpu_ref, PERCPU_REF_INIT_ATOMIC, gfp)) { + aa_free_secid(label->secid); + return false; + } + RB_CLEAR_NODE(&label->node); return true; From patchwork Mon Sep 16 05:08:11 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Neeraj Upadhyay X-Patchwork-Id: 13805013 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2DF4C3ABB2 for ; Mon, 16 Sep 2024 05:10:42 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4835C6B009D; Mon, 16 Sep 2024 01:10:42 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 432B26B009E; Mon, 16 Sep 2024 01:10:42 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2D4096B009F; Mon, 16 Sep 2024 01:10:42 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 0EE786B009D for ; Mon, 16 Sep 2024 01:10:42 -0400 (EDT) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 8751A16010C for ; Mon, 16 Sep 2024 05:10:41 +0000 (UTC) X-FDA: 82569426282.07.B3D0316 Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2048.outbound.protection.outlook.com [40.107.236.48]) by imf09.hostedemail.com (Postfix) with ESMTP id A5E12140002 for ; Mon, 16 Sep 2024 05:10:37 +0000 (UTC) Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=amd.com header.s=selector1 header.b=muAX3U3D; arc=pass ("microsoft.com:s=arcselector10001:i=1"); spf=pass (imf09.hostedemail.com: domain of Neeraj.Upadhyay@amd.com designates 40.107.236.48 as permitted sender) smtp.mailfrom=Neeraj.Upadhyay@amd.com; dmarc=pass (policy=quarantine) header.from=amd.com ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1726463328; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=QB7BKcbtTTM97yXiOXbhtWdEpIfxhfdw595BbvBII0g=; b=fLHpG/gf2f8N1oVkZz1Dy56Mm0v18zmLQ4dpZG2KStqwUHYjpIwcsAifWFCF6+Rx1sP0Jq 
vGQxlO6gv1FuiS4J1M9UHHevSZ0pEWo5V+OBwN0Lk5I4adbR4k9Z6SDtO2yz+q58NCU5Da BwIQYA+wJ2uor8xIwZN7T0MGh6Wotf0= ARC-Seal: i=2; s=arc-20220608; d=hostedemail.com; t=1726463328; a=rsa-sha256; cv=pass; b=1OHflryTXed9khA+26YqHZSbSzvK7pb9e9l3kQOGfwTND2BAUbIIEnJ0zjhLDt9p1trbPc 3ELpL+0q3Qa6I8oyLV2l26kKgRBSWwUIY3A2zE+z/JSf8XcJ2nsDdlRRfzntr5r7odNKwh Pj1S2DQ+xEhwtUdrpeKAasdMq4T0fOI= ARC-Authentication-Results: i=2; imf09.hostedemail.com; dkim=pass header.d=amd.com header.s=selector1 header.b=muAX3U3D; arc=pass ("microsoft.com:s=arcselector10001:i=1"); spf=pass (imf09.hostedemail.com: domain of Neeraj.Upadhyay@amd.com designates 40.107.236.48 as permitted sender) smtp.mailfrom=Neeraj.Upadhyay@amd.com; dmarc=pass (policy=quarantine) header.from=amd.com ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=okArEj/DZhKLzPQxzvOAE0CS2lg3ObR8D3J/oEf2l8DLd02+elRZoBJCXwIC2P5xcmBxvmJZQg3e05ihF2XQQW3Nq/x1bDsJRXpZHgGCKj6RNxY0l6okDih5+j/MckqA5dRLHdIJ4cLjiT2kVlJmi+APvTChDvnnQFtW40S5slXOOiJyx46K96gaTBnrFDJbbW3zkHgfYbhJAexcZn3QvmECZ/PqfwftjwU6K07L1P0qtlX0NoQ1Hx1rgu/CPl66zP0viggFncsGKyX4xbIO/3hGOdKrPO56PInG2E1Jhi+4iCCwZFr9Cx1kdII9J6W/pNNyzilLeqrUX5vVpJg1Wg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=QB7BKcbtTTM97yXiOXbhtWdEpIfxhfdw595BbvBII0g=; b=NoCB76d9wY9NbQn9Y8TrmMCix9rhK9TFslxpTbo51UAfTW4Y9cKZ2iHIg73gXed5BAH5VREmj34IxSGATUGnzntBugL58xH49lniIDqxQW6KOqgx+pk44+3TPrNvjMzxJpsvQ1UiYiY46YrPvk9Mvp7BylFbzf5kEdacM3z6vmhANUSDfv7bYwQBTr6V4jeAxpEKY/ed0iQ6Uukd9lUO+0z1XzHrZ6gztG/i1wOveW0kRgWRIokRe83WUyRqf69Yv1EF+1sdKgpVYX1kWOb1kRGvliEId5UKC8DnLap26WwVN8XlgeDJAyFVMaSWK6R7VnsbF16YgbbIyhUjQuJ9YA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 165.204.84.17) smtp.rcpttodomain=vger.kernel.org smtp.mailfrom=amd.com; dmarc=pass (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com; dkim=none (message not signed); arc=none (0) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=QB7BKcbtTTM97yXiOXbhtWdEpIfxhfdw595BbvBII0g=; b=muAX3U3D1KeYTyVg2hYngVRXNdn4J+POh++7h0IBZro70XEHsCApqaL6bWUVH18eTBKbuHtdj2/TkFhsvBR/vymgsMRlD3R6LgqMiDxYG68I17W14MG1R1PnXhMrdfRmWtsENDuDbjMKbytXxK+3oB26PVi8R+QvTkN/S1QSAYw= Received: from SJ0PR13CA0042.namprd13.prod.outlook.com (2603:10b6:a03:2c2::17) by CH3PR12MB8753.namprd12.prod.outlook.com (2603:10b6:610:178::18) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7962.24; Mon, 16 Sep 2024 05:10:31 +0000 Received: from SJ5PEPF000001CD.namprd05.prod.outlook.com (2603:10b6:a03:2c2:cafe::a2) by SJ0PR13CA0042.outlook.office365.com (2603:10b6:a03:2c2::17) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7962.18 via Frontend Transport; Mon, 16 Sep 2024 05:10:30 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17) smtp.mailfrom=amd.com; dkim=none (message not signed) header.d=none;dmarc=pass action=none header.from=amd.com; Received-SPF: Pass (protection.outlook.com: domain of amd.com designates 165.204.84.17 as permitted sender) receiver=protection.outlook.com; client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C Received: from SATLEXMB04.amd.com (165.204.84.17) by 
From: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
Subject: [RFC 6/6] apparmor: Switch labels to percpu ref managed mode
Date: Mon, 16 Sep 2024 10:38:11 +0530
Message-ID: <20240916050811.473556-7-Neeraj.Upadhyay@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
References: <20240916050811.473556-1-Neeraj.Upadhyay@amd.com>
MIME-Version: 1.0

Nginx performance testing with AppArmor enabled (Nginx running in the
unconfined profile), on kernel versions 6.1 and 6.5, shows a significant
drop in throughput scalability when Nginx workers are scaled to use a
higher number of CPUs across various L3 cache domains. Below is one
sample of the throughput scalability loss, measured on an AMD Zen4
system with 96 cores and 2 SMT threads per core:

  Config     Cache Domains   apparmor=off      apparmor=on
                             scaling eff (%)   scaling eff (%)
  8C16T            1             100%              100%
  16C32T           2              95%               94%
  24C48T           3              94%               93%
  48C96T           6              92%               88%
  96C192T         12              85%               68%

There is a significant drop in scaling efficiency at 96 cores / 192 SMT
threads. The perf tool shows most of the contention coming from the
following places:

  6.56%  nginx  [kernel.vmlinux]  [k] apparmor_current_getsecid_subj
  6.22%  nginx  [kernel.vmlinux]  [k] apparmor_file_open

The majority of these CPU cycles are spent on memory contention caused
by the atomic_fetch_add and atomic_fetch_sub operations issued by
kref_get() and kref_put() on AppArmor labels. Part of the contention
was fixed by commit 2516fde1fa00 ("apparmor: Optimize retrieving
current task secid"). With that commit included, the scaling efficiency
improved as follows:

  Config     Cache Domains   apparmor=on       apparmor=on (patched)
                             scaling eff (%)   scaling eff (%)
  8C16T            1             100%              100%
  16C32T           2              97%               93%
  24C48T           3              94%               92%
  48C96T           6              88%               88%
  96C192T         12              65%               79%

However, the scaling efficiency impact is still significant even with
that commit included.
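
The source of the contention is easy to see in miniature: kref_get()
and kref_put() are atomic read-modify-write operations on one shared
cacheline, while a percpu_ref in per-CPU mode only touches the local
CPU's counter on the get/put fast path. The sketch below is for
illustration only (plain upstream kref/percpu_ref API, not AppArmor
code):

#include <linux/kref.h>
#include <linux/percpu-refcount.h>

/* Every get is an atomic RMW on one shared cacheline. */
static inline void contended_get(struct kref *count)
{
	kref_get(count);	/* bounces the cacheline across CPUs */
}

/*
 * In per-CPU (or the proposed managed) mode, the get only bumps a
 * per-CPU counter under RCU, so hot paths such as file open and
 * secid lookup no longer serialize on a single atomic.
 */
static inline void scalable_get(struct percpu_ref *count)
{
	percpu_ref_get(count);	/* this_cpu_inc() while in per-CPU mode */
}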
Also, the performance impact is even higher for >192 CPUs. In addition,
the memory contention impact increases further when label update
operations are frequent and labels are marked stale more often.

Use the new percpu managed mode for tracking the release of all
AppArmor labels. Using a percpu refcount for AppArmor label refcounting
improves throughput scalability for Nginx:

  Config     Cache Domains   apparmor=on (percpuref)
                             scaling eff (%)
  8C16T            1             100%
  16C32T           2              96%
  24C48T           3              94%
  48C96T           6              93%
  96C192T         12              90%

Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
---
The apparmor_file_open() refcount contention has recently been resolved
by commit f4fee216df7d ("apparmor: try to avoid refing the label in
apparmor_file_open"). I have posted this series to get feedback on the
approach to improving refcount scalability within the apparmor
subsystem.

 security/apparmor/label.c     | 1 +
 security/apparmor/policy_ns.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/security/apparmor/label.c b/security/apparmor/label.c
index aa9e6eac3ecc..016a45a180b1 100644
--- a/security/apparmor/label.c
+++ b/security/apparmor/label.c
@@ -710,6 +710,7 @@ static struct aa_label *__label_insert(struct aa_labelset *ls,
 	rb_link_node(&label->node, parent, new);
 	rb_insert_color(&label->node, &ls->root);
 	label->flags |= FLAG_IN_TREE;
+	percpu_ref_switch_to_managed(&label->count);
 
 	return aa_get_label(label);
 }
diff --git a/security/apparmor/policy_ns.c b/security/apparmor/policy_ns.c
index 1f02cfe1d974..18eb58b68a60 100644
--- a/security/apparmor/policy_ns.c
+++ b/security/apparmor/policy_ns.c
@@ -124,6 +124,7 @@ static struct aa_ns *alloc_ns(const char *prefix, const char *name)
 		goto fail_unconfined;
 	/* ns and ns->unconfined share ns->unconfined refcount */
 	ns->unconfined->ns = ns;
+	percpu_ref_switch_to_managed(&ns->unconfined->label.count);
 
 	atomic_set(&ns->uniq_null, 0);
 
@@ -377,6 +378,7 @@ int __init aa_alloc_root_ns(void)
 	}
 	kernel_t = &kernel_p->label;
 	root_ns->unconfined->ns = aa_get_ns(root_ns);
+	percpu_ref_switch_to_managed(&root_ns->unconfined->label.count);
 
 	return 0;
 }
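
For reviewers unfamiliar with the earlier patches, the conversion
pattern these hunks rely on looks roughly like the sketch below. Note
that managed mode and percpu_ref_switch_to_managed() are introduced
earlier in this RFC series and are not part of the upstream percpu_ref
API; the object and function names here are made up for illustration
and are not AppArmor code.

#include <linux/percpu-refcount.h>
#include <linux/slab.h>

struct obj {
	struct percpu_ref count;
	/* ... payload ... */
};

/* Release callback, invoked once the last reference is dropped. */
static void obj_release(struct percpu_ref *ref)
{
	struct obj *o = container_of(ref, struct obj, count);

	/*
	 * Real code typically defers freeing via RCU or a workqueue,
	 * as AppArmor does through label_free_rcu().
	 */
	percpu_ref_exit(&o->count);
	kfree(o);
}

static struct obj *obj_create(gfp_t gfp)
{
	struct obj *o = kzalloc(sizeof(*o), gfp);

	if (!o)
		return NULL;
	/* Start out in atomic mode, as aa_label_init() now does. */
	if (percpu_ref_init(&o->count, obj_release,
			    PERCPU_REF_INIT_ATOMIC, gfp)) {
		kfree(o);
		return NULL;
	}
	return o;
}

static void obj_publish(struct obj *o)
{
	/*
	 * Once the object is globally visible (cf. __label_insert() and
	 * the unconfined labels above), switch to managed mode so that
	 * percpu_ref_get()/percpu_ref_put() become per-CPU operations.
	 */
	percpu_ref_switch_to_managed(&o->count);
}

The three call sites patched above are the points where a label becomes
reachable by other CPUs, which is why the switch to managed mode happens
there rather than at label initialization time.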