From patchwork Tue Jan 24 05:42:43 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alistair Popple
X-Patchwork-Id: 13113495
From: Alistair Popple
To: linux-mm@kvack.org, cgroups@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, jgg@nvidia.com, jhubbard@nvidia.com, tjmercier@google.com, hannes@cmpxchg.org, surenb@google.com, mkoutny@suse.com, daniel@ffwll.ch, Alistair Popple, Tejun Heo, Zefan Li, Andrew Morton
Subject: [RFC PATCH 14/19] mm: Introduce a cgroup for pinned memory
Date: Tue, 24 Jan 2023 16:42:43 +1100
Message-Id: <183372b80aac73e640d9f5ac3c742d505fc6c1f2.1674538665.git-series.apopple@nvidia.com>
X-Mailer: git-send-email 2.39.0
If too much memory in a system is pinned or locked it can lead to problems
such as performance degradation or, in the worst case, out-of-memory errors,
because such memory cannot be moved or paged out. To prevent users without
CAP_IPC_LOCK from causing these issues, the amount of memory that can be
pinned is typically limited by RLIMIT_MEMLOCK. However, this is inflexible:
limits can't be shared between tasks, and enforcement is inconsistent between
in-kernel users of pinned memory such as mlock() and device drivers which may
also pin pages with pin_user_pages().

To allow a single limit to be set, introduce a cgroup controller which can be
used to limit the number of pages being pinned by all tasks in the cgroup.
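For context, a userspace consumer of such a controller would drive it through
ordinary cgroupfs reads and writes of the pins.max and pins.current files. A
minimal sketch in C of those file operations, with the cgroup mount path left
as a caller-supplied assumption (only the file names come from this patch):

```c
#include <stdio.h>
#include <string.h>

/*
 * Write a value (e.g. "max" or a page count) to a cgroup control file
 * such as <cgroup-path>/pins.max. Returns 0 on success, -1 on error.
 */
static int write_cg_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

/*
 * Read back the current single-line value of a cgroup control file
 * such as <cgroup-path>/pins.current, stripping the trailing newline.
 */
static int read_cg_file(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	buf[strcspn(buf, "\n")] = '\0';
	return fclose(f);
}
```

A management daemon would call write_cg_file() with either a decimal page
count or the literal string "max", mirroring the parsing done by
pins_max_write() below.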
Signed-off-by: Alistair Popple
Cc: Tejun Heo
Cc: Zefan Li
Cc: Johannes Weiner
Cc: Andrew Morton
Cc: linux-kernel@vger.kernel.org
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
---
 MAINTAINERS                   |   7 +-
 include/linux/cgroup.h        |  20 +++-
 include/linux/cgroup_subsys.h |   4 +-
 mm/Kconfig                    |  11 +-
 mm/Makefile                   |   1 +-
 mm/pins_cgroup.c              | 273 +++++++++++++++++++++++++++++++++++-
 6 files changed, 316 insertions(+)
 create mode 100644 mm/pins_cgroup.c

diff --git a/MAINTAINERS b/MAINTAINERS
index f781f93..f8526e2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5381,6 +5381,13 @@
 F:	tools/testing/selftests/cgroup/memcg_protection.m
 F:	tools/testing/selftests/cgroup/test_kmem.c
 F:	tools/testing/selftests/cgroup/test_memcontrol.c
+CONTROL GROUP - PINNED AND LOCKED MEMORY
+M:	Alistair Popple
+L:	cgroups@vger.kernel.org
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	mm/pins_cgroup.c
+
 CORETEMP HARDWARE MONITORING DRIVER
 M:	Fenghua Yu
 L:	linux-hwmon@vger.kernel.org
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 3410aec..440f299 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -857,4 +857,24 @@
 static inline void cgroup_bpf_put(struct cgroup *cgrp) {}
 #endif /* CONFIG_CGROUP_BPF */
 
+#ifdef CONFIG_CGROUP_PINS
+
+struct pins_cgroup *get_pins_cg(struct task_struct *task);
+void put_pins_cg(struct pins_cgroup *cg);
+void pins_uncharge(struct pins_cgroup *pins, int num);
+int pins_try_charge(struct pins_cgroup *pins, int num);
+
+#else /* CONFIG_CGROUP_PINS */
+
+static inline struct pins_cgroup *get_pins_cg(struct task_struct *task)
+{
+	return NULL;
+}
+
+static inline void put_pins_cg(struct pins_cgroup *cg) {}
+static inline void pins_uncharge(struct pins_cgroup *pins, int num) {}
+static inline int pins_try_charge(struct pins_cgroup *pins, int num) { return 0; }
+
+#endif /* CONFIG_CGROUP_PINS */
+
 #endif /* _LINUX_CGROUP_H */
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 4452354..c1b4aab 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -65,6 +65,10 @@ SUBSYS(rdma)
 SUBSYS(misc)
 #endif
 
+#if IS_ENABLED(CONFIG_CGROUP_PINS)
+SUBSYS(pins)
+#endif
+
 /*
  * The following subsystems are not supported on the default hierarchy.
  */
diff --git a/mm/Kconfig b/mm/Kconfig
index ff7b209..7a32b98 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1183,6 +1183,17 @@ config LRU_GEN_STATS
 	  This option has a per-memcg and per-node memory overhead.
 # }
 
+config CGROUP_PINS
+	bool "Cgroup for pinned and locked memory"
+	default y
+
+	help
+	  Having too much memory pinned or locked can lead to system
+	  instability due to increased likelihood of encountering
+	  out-of-memory conditions. Select this option to enable a cgroup
+	  which can be used to limit the overall number of pages locked or
+	  pinned by drivers.
+
 source "mm/damon/Kconfig"
 
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 8e105e5..81db189 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -138,3 +138,4 @@ obj-$(CONFIG_IO_MAPPING) += io-mapping.o
 obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o
 obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
+obj-$(CONFIG_CGROUP_PINS) += pins_cgroup.o
diff --git a/mm/pins_cgroup.c b/mm/pins_cgroup.c
new file mode 100644
index 0000000..cc310d5
--- /dev/null
+++ b/mm/pins_cgroup.c
@@ -0,0 +1,273 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Controller for cgroups limiting number of pages pinned for FOLL_LONGTERM.
+ *
+ * Copyright (C) 2022 Alistair Popple
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define PINS_MAX (-1ULL)
+#define PINS_MAX_STR "max"
+
+struct pins_cgroup {
+	struct cgroup_subsys_state	css;
+
+	atomic64_t			counter;
+	atomic64_t			limit;
+
+	struct cgroup_file		events_file;
+	atomic64_t			events_limit;
+};
+
+static struct pins_cgroup *css_pins(struct cgroup_subsys_state *css)
+{
+	return container_of(css, struct pins_cgroup, css);
+}
+
+static struct pins_cgroup *parent_pins(struct pins_cgroup *pins)
+{
+	return css_pins(pins->css.parent);
+}
+
+struct pins_cgroup *get_pins_cg(struct task_struct *task)
+{
+	return css_pins(task_get_css(task, pins_cgrp_id));
+}
+
+void put_pins_cg(struct pins_cgroup *cg)
+{
+	css_put(&cg->css);
+}
+
+static struct cgroup_subsys_state *
+pins_css_alloc(struct cgroup_subsys_state *parent)
+{
+	struct pins_cgroup *pins;
+
+	pins = kzalloc(sizeof(struct pins_cgroup), GFP_KERNEL);
+	if (!pins)
+		return ERR_PTR(-ENOMEM);
+
+	atomic64_set(&pins->counter, 0);
+	atomic64_set(&pins->limit, PINS_MAX);
+	atomic64_set(&pins->events_limit, 0);
+	return &pins->css;
+}
+
+static void pins_css_free(struct cgroup_subsys_state *css)
+{
+	kfree(css_pins(css));
+}
+
+/**
+ * pins_cancel - uncharge the local pin count
+ * @pins: the pin cgroup state
+ * @num: the number of pins to cancel
+ *
+ * This function will WARN if the pin count goes under 0, because such a case is
+ * a bug in the pins controller proper.
+ */
+void pins_cancel(struct pins_cgroup *pins, int num)
+{
+	/*
+	 * A negative count (or overflow for that matter) is invalid,
+	 * and indicates a bug in the `pins` controller proper.
+	 */
+	WARN_ON_ONCE(atomic64_add_negative(-num, &pins->counter));
+}
+
+/**
+ * pins_uncharge - hierarchically uncharge the pin count
+ * @pins: the pin cgroup state
+ * @num: the number of pins to uncharge
+ */
+void pins_uncharge(struct pins_cgroup *pins, int num)
+{
+	struct pins_cgroup *p;
+
+	for (p = pins; parent_pins(p); p = parent_pins(p))
+		pins_cancel(p, num);
+}
+
+/**
+ * pins_charge - hierarchically charge the pin count
+ * @pins: the pin cgroup state
+ * @num: the number of pins to charge
+ *
+ * This function does *not* follow the pin limit set. It cannot fail and the new
+ * pin count may exceed the limit. This is only used for reverting failed
+ * attaches, where there is no other way out than violating the limit.
+ */
+static void pins_charge(struct pins_cgroup *pins, int num)
+{
+	struct pins_cgroup *p;
+
+	for (p = pins; parent_pins(p); p = parent_pins(p))
+		atomic64_add(num, &p->counter);
+}
+
+/**
+ * pins_try_charge - hierarchically try to charge the pin count
+ * @pins: the pin cgroup state
+ * @num: the number of pins to charge
+ *
+ * This function follows the set limit. It will fail if the charge would cause
+ * the new value to exceed the hierarchical limit. Returns 0 if the charge
+ * succeeded, otherwise -EAGAIN.
+ */
+int pins_try_charge(struct pins_cgroup *pins, int num)
+{
+	struct pins_cgroup *p, *q;
+
+	for (p = pins; parent_pins(p); p = parent_pins(p)) {
+		uint64_t new = atomic64_add_return(num, &p->counter);
+		uint64_t limit = atomic64_read(&p->limit);
+
+		if (limit != PINS_MAX && new > limit)
+			goto revert;
+	}
+
+	return 0;
+
+revert:
+	for (q = pins; q != p; q = parent_pins(q))
+		pins_cancel(q, num);
+	pins_cancel(p, num);
+
+	return -EAGAIN;
+}
+
+static int pins_can_attach(struct cgroup_taskset *tset)
+{
+	struct cgroup_subsys_state *dst_css;
+	struct task_struct *task;
+
+	cgroup_taskset_for_each(task, dst_css, tset) {
+		struct pins_cgroup *pins = css_pins(dst_css);
+		struct cgroup_subsys_state *old_css;
+		struct pins_cgroup *old_pins;
+
+		old_css = task_css(task, pins_cgrp_id);
+		old_pins = css_pins(old_css);
+
+		pins_charge(pins, task->mm->locked_vm);
+		pins_uncharge(old_pins, task->mm->locked_vm);
+	}
+
+	return 0;
+}
+
+static void pins_cancel_attach(struct cgroup_taskset *tset)
+{
+	struct cgroup_subsys_state *dst_css;
+	struct task_struct *task;
+
+	cgroup_taskset_for_each(task, dst_css, tset) {
+		struct pins_cgroup *pins = css_pins(dst_css);
+		struct cgroup_subsys_state *old_css;
+		struct pins_cgroup *old_pins;
+
+		old_css = task_css(task, pins_cgrp_id);
+		old_pins = css_pins(old_css);
+
+		pins_charge(old_pins, task->mm->locked_vm);
+		pins_uncharge(pins, task->mm->locked_vm);
+	}
+}
+
+static ssize_t pins_max_write(struct kernfs_open_file *of, char *buf,
+			      size_t nbytes, loff_t off)
+{
+	struct cgroup_subsys_state *css = of_css(of);
+	struct pins_cgroup *pins = css_pins(css);
+	int64_t limit;
+	int err;
+
+	buf = strstrip(buf);
+	if (!strcmp(buf, PINS_MAX_STR)) {
+		limit = PINS_MAX;
+		goto set_limit;
+	}
+
+	err = kstrtoll(buf, 0, &limit);
+	if (err)
+		return err;
+
+	if (limit < 0 || limit >= PINS_MAX)
+		return -EINVAL;
+
+set_limit:
+	/*
+	 * Limit updates don't need to be mutex'd, since it isn't
+	 * critical that any racing fork()s follow the new limit.
+	 */
+	atomic64_set(&pins->limit, limit);
+	return nbytes;
+}
+
+static int pins_max_show(struct seq_file *sf, void *v)
+{
+	struct cgroup_subsys_state *css = seq_css(sf);
+	struct pins_cgroup *pins = css_pins(css);
+	uint64_t limit = atomic64_read(&pins->limit);
+
+	if (limit >= PINS_MAX)
+		seq_printf(sf, "%s\n", PINS_MAX_STR);
+	else
+		seq_printf(sf, "%llu\n", limit);
+
+	return 0;
+}
+
+static s64 pins_current_read(struct cgroup_subsys_state *css,
+			     struct cftype *cft)
+{
+	struct pins_cgroup *pins = css_pins(css);
+
+	return atomic64_read(&pins->counter);
+}
+
+static int pins_events_show(struct seq_file *sf, void *v)
+{
+	struct pins_cgroup *pins = css_pins(seq_css(sf));
+
+	seq_printf(sf, "max %lld\n", (s64)atomic64_read(&pins->events_limit));
+	return 0;
+}
+
+static struct cftype pins_files[] = {
+	{
+		.name = "max",
+		.write = pins_max_write,
+		.seq_show = pins_max_show,
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
+	{
+		.name = "current",
+		.read_s64 = pins_current_read,
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
+	{
+		.name = "events",
+		.seq_show = pins_events_show,
+		.file_offset = offsetof(struct pins_cgroup, events_file),
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
+	{ }	/* terminate */
+};
+
+struct cgroup_subsys pins_cgrp_subsys = {
+	.css_alloc = pins_css_alloc,
+	.css_free = pins_css_free,
+	.legacy_cftypes = pins_files,
+	.dfl_cftypes = pins_files,
+	.can_attach = pins_can_attach,
+	.cancel_attach = pins_cancel_attach,
+};
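The charge/revert scheme above (walk up the hierarchy charging each level,
and if any ancestor's limit would be exceeded, undo the levels already
charged) can be modeled in plain userspace C. This is a hedged sketch of the
pins_try_charge()/pins_uncharge() semantics using C11 atomics, not the kernel
data structures; the two-level hierarchy and struct layout are illustrative
assumptions:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdatomic.h>

#define PINS_MAX UINT64_MAX	/* mirrors the patch's "no limit" sentinel */

struct pins_group {
	struct pins_group *parent;	/* NULL for the root */
	_Atomic uint64_t counter;	/* pages currently pinned */
	_Atomic uint64_t limit;		/* PINS_MAX means unlimited */
};

/* Uncharge @num pages from @pins and every ancestor. */
static void pins_uncharge(struct pins_group *pins, uint64_t num)
{
	for (struct pins_group *p = pins; p; p = p->parent)
		atomic_fetch_sub(&p->counter, num);
}

/*
 * Try to charge @num pages at every level. If a level's limit would be
 * exceeded, revert the levels charged so far and fail, leaving all
 * counters exactly as they were.
 */
static int pins_try_charge(struct pins_group *pins, uint64_t num)
{
	struct pins_group *p, *q;

	for (p = pins; p; p = p->parent) {
		uint64_t new = atomic_fetch_add(&p->counter, num) + num;

		if (new > atomic_load(&p->limit)) {
			/* undo this level and every level already charged */
			for (q = pins; q != p; q = q->parent)
				atomic_fetch_sub(&q->counter, num);
			atomic_fetch_sub(&p->counter, num);
			return -1;
		}
	}
	return 0;
}
```

As in the patch, a failed charge is reverted with plain subtraction rather
than a compare-and-swap loop: a concurrent charge may transiently observe a
counter above the limit, which is acceptable because the limit is only
enforced at charge time.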