From patchwork Fri Aug 4 07:57:36 2023
X-Patchwork-Submitter: Zhongkun He
X-Patchwork-Id: 13341437
From: Zhongkun He <hezhongkun.hzk@bytedance.com>
To: minchan@kernel.org, senozhatsky@chromium.org, mhocko@suse.com
Cc: david@redhat.com, yosryahmed@google.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Zhongkun He <hezhongkun.hzk@bytedance.com>
Subject: [RFC PATCH RESEND v2 1/2] memcg: Add support for zram object charge
Date: Fri, 4 Aug 2023 15:57:36 +0800
Message-Id: <20230804075736.207995-1-hezhongkun.hzk@bytedance.com>

Compressed RAM (zram) is currently charged to the kernel, not to any
memory cgroup, so it can escape cgroup memory containment: when a task's
memory is limited by a memcg and memory runs short, its pages are swapped
out to the zram swap device and the memory hard limit is effectively
bypassed. It therefore makes sense to charge the compressed RAM to the
page's memory cgroup.

zram can be used in two ways, directly and indirectly, and this patchset
charges the memory in both cases. Direct zram usage by a process within a
cgroup will fail to charge if there is no memory left in the cgroup.
Indirect zram usage by a process within a cgroup, via swap in PF_MEMALLOC
context, will always charge successfully.

Signed-off-by: Zhongkun He <hezhongkun.hzk@bytedance.com>
---
 include/linux/memcontrol.h | 12 ++++++++++++
 mm/memcontrol.c            | 24 ++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 5818af8eca5a..24bac877bc83 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1819,6 +1819,9 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
 	rcu_read_unlock();
 }
 
+int obj_cgroup_charge_zram(struct obj_cgroup *objcg, gfp_t gfp,
+			   size_t size);
+void obj_cgroup_uncharge_zram(struct obj_cgroup *objcg, size_t size);
 #else
 static inline bool mem_cgroup_kmem_disabled(void)
 {
@@ -1880,6 +1883,15 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
 {
 }
 
+int obj_cgroup_charge_zram(struct obj_cgroup *objcg, gfp_t gfp,
+			   size_t size)
+{
+	return 0;
+}
+
+void obj_cgroup_uncharge_zram(struct obj_cgroup *objcg, size_t size)
+{
+}
 #endif /* CONFIG_MEMCG_KMEM */
 
 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e8ca4bdcb03c..118544acf895 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3059,6 +3059,7 @@ struct obj_cgroup *get_obj_cgroup_from_page(struct page *page)
 	}
 	return objcg;
 }
+EXPORT_SYMBOL(get_obj_cgroup_from_page);
 
 static void memcg_account_kmem(struct mem_cgroup *memcg, int nr_pages)
 {
@@ -3409,6 +3410,29 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 	refill_obj_stock(objcg, size, true);
 }
 
+int obj_cgroup_charge_zram(struct obj_cgroup *objcg, gfp_t gfp,
+			   size_t size)
+{
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		return 0;
+
+	/*
+	 * Indirect zram usage in PF_MEMALLOC, charging must succeed.
+	 * Direct zram usage, charging may fail.
+	 */
+	return obj_cgroup_charge(objcg, gfp, size);
+}
+EXPORT_SYMBOL(obj_cgroup_charge_zram);
+
+void obj_cgroup_uncharge_zram(struct obj_cgroup *objcg, size_t size)
+{
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		return;
+
+	obj_cgroup_uncharge(objcg, size);
+}
+EXPORT_SYMBOL(obj_cgroup_uncharge_zram);
+
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
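For illustration, here is a minimal, hedged sketch of how a backend is
expected to pair the two helpers added above. The function names
store_compressed()/free_compressed() are made up for this example; the
real consumer is zram, wired up in patch 2/2.

/*
 * Illustrative sketch only (not part of the patch): how the new
 * charge/uncharge helpers bracket a compressed object's lifetime.
 */
#include <linux/mm.h>
#include <linux/memcontrol.h>

static int store_compressed(struct page *page, unsigned int comp_len,
			    struct obj_cgroup **objcgp)
{
	/* Takes a reference on the page's obj_cgroup; may return NULL. */
	struct obj_cgroup *objcg = get_obj_cgroup_from_page(page);

	/* Direct zram usage may fail here; swap-out in PF_MEMALLOC will not. */
	if (objcg && obj_cgroup_charge_zram(objcg, GFP_KERNEL, comp_len)) {
		obj_cgroup_put(objcg);
		return -ENOMEM;
	}

	*objcgp = objcg;	/* remembered next to the stored object */
	return 0;
}

static void free_compressed(struct obj_cgroup *objcg, unsigned int comp_len)
{
	/* Uncharge the same size that was charged, then drop the reference. */
	if (objcg) {
		obj_cgroup_uncharge_zram(objcg, comp_len);
		obj_cgroup_put(objcg);
	}
}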
From patchwork Fri Aug 4 07:57:51 2023
X-Patchwork-Submitter: Zhongkun He
X-Patchwork-Id: 13341438
From: Zhongkun He <hezhongkun.hzk@bytedance.com>
To: minchan@kernel.org, senozhatsky@chromium.org, mhocko@suse.com
Cc: david@redhat.com, yosryahmed@google.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Zhongkun He <hezhongkun.hzk@bytedance.com>
Subject: [RFC PATCH RESEND v2 2/2] zram: charge the compressed RAM to the page's memcgroup
Date: Fri, 4 Aug 2023 15:57:51 +0800
Message-Id: <20230804075751.208045-1-hezhongkun.hzk@bytedance.com>

The compressed RAM is currently charged to the kernel, not to any memory
cgroup. This patch charges the compressed pages to the page's memory
cgroup regardless of direct or indirect zram usage. Direct zram usage by
a process within a cgroup will fail to charge if there is no memory left
in the cgroup. Indirect zram usage by a process within a cgroup, via swap
in PF_MEMALLOC context, will always charge successfully.

This allows some limit overrun, but not enough to matter in practice: a
compressed object is only charged when a page is about to be freed, and
the compressed object is no larger than the page being freed. The amount
of excess therefore depends only on the compression ratio; it will not
exceed 400KB, and usage will eventually drop back below the hard limit,
so the overrun is bounded.

Signed-off-by: Zhongkun He <hezhongkun.hzk@bytedance.com>
---
 drivers/block/zram/zram_drv.c | 45 +++++++++++++++++++++++++++++++++++
 drivers/block/zram/zram_drv.h |  1 +
 2 files changed, 46 insertions(+)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 5676e6dd5b16..3aa352940b9b 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -33,6 +33,7 @@
 #include <linux/debugfs.h>
 #include <linux/cpuhotplug.h>
 #include <linux/part_stat.h>
+#include <linux/memcontrol.h>
 
 #include "zram_drv.h"
 
@@ -135,6 +136,18 @@ static void zram_set_obj_size(struct zram *zram,
 	zram->table[index].flags = (flags << ZRAM_FLAG_SHIFT) | size;
 }
 
+static inline void zram_set_obj_cgroup(struct zram *zram, u32 index,
+				       struct obj_cgroup *objcg)
+{
+	zram->table[index].objcg = objcg;
+}
+
+static inline struct obj_cgroup *zram_get_obj_cgroup(struct zram *zram,
+						     u32 index)
+{
+	return zram->table[index].objcg;
+}
+
 static inline bool zram_allocated(struct zram *zram, u32 index)
 {
 	return zram_get_obj_size(zram, index) ||
@@ -1256,6 +1269,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
 static void zram_free_page(struct zram *zram, size_t index)
 {
 	unsigned long handle;
+	struct obj_cgroup *objcg;
 
 #ifdef CONFIG_ZRAM_MEMORY_TRACKING
 	zram->table[index].ac_time = 0;
@@ -1289,6 +1303,13 @@ static void zram_free_page(struct zram *zram, size_t index)
 		goto out;
 	}
 
+	objcg = zram_get_obj_cgroup(zram, index);
+	if (objcg) {
+		obj_cgroup_uncharge_zram(objcg, zram_get_obj_size(zram, index));
+		obj_cgroup_put(objcg);
+		zram_set_obj_cgroup(zram, index, NULL);
+	}
+
 	handle = zram_get_handle(zram, index);
 	if (!handle)
 		return;
@@ -1419,6 +1440,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	struct zcomp_strm *zstrm;
 	unsigned long element = 0;
 	enum zram_pageflags flags = 0;
+	struct obj_cgroup *objcg;
 
 	mem = kmap_atomic(page);
 	if (page_same_filled(mem, &element)) {
@@ -1494,6 +1516,14 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 		return -ENOMEM;
 	}
 
+	objcg = get_obj_cgroup_from_page(page);
+	if (objcg && obj_cgroup_charge_zram(objcg, GFP_KERNEL, comp_len)) {
+		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zs_free(zram->mem_pool, handle);
+		obj_cgroup_put(objcg);
+		return -ENOMEM;
+	}
+
 	dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
 
 	src = zstrm->buffer;
@@ -1526,6 +1556,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	} else {
 		zram_set_handle(zram, index, handle);
 		zram_set_obj_size(zram, index, comp_len);
+		zram_set_obj_cgroup(zram, index, objcg);
 	}
 	zram_slot_unlock(zram, index);
 
@@ -1575,6 +1606,7 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
 			   u32 threshold, u32 prio, u32 prio_max)
 {
 	struct zcomp_strm *zstrm = NULL;
+	struct obj_cgroup *objcg;
 	unsigned long handle_old;
 	unsigned long handle_new;
 	unsigned int comp_len_old;
@@ -1669,6 +1701,16 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
 	if (threshold && comp_len_new >= threshold)
 		return 0;
 
+	objcg = zram_get_obj_cgroup(zram, index);
+	if (objcg) {
+		obj_cgroup_get(objcg);
+		if (obj_cgroup_charge_zram(objcg, GFP_KERNEL, comp_len_new)) {
+			zcomp_stream_put(zram->comps[prio]);
+			obj_cgroup_put(objcg);
+			return -ENOMEM;
+		}
+	}
+
 	/*
 	 * No direct reclaim (slow path) for handle allocation and no
 	 * re-compression attempt (unlike in zram_write_bvec()) since
@@ -1683,6 +1725,8 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
 			       __GFP_MOVABLE);
 	if (IS_ERR_VALUE(handle_new)) {
 		zcomp_stream_put(zram->comps[prio]);
+		obj_cgroup_uncharge_zram(objcg, comp_len_new);
+		obj_cgroup_put(objcg);
 		return PTR_ERR((void *)handle_new);
 	}
 
@@ -1696,6 +1740,7 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
 	zram_set_handle(zram, index, handle_new);
 	zram_set_obj_size(zram, index, comp_len_new);
 	zram_set_priority(zram, index, prio);
+	zram_set_obj_cgroup(zram, index, objcg);
 
 	atomic64_add(comp_len_new, &zram->stats.compr_data_size);
 	atomic64_inc(&zram->stats.pages_stored);
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index ca7a15bd4845..959d721d5474 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -72,6 +72,7 @@ struct zram_table_entry {
 #ifdef CONFIG_ZRAM_MEMORY_TRACKING
 	ktime_t ac_time;
 #endif
+	struct obj_cgroup *objcg;
 };
 
 struct zram_stats {
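For reference, after this change each zram slot carries the owning
obj_cgroup next to its handle and flags. A condensed view of the per-slot
metadata (paraphrased here from the upstream struct definition, with
comments added for this summary) shows the cost: one extra pointer, i.e.
8 bytes per slot on 64-bit, for every page of configured disksize.

struct zram_table_entry {
	union {
		unsigned long handle;	/* zsmalloc handle of the object */
		unsigned long element;	/* value for same-filled pages   */
	};
	unsigned long flags;		/* ZRAM_* flags and object size  */
#ifdef CONFIG_ZRAM_MEMORY_TRACKING
	ktime_t ac_time;
#endif
	struct obj_cgroup *objcg;	/* memcg to uncharge on slot free */
};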