From patchwork Tue Oct 12 10:18:39 2021
X-Patchwork-Submitter: Vasily Averin
X-Patchwork-Id: 12551967
From: Vasily Averin
Subject: [PATCH mm v2] memcg: enable memory accounting in __alloc_pages_bulk
To: Michal Hocko, Johannes Weiner, Vladimir Davydov, Andrew Morton,
 Shakeel Butt
Cc: Roman Gushchin, Mel Gorman, Uladzislau Rezki, Vlastimil Babka,
 cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel@openvz.org
Message-ID: <2410e99a-087c-3f89-9bdf-b62a7d5df725@virtuozzo.com>
Date: Tue, 12 Oct 2021 13:18:39 +0300

Enable memory accounting for the bulk page allocator.
Fixes: 387ba26fb1cb ("mm/page_alloc: add a bulk page allocator")
Cc:
Signed-off-by: Vasily Averin
---
v2: modified according to Shakeel Butt's remarks
---
 include/linux/memcontrol.h | 11 +++++++++
 mm/memcontrol.c            | 48 +++++++++++++++++++++++++++++++++++++-
 mm/page_alloc.c            | 14 ++++++++++-
 3 files changed, 71 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3096c9a0ee01..990acd70c846 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -810,6 +810,12 @@ static inline void obj_cgroup_put(struct obj_cgroup *objcg)
 	percpu_ref_put(&objcg->refcnt);
 }
 
+static inline void obj_cgroup_put_many(struct obj_cgroup *objcg,
+				       unsigned long nr)
+{
+	percpu_ref_put_many(&objcg->refcnt, nr);
+}
+
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 	if (memcg)
@@ -1746,4 +1752,9 @@ static inline struct mem_cgroup *mem_cgroup_from_obj(void *p)
 
 #endif /* CONFIG_MEMCG_KMEM */
 
+bool memcg_bulk_pre_charge_hook(struct obj_cgroup **objcgp, gfp_t gfp,
+				unsigned int nr_pages);
+void memcg_bulk_charge_hook(struct obj_cgroup *objcgp, struct page *page);
+void memcg_bulk_post_charge_hook(struct obj_cgroup *objcg,
+				 unsigned int nr_pages);
 #endif /* _LINUX_MEMCONTROL_H */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 87e41c3cac10..16fe3384c12c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3239,7 +3239,53 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 	refill_obj_stock(objcg, size, true);
 }
 
-#endif /* CONFIG_MEMCG_KMEM */
+bool memcg_bulk_pre_charge_hook(struct obj_cgroup **objcgp, gfp_t gfp,
+				unsigned int nr_pages)
+{
+	struct obj_cgroup *objcg = NULL;
+
+	if (!memcg_kmem_enabled() || !(gfp & __GFP_ACCOUNT))
+		return true;
+
+	objcg = get_obj_cgroup_from_current();
+
+	if (objcg && obj_cgroup_charge_pages(objcg, gfp, nr_pages)) {
+		obj_cgroup_put(objcg);
+		return false;
+	}
+	obj_cgroup_get_many(objcg, nr_pages - 1);
+	*objcgp = objcg;
+	return true;
+}
+
+void
+memcg_bulk_charge_hook(struct obj_cgroup *objcg, struct page *page)
+{
+	page->memcg_data = (unsigned long)objcg | MEMCG_DATA_KMEM;
+}
+
+void memcg_bulk_post_charge_hook(struct obj_cgroup *objcg,
+				 unsigned int nr_pages)
+{
+	obj_cgroup_uncharge_pages(objcg, nr_pages);
+	obj_cgroup_put_many(objcg, nr_pages);
+}
+#else /* !CONFIG_MEMCG_KMEM */
+bool memcg_bulk_pre_charge_hook(struct obj_cgroup **objcgp, gfp_t gfp,
+				unsigned int nr_pages)
+{
+	return true;
+}
+
+void memcg_bulk_charge_hook(struct obj_cgroup *objcgp, struct page *page)
+{
+}
+
+void memcg_bulk_post_charge_hook(struct obj_cgroup *objcg,
+				 unsigned int nr_pages)
+{
+}
+#endif
 
 /*
  * Because page_memcg(head) is not set on tails, set it now.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c274cf..eb37177bf507 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5207,6 +5207,8 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	gfp_t alloc_gfp;
 	unsigned int alloc_flags = ALLOC_WMARK_LOW;
 	int nr_populated = 0, nr_account = 0;
+	unsigned int nr_pre_charge = 0;
+	struct obj_cgroup *objcg = NULL;
 
 	/*
 	 * Skip populated array elements to determine if any pages need
@@ -5275,6 +5277,10 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 	if (unlikely(!zone))
 		goto failed;
 
+	nr_pre_charge = nr_pages - nr_populated;
+	if (!memcg_bulk_pre_charge_hook(&objcg, gfp, nr_pre_charge))
+		goto failed;
+
 	/* Attempt the batch allocation */
 	local_lock_irqsave(&pagesets.lock, flags);
 	pcp = this_cpu_ptr(zone->per_cpu_pageset);
@@ -5299,6 +5305,9 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 		nr_account++;
 
 		prep_new_page(page, 0, gfp, 0);
+		if (objcg)
+			memcg_bulk_charge_hook(objcg, page);
+
 		if (page_list)
 			list_add(&page->lru, page_list);
 		else
@@ -5310,13 +5319,16 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
 
 	__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
 	zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);
+	if (objcg)
+		memcg_bulk_post_charge_hook(objcg, nr_pre_charge - nr_account);
 
 out:
 	return nr_populated;
 
 failed_irq:
 	local_unlock_irqrestore(&pagesets.lock, flags);
-
+	if (objcg)
+		memcg_bulk_post_charge_hook(objcg, nr_pre_charge);
 failed:
 	page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
 	if (page) {