From patchwork Tue Jun 23 17:40:18 2020
From: Roman Gushchin <guro@fb.com>
To: Andrew Morton, Christoph Lameter
Cc: Johannes Weiner, Michal Hocko, Shakeel Butt, Vlastimil Babka, Roman Gushchin
Subject: [PATCH v7 00/19] The new cgroup slab memory controller
Date: Tue, 23 Jun 2020 10:40:18 -0700
Message-ID: <20200623174037.3951353-1-guro@fb.com>

This is v7 of the slab cgroup controller rework.

The patchset moves the accounting from the page level to the object level. This allows slab pages to be shared between memory cgroups and leads to a significant win in slab utilization (up to 45%) and a corresponding drop in the total kernel memory footprint. The reduced number of unmovable slab pages should also have a positive effect on memory fragmentation.
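To make the idea concrete, here is a toy userspace model of per-object accounting (a minimal sketch written for this summary; struct slab_page, obj_alloc() and friends are invented names, not the kernel implementation, and obj_to_index() only loosely mirrors the SLUB helper of the same name added by this series). Each slab page carries a side vector with one cgroup pointer per object slot, so objects charged to different cgroups can live on the same page:

#include <stdio.h>
#include <stdlib.h>

#define OBJ_SIZE      64
#define OBJS_PER_PAGE 64

struct cgroup {
	const char *name;
	long charged_bytes;	/* slab bytes charged to this cgroup */
};

struct slab_page {
	char objects[OBJS_PER_PAGE][OBJ_SIZE];
	/* side vector: one cgroup pointer per object slot */
	struct cgroup *obj_cgroups[OBJS_PER_PAGE];
};

/* simplified analogue of obj_to_index(): offset in page / object size */
static int obj_to_index(const struct slab_page *page, const void *obj)
{
	return (int)(((const char *)obj - &page->objects[0][0]) / OBJ_SIZE);
}

static void *obj_alloc(struct slab_page *page, int idx, struct cgroup *cg)
{
	page->obj_cgroups[idx] = cg;
	cg->charged_bytes += OBJ_SIZE;	/* charge the object, not the page */
	return page->objects[idx];
}

static void obj_free(struct slab_page *page, void *obj)
{
	int idx = obj_to_index(page, obj);

	page->obj_cgroups[idx]->charged_bytes -= OBJ_SIZE;
	page->obj_cgroups[idx] = NULL;
}

int main(void)
{
	struct slab_page *page = calloc(1, sizeof(*page));
	struct cgroup a = { "A", 0 }, b = { "B", 0 };

	if (!page)
		return 1;

	/* objects of two different cgroups share one slab page */
	void *o1 = obj_alloc(page, 0, &a);
	void *o2 = obj_alloc(page, 1, &b);

	printf("A: %ld bytes, B: %ld bytes\n", a.charged_bytes, b.charged_bytes);

	obj_free(page, o1);
	obj_free(page, o2);
	free(page);
	return 0;
}

In the actual patchset the vector lives in page->obj_cgroups and the charging goes through the obj_cgroup API, but the indexing scheme is the same in spirit.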
The patchset also makes the slab accounting code simpler: there is no longer any need for the complicated dynamic creation and destruction of per-cgroup slab caches; all memory cgroups use a global set of shared slab caches. The lifetime of slab caches is no longer tied to the lifetime of memory cgroups.

The more precise accounting does require more CPU, but in practice the difference seems to be negligible. We've been using the new slab controller in Facebook production for several months with different workloads and haven't seen any noticeable regressions. What we have seen are memory savings on the order of 1 GB per host (varying heavily with the actual workload, size of RAM, number of CPUs, memory pressure, etc.).

The third version of the patchset added yet another step towards simplifying the code: sharing slab caches between accounted and non-accounted allocations. It comes with significant upsides (most noticeably, a complete elimination of dynamic slab cache creation), but not without some regression risk, so this change sits on top of the patchset and is not fully merged into it. In the unlikely event of a noticeable performance regression it can be reverted separately.

The slab memory accounting works in exactly the same way for SLAB and SLUB. With both allocators the new controller shows significant memory savings; with SLUB the difference is bigger. On my 16-core desktop machine running Fedora 32, the size of the slab memory measured after system startup was lower by 58% with SLUB and by 38% with SLAB.

As an estimate of the potential CPU overhead, below are the results of the slab_bulk_test01 test, kindly provided by Jesper D. Brouer, who also helped with the evaluation of the results. The test can be found here: https://github.com/netoptimizer/prototype-kernel/
The smallest number in each row should be used for comparison.

SLUB-patched - bulk-API
 - SLUB-patched : bulk_quick_reuse objects=1 : 187 -  90 - 224  cycles(tsc)
 - SLUB-patched : bulk_quick_reuse objects=2 : 110 -  53 - 133  cycles(tsc)
 - SLUB-patched : bulk_quick_reuse objects=3 :  88 -  95 -  42  cycles(tsc)
 - SLUB-patched : bulk_quick_reuse objects=4 :  91 -  85 -  36  cycles(tsc)
 - SLUB-patched : bulk_quick_reuse objects=8 :  32 -  66 -  32  cycles(tsc)

SLUB-original - bulk-API
 - SLUB-original: bulk_quick_reuse objects=1 :  87 -  87 - 142  cycles(tsc)
 - SLUB-original: bulk_quick_reuse objects=2 :  52 -  53 -  53  cycles(tsc)
 - SLUB-original: bulk_quick_reuse objects=3 :  42 -  42 -  91  cycles(tsc)
 - SLUB-original: bulk_quick_reuse objects=4 :  91 -  37 -  37  cycles(tsc)
 - SLUB-original: bulk_quick_reuse objects=8 :  31 -  79 -  76  cycles(tsc)

SLAB-patched - bulk-API
 - SLAB-patched : bulk_quick_reuse objects=1 :  67 -  67 - 140  cycles(tsc)
 - SLAB-patched : bulk_quick_reuse objects=2 :  55 -  46 -  46  cycles(tsc)
 - SLAB-patched : bulk_quick_reuse objects=3 :  93 -  94 -  39  cycles(tsc)
 - SLAB-patched : bulk_quick_reuse objects=4 :  35 -  88 -  85  cycles(tsc)
 - SLAB-patched : bulk_quick_reuse objects=8 :  30 -  30 -  30  cycles(tsc)

SLAB-original - bulk-API
 - SLAB-original: bulk_quick_reuse objects=1 : 143 - 136 -  67  cycles(tsc)
 - SLAB-original: bulk_quick_reuse objects=2 :  45 -  46 -  46  cycles(tsc)
 - SLAB-original: bulk_quick_reuse objects=3 :  38 -  39 -  39  cycles(tsc)
 - SLAB-original: bulk_quick_reuse objects=4 :  35 -  87 -  87  cycles(tsc)
 - SLAB-original: bulk_quick_reuse objects=8 :  29 -  66 -  30  cycles(tsc)
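For readers unfamiliar with the benchmark, the numbers above time the kernel's bulk slab API. A trivial module along the following lines shows the kind of operation being measured (a hedged sketch using the upstream kmem_cache_alloc_bulk()/kmem_cache_free_bulk() API, not the actual slab_bulk_test01, which runs warm-up passes, many repetitions, and several access patterns; the "bulk_bench" cache name and the 256-byte object size are arbitrary):

#include <linux/module.h>
#include <linux/slab.h>
#include <asm/timex.h>		/* get_cycles() */

static int __init bulk_bench_init(void)
{
	struct kmem_cache *cache;
	void *objs[8];
	cycles_t start, stop;
	int n;

	cache = kmem_cache_create("bulk_bench", 256, 0, 0, NULL);
	if (!cache)
		return -ENOMEM;

	/* one bulk alloc + bulk free round trip, timed in TSC cycles */
	start = get_cycles();
	n = kmem_cache_alloc_bulk(cache, GFP_KERNEL, ARRAY_SIZE(objs), objs);
	if (n)
		kmem_cache_free_bulk(cache, n, objs);
	stop = get_cycles();

	pr_info("bulk alloc+free of %d objects: %llu cycles\n",
		n, (unsigned long long)(stop - start));

	kmem_cache_destroy(cache);
	return 0;
}

static void __exit bulk_bench_exit(void)
{
}

module_init(bulk_bench_init);
module_exit(bulk_bench_exit);
MODULE_LICENSE("GPL");

The real test repeats such loops many times per pattern, which is why the smallest number in each row is the meaningful one for comparison.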
v7:
  1) rebased on top of Vlastimil's slub improvements, by Andrew
  2) page->obj_cgroups is allocated from the same node, by Shakeel
  3) perf optimization in get_obj_cgroup_from_current(), by Shakeel
  4) added synchronization around allocation of page->obj_cgroups, by Shakeel
  5) fixed kmemleak false positives, by Qian Cai
  6) fixed a compiler warning on clang, by Nathan
  7) other minor fixes

v6:
  1) rebased on top of the mm tree
  2) removed a redundant check from cache_from_obj(), suggested by Vlastimil

v5:
  1) fixed a build error, spotted by Vlastimil
  2) added a comment about memcg->nr_charged_bytes, asked by Johannes
  3) added missed acks and reviews

v4:
  1) rebased on top of the mm tree, some fixes here and there
  2) merged obj_to_index() with slab_index(), suggested by Vlastimil
  3) changed objects_per_slab() to a better objects_per_slab_page(), suggested by Vlastimil
  4) other minor fixes and changes

v3:
  1) added a patch that switches to a global single set of kmem_caches
  2) kmem API clean up dropped, because it has been already merged
  3) byte-sized slab vmstat API over page-sized global counters and byte-sized memcg/lruvec counters
  4) obj_cgroup refcounting simplifications and other minor fixes
  5) other minor changes

v2:
  1) implemented re-layering and renaming suggested by Johannes, added his patch to the set. Thanks!
  2) fixed the issue discovered by Bharata B Rao. Thanks!
  3) added kmem API clean up part
  4) added slab/memcg follow-up clean up part
  5) fixed a couple of issues discovered by internal testing on the FB fleet
  6) added kselftests
  7) included metadata in the charge calculation
  8) refreshed commit logs, regrouped patches, rebased onto the mm tree, etc.

v1:
  1) fixed a bug in zoneinfo_show_print()
  2) added some comments to the subpage charging API, a minor fix
  3) separated memory.kmem.slabinfo deprecation into a separate patch, provided a drgn-based replacement
  4) rebased on top of the current mm tree

RFC:
  https://lwn.net/Articles/798605/

Johannes Weiner (1):
  mm: memcontrol: decouple reference counting from page accounting

Roman Gushchin (18):
  mm: memcg: factor out memcg- and lruvec-level changes out of __mod_lruvec_state()
  mm: memcg: prepare for byte-sized vmstat items
  mm: memcg: convert vmstat slab counters to bytes
  mm: slub: implement SLUB version of obj_to_index()
  mm: memcg/slab: obj_cgroup API
  mm: memcg/slab: allocate obj_cgroups for non-root slab pages
  mm: memcg/slab: save obj_cgroup for non-root slab objects
  mm: memcg/slab: charge individual slab objects instead of pages
  mm: memcg/slab: deprecate memory.kmem.slabinfo
  mm: memcg/slab: move memcg_kmem_bypass() to memcontrol.h
  mm: memcg/slab: use a single set of kmem_caches for all accounted allocations
  mm: memcg/slab: simplify memcg cache creation
  mm: memcg/slab: remove memcg_kmem_get_cache()
  mm: memcg/slab: deprecate slab_root_caches
  mm: memcg/slab: remove redundant check in memcg_accumulate_slabinfo()
  mm: memcg/slab: use a single set of kmem_caches for all allocations
  kselftests: cgroup: add kernel memory accounting tests
  tools/cgroup: add memcg_slabinfo.py tool

 drivers/base/node.c                        |   6 +-
 fs/proc/meminfo.c                          |   4 +-
 include/linux/memcontrol.h                 |  85 ++-
 include/linux/mm_types.h                   |   5 +-
 include/linux/mmzone.h                     |  24 +-
 include/linux/slab.h                       |   5 -
 include/linux/slab_def.h                   |   9 +-
 include/linux/slub_def.h                   |  31 +-
 include/linux/vmstat.h                     |  14 +-
 kernel/power/snapshot.c                    |   2 +-
 mm/memcontrol.c                            | 610 +++++++++++--------
 mm/oom_kill.c                              |   2 +-
 mm/page_alloc.c                            |   8 +-
 mm/slab.c                                  |  70 +--
 mm/slab.h                                  | 370 +++++-------
 mm/slab_common.c                           | 643 +--------------------
 mm/slob.c                                  |  12 +-
 mm/slub.c                                  | 229 +-------
 mm/vmscan.c                                |   3 +-
 mm/vmstat.c                                |  30 +-
 mm/workingset.c                            |   6 +-
 tools/cgroup/memcg_slabinfo.py             | 226 ++++++++
 tools/testing/selftests/cgroup/.gitignore  |   1 +
 tools/testing/selftests/cgroup/Makefile    |   2 +
 tools/testing/selftests/cgroup/test_kmem.c | 382 ++++++++++++
 25 files changed, 1380 insertions(+), 1399 deletions(-)
 create mode 100644 tools/cgroup/memcg_slabinfo.py
 create mode 100644 tools/testing/selftests/cgroup/test_kmem.c