From patchwork Wed Mar 19 22:21:50 2025
X-Patchwork-Submitter: JP Kobryn
X-Patchwork-Id: 14023236
From: JP Kobryn <inwardvessel@gmail.com>
To: tj@kernel.org, shakeel.butt@linux.dev, yosryahmed@google.com, mkoutny@suse.com, hannes@cmpxchg.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH 4/4 v3] cgroup: save memory by splitting cgroup_rstat_cpu into compact and full versions
Date: Wed, 19 Mar 2025 15:21:50 -0700
Message-ID: <20250319222150.71813-5-inwardvessel@gmail.com>
In-Reply-To: <20250319222150.71813-1-inwardvessel@gmail.com>
References: <20250319222150.71813-1-inwardvessel@gmail.com>

The cgroup_rstat_cpu struct contains rstat node pointers and also the base stat objects.
Since ownership of the cgroup_rstat_cpu has shifted from cgroup to cgroup_subsys_state, css's other than cgroup::self now carry along these base stat objects, which go unused. Eliminate this wasted memory by splitting cgroup_rstat_cpu into two separate structs.

The cgroup_rstat_cpu struct is reduced so that it now contains only the rstat node pointers. css's that are associated with a subsystem (memory, io) use this compact struct to participate in rstat without the memory overhead of the base stat objects. For css's represented by cgroup::self, a new cgroup_rstat_base_cpu struct is introduced. It contains the compact cgroup_rstat_cpu struct as its first field, followed by the base stat objects. Because the rstat pointers exist at the same offset (the beginning) in both structs, cgroup_subsys_state is modified to contain a union of pointers to the two structs.

Where css initialization is done, the compact struct is allocated when the css is associated with a subsystem; when it is not, the full struct is allocated. The union allows the existing rstat updated/flush routines to work with any css regardless of subsystem association. The base stat routines, however, were modified to access the full struct field of the union.

The change in memory on a per-cpu basis is shown below.
before:
  struct size
    sizeof(cgroup_rstat_cpu) =~ 144 bytes /* can vary based on config */

  per-cpu overhead
    nr_cgroups * (
      sizeof(cgroup_rstat_cpu) * (1 + nr_rstat_subsystems)
    )
    nr_cgroups * (144 * (1 + 2))
    nr_cgroups * 432
    432 bytes per cgroup per cpu

after:
  struct sizes
    sizeof(cgroup_rstat_base_cpu) =~ 144 bytes
    sizeof(cgroup_rstat_cpu) = 16 bytes

  per-cpu overhead
    nr_cgroups * (
      sizeof(cgroup_rstat_base_cpu) +
      sizeof(cgroup_rstat_cpu) * (nr_rstat_subsystems)
    )
    nr_cgroups * (144 + 16 * 2)
    nr_cgroups * 176
    176 bytes per cgroup per cpu

savings: 256 bytes per cgroup per cpu

Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: JP Kobryn <inwardvessel@gmail.com>
---
 include/linux/cgroup-defs.h |  41 +++++++++------
 kernel/cgroup/rstat.c       | 100 ++++++++++++++++++++++--------------
 2 files changed, 86 insertions(+), 55 deletions(-)

diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 0ffc8438c6d9..f9b84e7f718d 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -170,7 +170,10 @@ struct cgroup_subsys_state {
 	struct percpu_ref refcnt;
 
 	/* per-cpu recursive resource statistics */
-	struct css_rstat_cpu __percpu *rstat_cpu;
+	union {
+		struct css_rstat_cpu __percpu *rstat_cpu;
+		struct css_rstat_base_cpu __percpu *rstat_base_cpu;
+	};
 
 	/*
 	 * siblings list anchored at the parent's ->children
@@ -358,6 +361,26 @@ struct cgroup_base_stat {
  * resource statistics on top of it - bsync, bstat and last_bstat.
  */
 struct css_rstat_cpu {
+	/*
+	 * Child cgroups with stat updates on this cpu since the last read
+	 * are linked on the parent's ->updated_children through
+	 * ->updated_next.
+	 *
+	 * In addition to being more compact, singly-linked list pointing
+	 * to the cgroup makes it unnecessary for each per-cpu struct to
+	 * point back to the associated cgroup.
+	 *
+	 * Protected by per-cpu rstat_base_cpu_lock when css->ss == NULL
+	 * otherwise,
+	 * Protected by per-cpu css->ss->rstat_cpu_lock
+	 */
+	struct cgroup_subsys_state *updated_children;	/* terminated by self */
+	struct cgroup_subsys_state *updated_next;	/* NULL if not on list */
+};
+
+struct css_rstat_base_cpu {
+	struct css_rstat_cpu rstat_cpu;
+
 	/*
 	 * ->bsync protects ->bstat. These are the only fields which get
 	 * updated in the hot path.
@@ -384,22 +407,6 @@ struct css_rstat_cpu {
 	 * deltas to propagate to the per-cpu subtree_bstat.
 	 */
 	struct cgroup_base_stat last_subtree_bstat;
-
-	/*
-	 * Child cgroups with stat updates on this cpu since the last read
-	 * are linked on the parent's ->updated_children through
-	 * ->updated_next.
-	 *
-	 * In addition to being more compact, singly-linked list pointing
-	 * to the cgroup makes it unnecessary for each per-cpu struct to
-	 * point back to the associated cgroup.
-	 *
-	 * Protected by per-cpu rstat_base_cpu_lock when css->ss == NULL
-	 * otherwise,
-	 * Protected by per-cpu css->ss->rstat_cpu_lock
-	 */
-	struct cgroup_subsys_state *updated_children;	/* terminated by self */
-	struct cgroup_subsys_state *updated_next;	/* NULL if not on list */
 };
 
 struct cgroup_freezer_state {
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index ffd7ac6bcefc..250f0987407e 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -20,6 +20,12 @@ static struct css_rstat_cpu *css_rstat_cpu(
 	return per_cpu_ptr(css->rstat_cpu, cpu);
 }
 
+static struct css_rstat_base_cpu *css_rstat_base_cpu(
+		struct cgroup_subsys_state *css, int cpu)
+{
+	return per_cpu_ptr(css->rstat_base_cpu, cpu);
+}
+
 static spinlock_t *ss_rstat_lock(struct cgroup_subsys *ss)
 {
 	if (ss)
@@ -425,17 +431,35 @@ int css_rstat_init(struct cgroup_subsys_state *css)
 
 	/* the root cgrp's self css has rstat_cpu preallocated */
 	if (!css->rstat_cpu) {
-		css->rstat_cpu = alloc_percpu(struct css_rstat_cpu);
-		if (!css->rstat_cpu)
-			return -ENOMEM;
+		/* One of the union fields must be initialized.
+		 * Allocate the larger rstat struct for base stats when css is
+		 * cgroup::self.
+		 * Otherwise, allocate the compact rstat struct since the css is
+		 * associated with a subsystem.
+		 */
+		if (css_is_cgroup(css)) {
+			css->rstat_base_cpu = alloc_percpu(struct css_rstat_base_cpu);
+			if (!css->rstat_base_cpu)
+				return -ENOMEM;
+		} else {
+			css->rstat_cpu = alloc_percpu(struct css_rstat_cpu);
+			if (!css->rstat_cpu)
+				return -ENOMEM;
+		}
 	}
 
-	/* ->updated_children list is self terminated */
 	for_each_possible_cpu(cpu) {
-		struct css_rstat_cpu *rstatc = css_rstat_cpu(css, cpu);
+		struct css_rstat_cpu *rstatc;
 
+		rstatc = css_rstat_cpu(css, cpu);
 		rstatc->updated_children = css;
-		u64_stats_init(&rstatc->bsync);
+
+		if (css_is_cgroup(css)) {
+			struct css_rstat_base_cpu *rstatbc;
+
+			rstatbc = css_rstat_base_cpu(css, cpu);
+			u64_stats_init(&rstatbc->bsync);
+		}
 	}
 
 	return 0;
@@ -522,9 +546,9 @@ static void cgroup_base_stat_sub(struct cgroup_base_stat *dst_bstat,
 
 static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu)
 {
-	struct css_rstat_cpu *rstatc = css_rstat_cpu(&cgrp->self, cpu);
+	struct css_rstat_base_cpu *rstatbc = css_rstat_base_cpu(&cgrp->self, cpu);
 	struct cgroup *parent = cgroup_parent(cgrp);
-	struct css_rstat_cpu *prstatc;
+	struct css_rstat_base_cpu *prstatbc;
 	struct cgroup_base_stat delta;
 	unsigned seq;
 
@@ -534,15 +558,15 @@ static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu)
 
 	/* fetch the current per-cpu values */
 	do {
-		seq = __u64_stats_fetch_begin(&rstatc->bsync);
-		delta = rstatc->bstat;
-	} while (__u64_stats_fetch_retry(&rstatc->bsync, seq));
+		seq = __u64_stats_fetch_begin(&rstatbc->bsync);
+		delta = rstatbc->bstat;
+	} while (__u64_stats_fetch_retry(&rstatbc->bsync, seq));
 
 	/* propagate per-cpu delta to cgroup and per-cpu global statistics */
-	cgroup_base_stat_sub(&delta, &rstatc->last_bstat);
+	cgroup_base_stat_sub(&delta, &rstatbc->last_bstat);
 	cgroup_base_stat_add(&cgrp->bstat, &delta);
-	cgroup_base_stat_add(&rstatc->last_bstat, &delta);
-	cgroup_base_stat_add(&rstatc->subtree_bstat, &delta);
+	cgroup_base_stat_add(&rstatbc->last_bstat, &delta);
+	cgroup_base_stat_add(&rstatbc->subtree_bstat, &delta);
 
 	/* propagate cgroup and per-cpu global delta to parent (unless that's root) */
 	if (cgroup_parent(parent)) {
@@ -551,73 +575,73 @@ static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu)
 		cgroup_base_stat_add(&parent->bstat, &delta);
 		cgroup_base_stat_add(&cgrp->last_bstat, &delta);
 
-		delta = rstatc->subtree_bstat;
-		prstatc = css_rstat_cpu(&parent->self, cpu);
-		cgroup_base_stat_sub(&delta, &rstatc->last_subtree_bstat);
-		cgroup_base_stat_add(&prstatc->subtree_bstat, &delta);
-		cgroup_base_stat_add(&rstatc->last_subtree_bstat, &delta);
+		delta = rstatbc->subtree_bstat;
+		prstatbc = css_rstat_base_cpu(&parent->self, cpu);
+		cgroup_base_stat_sub(&delta, &rstatbc->last_subtree_bstat);
+		cgroup_base_stat_add(&prstatbc->subtree_bstat, &delta);
+		cgroup_base_stat_add(&rstatbc->last_subtree_bstat, &delta);
 	}
 }
 
-static struct css_rstat_cpu *
+static struct css_rstat_base_cpu *
 cgroup_base_stat_cputime_account_begin(struct cgroup *cgrp, unsigned long *flags)
 {
-	struct css_rstat_cpu *rstatc;
+	struct css_rstat_base_cpu *rstatbc;
 
-	rstatc = get_cpu_ptr(cgrp->self.rstat_cpu);
-	*flags = u64_stats_update_begin_irqsave(&rstatc->bsync);
-	return rstatc;
+	rstatbc = get_cpu_ptr(cgrp->self.rstat_base_cpu);
+	*flags = u64_stats_update_begin_irqsave(&rstatbc->bsync);
+	return rstatbc;
 }
 
 static void cgroup_base_stat_cputime_account_end(struct cgroup *cgrp,
-						 struct css_rstat_cpu *rstatc,
+						 struct css_rstat_base_cpu *rstatbc,
 						 unsigned long flags)
 {
-	u64_stats_update_end_irqrestore(&rstatc->bsync, flags);
+	u64_stats_update_end_irqrestore(&rstatbc->bsync, flags);
 	css_rstat_updated(&cgrp->self, smp_processor_id());
-	put_cpu_ptr(rstatc);
+	put_cpu_ptr(rstatbc);
 }
 
 void __cgroup_account_cputime(struct cgroup *cgrp, u64 delta_exec)
 {
-	struct css_rstat_cpu *rstatc;
+	struct css_rstat_base_cpu *rstatbc;
 	unsigned long flags;
 
-	rstatc = cgroup_base_stat_cputime_account_begin(cgrp, &flags);
-	rstatc->bstat.cputime.sum_exec_runtime += delta_exec;
-	cgroup_base_stat_cputime_account_end(cgrp, rstatc, flags);
+	rstatbc = cgroup_base_stat_cputime_account_begin(cgrp, &flags);
+	rstatbc->bstat.cputime.sum_exec_runtime += delta_exec;
+	cgroup_base_stat_cputime_account_end(cgrp, rstatbc, flags);
}
 
 void __cgroup_account_cputime_field(struct cgroup *cgrp,
 				    enum cpu_usage_stat index, u64 delta_exec)
 {
-	struct css_rstat_cpu *rstatc;
+	struct css_rstat_base_cpu *rstatbc;
 	unsigned long flags;
 
-	rstatc = cgroup_base_stat_cputime_account_begin(cgrp, &flags);
+	rstatbc = cgroup_base_stat_cputime_account_begin(cgrp, &flags);
 
 	switch (index) {
 	case CPUTIME_NICE:
-		rstatc->bstat.ntime += delta_exec;
+		rstatbc->bstat.ntime += delta_exec;
 		fallthrough;
 	case CPUTIME_USER:
-		rstatc->bstat.cputime.utime += delta_exec;
+		rstatbc->bstat.cputime.utime += delta_exec;
 		break;
 	case CPUTIME_SYSTEM:
 	case CPUTIME_IRQ:
 	case CPUTIME_SOFTIRQ:
-		rstatc->bstat.cputime.stime += delta_exec;
+		rstatbc->bstat.cputime.stime += delta_exec;
 		break;
#ifdef CONFIG_SCHED_CORE
 	case CPUTIME_FORCEIDLE:
-		rstatc->bstat.forceidle_sum += delta_exec;
+		rstatbc->bstat.forceidle_sum += delta_exec;
 		break;
#endif
 	default:
 		break;
 	}
 
-	cgroup_base_stat_cputime_account_end(cgrp, rstatc, flags);
+	cgroup_base_stat_cputime_account_end(cgrp, rstatbc, flags);
 }
 
 /*