From patchwork Thu Aug 25 00:05:06 2022
X-Patchwork-Id: 12954100
Date: Thu, 25 Aug 2022 00:05:06 +0000
Message-Id: <20220825000506.239406-4-shakeelb@google.com>
In-Reply-To: <20220825000506.239406-1-shakeelb@google.com>
References: <20220825000506.239406-1-shakeelb@google.com>
Subject: [PATCH v2 3/3] memcg: increase MEMCG_CHARGE_BATCH to 64
From: Shakeel Butt <shakeelb@google.com>
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song
Cc: Michal Koutný, Eric Dumazet, Soheil Hassas Yeganeh, Feng Tang,
    Oliver Sang, Andrew Morton, lkp@lists.01.org, cgroups@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Shakeel Butt <shakeelb@google.com>
For several years, MEMCG_CHARGE_BATCH was kept at 32, but with bigger
machines and network-intensive workloads requiring throughput in Gbps,
32 is too small and makes the memcg charging path a bottleneck. For now,
increase it to 64 for easy acceptance into 6.0. We will need to revisit
this in the future as the demand for higher performance keeps growing.

Please note that the memcg charge path drains the per-cpu memcg charge
stock, so there should not be any change in oom behavior. It does,
however, have an impact on rstat flushing and on the high limit reclaim
backoff.

To evaluate the impact of this optimization, we ran the following
workload on a machine with 72 CPUs, in a three-level cgroup hierarchy:

 $ netserver -6
 # 36 instances of netperf with the following params
 $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K

Results (average throughput of netperf):

 Without (6.0-rc1)   10482.7 Mbps
 With patch          17064.7 Mbps (62.7% improvement)

With the patch, the throughput improved by 62.7%.

Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reported-by: kernel test robot
Acked-by: Soheil Hassas Yeganeh
Reviewed-by: Feng Tang
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
Acked-by: Muchun Song
---
Changes since v1:
- Updated commit message

 include/linux/memcontrol.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4d31ce55b1c0..70ae91188e16 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -354,10 +354,11 @@ struct mem_cgroup {
 };
 
 /*
- * size of first charge trial. "32" comes from vmscan.c's magic value.
- * TODO: maybe necessary to use big numbers in big irons.
+ * size of first charge trial.
+ * TODO: it may be necessary to use bigger numbers on big irons, or to
+ * make this dynamic based on the workload.
  */
-#define MEMCG_CHARGE_BATCH 32U
+#define MEMCG_CHARGE_BATCH 64U
 
 extern struct mem_cgroup *root_mem_cgroup;
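
To see why the batch size matters, below is a toy userspace model of the
per-CPU stock scheme the charge path uses. The real logic lives in
consume_stock()/refill_stock() in mm/memcontrol.c; this sketch only
mirrors the idea, and charge_pages() is a made-up name for illustration.
Each one-page charge is served from a CPU-local stock; only when the
stock runs dry does the charger take the contended slow path on the
shared page counters and refill the stock with MEMCG_CHARGE_BATCH pages:

 #include <stdio.h>

 /*
  * Toy model (not kernel code): each one-page charge is served from a
  * CPU-local stock; when the stock is empty, one "slow path" operation
  * on the shared page counter refills it with `batch` pages.
  */
 static unsigned long charge_pages(unsigned long nr_pages, unsigned int batch)
 {
         unsigned long slow_path_ops = 0, stock = 0;

         while (nr_pages--) {
                 if (!stock) {
                         slow_path_ops++;        /* shared-counter hit */
                         stock = batch;          /* refill local stock */
                 }
                 stock--;                        /* CPU-local fast path */
         }
         return slow_path_ops;
 }

 int main(void)
 {
         unsigned long nr = 1UL << 20;   /* charge 1M single pages */

         printf("batch 32: %lu shared-counter ops\n", charge_pages(nr, 32));
         printf("batch 64: %lu shared-counter ops\n", charge_pages(nr, 64));
         return 0;
 }

Run over 1M single-page charges, this reports 32768 shared-counter
operations for a batch of 32 versus 16384 for a batch of 64: doubling
the batch halves the slow-path traffic. The cost is that up to
MEMCG_CHARGE_BATCH pages per CPU may sit pre-charged in the stocks,
which is bounded and, as the commit message notes, drained by the
charge path, so no oom behavior change is expected.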
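
For anyone reproducing the benchmark: the patch does not spell out how
the three-level hierarchy was built. A plausible cgroup v2 setup (the
names a/b/c and the paths below are invented for illustration, and a
plain cgroup2 mount at /sys/fs/cgroup is assumed) would be:

 $ mkdir -p /sys/fs/cgroup/a/b/c
 $ echo +memory > /sys/fs/cgroup/cgroup.subtree_control
 $ echo +memory > /sys/fs/cgroup/a/cgroup.subtree_control
 $ echo +memory > /sys/fs/cgroup/a/b/cgroup.subtree_control
 $ echo $$ > /sys/fs/cgroup/a/b/c/cgroup.procs   # run netserver/netperf here

after which the netserver/netperf commands above run with memcg charging
active at all three levels of the hierarchy.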