From patchwork Thu Jan 5 05:35:07 2023
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Minchan Kim, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Sergey Senozhatsky
Subject: [PATCH 1/4] zsmalloc: rework zspage chain size selection
Date: Thu, 5 Jan 2023 14:35:07 +0900
Message-Id: <20230105053510.1819862-2-senozhatsky@chromium.org>
In-Reply-To: <20230105053510.1819862-1-senozhatsky@chromium.org>
References: <20230105053510.1819862-1-senozhatsky@chromium.org>
Computers are bad at division. We currently decide the best zspage
chain size (the maximum number of physical pages per zspage) by
looking at a `used percentage` value. This is not precise enough,
because we lose precision during the used-percentage calculation.

For example, let's look at size class 208:

  pages per zspage    wasted bytes    used%
         1                144          96
         2                 80          99
         3                 16          99
         4                160          99

The current algorithm selects the 2 pages per zspage configuration,
as it's the first one to reach 99%. However, the 3 pages per zspage
configuration wastes less memory. Change the algorithm to select the
zspage configuration with the lowest wasted-bytes value.
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 56 +++++++++++++++++----------------------------------
 1 file changed, 19 insertions(+), 37 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9445bee6b014..959126e708a3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -802,42 +802,6 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 	return newfg;
 }
 
-/*
- * We have to decide on how many pages to link together
- * to form a zspage for each size class. This is important
- * to reduce wastage due to unusable space left at end of
- * each zspage which is given as:
- *	wastage = Zp % class_size
- *	usage = Zp - wastage
- * where Zp = zspage size = k * PAGE_SIZE where k = 1, 2, ...
- *
- * For example, for size class of 3/8 * PAGE_SIZE, we should
- * link together 3 PAGE_SIZE sized pages to form a zspage
- * since then we can perfectly fit in 8 such objects.
- */
-static int get_pages_per_zspage(int class_size)
-{
-	int i, max_usedpc = 0;
-	/* zspage order which gives maximum used size per KB */
-	int max_usedpc_order = 1;
-
-	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
-		int zspage_size;
-		int waste, usedpc;
-
-		zspage_size = i * PAGE_SIZE;
-		waste = zspage_size % class_size;
-		usedpc = (zspage_size - waste) * 100 / zspage_size;
-
-		if (usedpc > max_usedpc) {
-			max_usedpc = usedpc;
-			max_usedpc_order = i;
-		}
-	}
-
-	return max_usedpc_order;
-}
-
 static struct zspage *get_zspage(struct page *page)
 {
 	struct zspage *zspage = (struct zspage *)page_private(page);
@@ -2321,6 +2285,24 @@ static int zs_register_shrinker(struct zs_pool *pool)
 						 pool->name);
 }
 
+static int calculate_zspage_chain_size(int class_size)
+{
+	int i, min_waste = INT_MAX;
+	int chain_size = 1;
+
+	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
+		int waste;
+
+		waste = (i * PAGE_SIZE) % class_size;
+		if (waste < min_waste) {
+			min_waste = waste;
+			chain_size = i;
+		}
+	}
+
+	return chain_size;
+}
+
 /**
  * zs_create_pool - Creates an allocation pool to work from.
 * @name: pool name to be created
@@ -2365,7 +2347,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		size = ZS_MIN_ALLOC_SIZE + i * ZS_SIZE_CLASS_DELTA;
 		if (size > ZS_MAX_ALLOC_SIZE)
 			size = ZS_MAX_ALLOC_SIZE;
-		pages_per_zspage = get_pages_per_zspage(size);
+		pages_per_zspage = calculate_zspage_chain_size(size);
 		objs_per_zspage = pages_per_zspage * PAGE_SIZE / size;
 
 		/*