From patchwork Mon Jan 27 07:59:29 2025
X-Patchwork-Submitter: Sergey Senozhatsky <senozhatsky@chromium.org>
X-Patchwork-Id: 13951056
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [RFC PATCH 4/6] zsmalloc: make class lock sleepable
Date: Mon, 27 Jan 2025 16:59:29 +0900
Message-ID: <20250127080254.1302026-5-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
In-Reply-To: <20250127080254.1302026-1-senozhatsky@chromium.org>
References: <20250127080254.1302026-1-senozhatsky@chromium.org>
MIME-Version: 1.0

Switch over from a spinlock to a mutex, and introduce simple helpers
to lock/unlock the size class. This is needed to make zsmalloc
preemptible in the future.

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 54 ++++++++++++++++++++++++++++-----------------------
 1 file changed, 30 insertions(+), 24 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 751871ec533f..a5c1f9852072 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -168,7 +168,7 @@ static struct dentry *zs_stat_root;
 static size_t huge_class_size;
 
 struct size_class {
-	spinlock_t lock;
+	struct mutex lock;
 	struct list_head fullness_list[NR_FULLNESS_GROUPS];
 	/*
 	 * Size of objects stored in this class. Must be multiple
@@ -252,6 +252,16 @@ static bool zspool_lock_is_contended(struct zs_pool *pool)
 	return rwsem_is_contended(&pool->migrate_lock);
 }
 
+static void size_class_lock(struct size_class *class)
+{
+	mutex_lock(&class->lock);
+}
+
+static void size_class_unlock(struct size_class *class)
+{
+	mutex_unlock(&class->lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
 	SetPagePrivate(zpdesc_page(zpdesc));
@@ -657,8 +667,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		if (class->index != i)
 			continue;
 
-		spin_lock(&class->lock);
-
+		size_class_lock(class);
 		seq_printf(s, " %5u %5u ", i, class->size);
 		for (fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++) {
 			inuse_totals[fg] += class_stat_read(class, fg);
@@ -668,7 +677,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		obj_allocated = class_stat_read(class, ZS_OBJS_ALLOCATED);
 		obj_used = class_stat_read(class, ZS_OBJS_INUSE);
 		freeable = zs_can_compact(class);
-		spin_unlock(&class->lock);
+		size_class_unlock(class);
 
 		objs_per_zspage = class->objs_per_zspage;
 		pages_used = obj_allocated / objs_per_zspage *
@@ -926,8 +935,6 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 {
 	struct zpdesc *zpdesc, *next;
 
-	assert_spin_locked(&class->lock);
-
 	VM_BUG_ON(get_zspage_inuse(zspage));
 	VM_BUG_ON(zspage->fullness != ZS_INUSE_RATIO_0);
 
@@ -1443,7 +1450,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	class = pool->size_class[get_size_class_index(size)];
 
 	/* class->lock effectively protects the zpage migration */
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
 		obj_malloc(pool, zspage, handle);
@@ -1453,8 +1460,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 
 		goto out;
 	}
-
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 
 	zspage = alloc_zspage(pool, class, gfp);
 	if (!zspage) {
@@ -1462,7 +1468,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 		return (unsigned long)ERR_PTR(-ENOMEM);
 	}
 
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	obj_malloc(pool, zspage, handle);
 	newfg = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, newfg);
@@ -1473,7 +1479,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
 out:
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 
 	return handle;
 }
@@ -1527,7 +1533,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	obj_to_zpdesc(obj, &f_zpdesc);
 	zspage = get_zspage(f_zpdesc);
 	class = zspage_class(pool, zspage);
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	pool_read_unlock(pool);
 
 	class_stat_sub(class, ZS_OBJS_INUSE, 1);
@@ -1537,7 +1543,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	if (fullness == ZS_INUSE_RATIO_0)
 		free_zspage(pool, class, zspage);
 
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	cache_free_handle(pool, handle);
 }
 EXPORT_SYMBOL_GPL(zs_free);
@@ -1846,7 +1852,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/*
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	/* the zspage_write_lock protects zpage access via zs_map_object */
 	zspage_write_lock(zspage);
 
@@ -1878,7 +1884,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * it's okay to release migration_lock.
 	 */
 	pool_write_unlock(pool);
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	zspage_write_unlock(zspage);
 
 	zpdesc_get(newzpdesc);
@@ -1922,10 +1928,10 @@ static void async_free_zspage(struct work_struct *work)
 			if (class->index != i)
 				continue;
 
-			spin_lock(&class->lock);
+			size_class_lock(class);
 			list_splice_init(&class->fullness_list[ZS_INUSE_RATIO_0],
 					 &free_pages);
-			spin_unlock(&class->lock);
+			size_class_unlock(class);
 		}
 
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
@@ -1933,10 +1939,10 @@ static void async_free_zspage(struct work_struct *work)
 		lock_zspage(zspage);
 
 		class = zspage_class(pool, zspage);
-		spin_lock(&class->lock);
+		size_class_lock(class);
 		class_stat_sub(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
-		spin_unlock(&class->lock);
+		size_class_unlock(class);
 	}
 };
 
@@ -2001,7 +2007,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	 * as well as zpage allocation/free
 	 */
 	pool_write_lock(pool);
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	while (zs_can_compact(class)) {
 		int fg;
 
@@ -2031,11 +2037,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			putback_zspage(class, dst_zspage);
 			dst_zspage = NULL;
 
-			spin_unlock(&class->lock);
+			size_class_unlock(class);
 			pool_write_unlock(pool);
 			cond_resched();
 			pool_write_lock(pool);
-			spin_lock(&class->lock);
+			size_class_lock(class);
 		}
 	}
 
@@ -2045,7 +2051,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	if (dst_zspage)
 		putback_zspage(class, dst_zspage);
 
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	pool_write_unlock(pool);
 
 	return pages_freed;
@@ -2255,7 +2261,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
 		class->objs_per_zspage = objs_per_zspage;
-		spin_lock_init(&class->lock);
+		mutex_init(&class->lock);
 		pool->size_class[i] = class;
 
 		fullness = ZS_INUSE_RATIO_0;
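
Not part of the patch, but as an illustration of what the conversion
permits: unlike a spinlock, a mutex-protected critical section may
block. A minimal kernel-style sketch, assuming only the standard
mutex_lock()/mutex_unlock() and GFP_KERNEL semantics; the struct and
function names below are hypothetical stand-ins, not zsmalloc code:

    #include <linux/mutex.h>
    #include <linux/slab.h>

    /* Hypothetical stand-in for struct size_class after this patch. */
    struct example_class {
            struct mutex lock;
    };

    static void example_critical_section(struct example_class *class)
    {
            void *buf;

            mutex_lock(&class->lock);  /* may sleep, unlike spin_lock() */
            /*
             * Blocking work is now legal while the class lock is held:
             * a GFP_KERNEL allocation can reclaim and reschedule, which
             * would be a bug inside a spinlock-protected section.
             */
            buf = kmalloc(32, GFP_KERNEL);
            kfree(buf);
            mutex_unlock(&class->lock);
    }

This is the property the commit message refers to: once the class lock
is sleepable, it can be held across operations that may schedule, which
is what makes zsmalloc preemptible.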