From patchwork Thu Jan 30 04:42:46 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13954301
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton, Yosry Ahmed
Cc: Minchan Kim, Johannes Weiner, Nhat Pham, Uros Bizjak,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Sergey Senozhatsky
Subject: [PATCHv2 3/7] zsmalloc: factor out size-class locking helpers
Date: Thu, 30 Jan 2025 13:42:46 +0900
Message-ID: <20250130044455.2642465-4-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
In-Reply-To: <20250130044455.2642465-1-senozhatsky@chromium.org>
References: <20250130044455.2642465-1-senozhatsky@chromium.org>
MIME-Version: 1.0

Move open-coded size-class locking to dedicated helpers.
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: Yosry Ahmed
---
 mm/zsmalloc.c | 47 ++++++++++++++++++++++++++++-------------------
 1 file changed, 28 insertions(+), 19 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2280ea17796b..9053777035af 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -255,6 +255,16 @@ static bool pool_lock_is_contended(struct zs_pool *pool)
 	return rwlock_is_contended(&pool->migrate_lock);
 }
 
+static void size_class_lock(struct size_class *class)
+{
+	spin_lock(&class->lock);
+}
+
+static void size_class_unlock(struct size_class *class)
+{
+	spin_unlock(&class->lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
 	SetPagePrivate(zpdesc_page(zpdesc));
@@ -615,8 +625,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		if (class->index != i)
 			continue;
 
-		spin_lock(&class->lock);
-
+		size_class_lock(class);
 		seq_printf(s, " %5u %5u ", i, class->size);
 		for (fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++) {
 			inuse_totals[fg] += class_stat_read(class, fg);
@@ -626,7 +635,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		obj_allocated = class_stat_read(class, ZS_OBJS_ALLOCATED);
 		obj_used = class_stat_read(class, ZS_OBJS_INUSE);
 		freeable = zs_can_compact(class);
-		spin_unlock(&class->lock);
+		size_class_unlock(class);
 
 		objs_per_zspage = class->objs_per_zspage;
 		pages_used = obj_allocated / objs_per_zspage *
@@ -1401,7 +1410,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	class = pool->size_class[get_size_class_index(size)];
 
 	/* class->lock effectively protects the zpage migration */
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
 		obj_malloc(pool, zspage, handle);
@@ -1412,7 +1421,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 		goto out;
 	}
 
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 
 	zspage = alloc_zspage(pool, class, gfp);
 	if (!zspage) {
@@ -1420,7 +1429,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 		return (unsigned long)ERR_PTR(-ENOMEM);
 	}
 
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	obj_malloc(pool, zspage, handle);
 	newfg = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, newfg);
@@ -1431,7 +1440,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
 out:
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 
 	return handle;
 }
@@ -1485,7 +1494,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	obj_to_zpdesc(obj, &f_zpdesc);
 	zspage = get_zspage(f_zpdesc);
 	class = zspage_class(pool, zspage);
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	pool_read_unlock(pool);
 
 	class_stat_sub(class, ZS_OBJS_INUSE, 1);
@@ -1495,7 +1504,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	if (fullness == ZS_INUSE_RATIO_0)
 		free_zspage(pool, class, zspage);
 
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	cache_free_handle(pool, handle);
 }
 EXPORT_SYMBOL_GPL(zs_free);
@@ -1829,7 +1838,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/*
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	/* the migrate_write_lock protects zpage access via zs_map_object */
 	migrate_write_lock(zspage);
 
@@ -1861,7 +1870,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * it's okay to release migration_lock.
 	 */
 	pool_write_unlock(pool);
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	migrate_write_unlock(zspage);
 
 	zpdesc_get(newzpdesc);
@@ -1905,10 +1914,10 @@ static void async_free_zspage(struct work_struct *work)
 		if (class->index != i)
 			continue;
 
-		spin_lock(&class->lock);
+		size_class_lock(class);
 		list_splice_init(&class->fullness_list[ZS_INUSE_RATIO_0],
 				 &free_pages);
-		spin_unlock(&class->lock);
+		size_class_unlock(class);
 	}
 
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
@@ -1916,10 +1925,10 @@ static void async_free_zspage(struct work_struct *work)
 		lock_zspage(zspage);
 
 		class = zspage_class(pool, zspage);
-		spin_lock(&class->lock);
+		size_class_lock(class);
 		class_stat_sub(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
-		spin_unlock(&class->lock);
+		size_class_unlock(class);
 	}
 };
 
@@ -1984,7 +1993,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	 * as well as zpage allocation/free
 	 */
 	pool_write_lock(pool);
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	while (zs_can_compact(class)) {
 		int fg;
 
@@ -2014,11 +2023,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			putback_zspage(class, dst_zspage);
 			dst_zspage = NULL;
 
-			spin_unlock(&class->lock);
+			size_class_unlock(class);
 			pool_write_unlock(pool);
 			cond_resched();
 			pool_write_lock(pool);
-			spin_lock(&class->lock);
+			size_class_lock(class);
 		}
 	}
 
@@ -2028,7 +2037,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	if (dst_zspage)
 		putback_zspage(class, dst_zspage);
 
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	pool_write_unlock(pool);
 
 	return pages_freed;