From patchwork Wed Jan 29 06:43:48 2025
X-Patchwork-Submitter: Sergey Senozhatsky <senozhatsky@chromium.org>
X-Patchwork-Id: 13953466
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 2/6] zsmalloc: factor out size-class locking helpers
Date: Wed, 29 Jan 2025 15:43:48 +0900
Message-ID: <20250129064853.2210753-3-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>

Move open-coded size-class locking to dedicated helpers.

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: Yosry Ahmed
---
 mm/zsmalloc.c | 47 ++++++++++++++++++++++++++++-------------------
 1 file changed, 28 insertions(+), 19 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2f8a2b139919..0f575307675d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -254,6 +254,16 @@ static bool pool_lock_is_contended(struct zs_pool *pool)
 	return rwlock_is_contended(&pool->migrate_lock);
 }
 
+static void size_class_lock(struct size_class *class)
+{
+	spin_lock(&class->lock);
+}
+
+static void size_class_unlock(struct size_class *class)
+{
+	spin_unlock(&class->lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
 	SetPagePrivate(zpdesc_page(zpdesc));
@@ -614,8 +624,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		if (class->index != i)
 			continue;
 
-		spin_lock(&class->lock);
-
+		size_class_lock(class);
 		seq_printf(s, " %5u %5u ", i, class->size);
 		for (fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++) {
 			inuse_totals[fg] += class_stat_read(class, fg);
@@ -625,7 +634,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		obj_allocated = class_stat_read(class, ZS_OBJS_ALLOCATED);
 		obj_used = class_stat_read(class, ZS_OBJS_INUSE);
 		freeable = zs_can_compact(class);
-		spin_unlock(&class->lock);
+		size_class_unlock(class);
 
 		objs_per_zspage = class->objs_per_zspage;
 		pages_used = obj_allocated / objs_per_zspage *
@@ -1400,7 +1409,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	class = pool->size_class[get_size_class_index(size)];
 
 	/* class->lock effectively protects the zpage migration */
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	zspage = find_get_zspage(class);
 	if (likely(zspage)) {
 		obj_malloc(pool, zspage, handle);
@@ -1411,7 +1420,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 		goto out;
 	}
 
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 
 	zspage = alloc_zspage(pool, class, gfp);
 	if (!zspage) {
@@ -1419,7 +1428,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 		return (unsigned long)ERR_PTR(-ENOMEM);
 	}
 
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	obj_malloc(pool, zspage, handle);
 	newfg = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, newfg);
@@ -1430,7 +1439,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
 out:
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 
 	return handle;
 }
@@ -1484,7 +1493,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	obj_to_zpdesc(obj, &f_zpdesc);
 	zspage = get_zspage(f_zpdesc);
 	class = zspage_class(pool, zspage);
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	pool_read_unlock(pool);
 
 	class_stat_sub(class, ZS_OBJS_INUSE, 1);
@@ -1494,7 +1503,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	if (fullness == ZS_INUSE_RATIO_0)
 		free_zspage(pool, class, zspage);
 
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	cache_free_handle(pool, handle);
 }
 EXPORT_SYMBOL_GPL(zs_free);
@@ -1828,7 +1837,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/*
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	/* the migrate_write_lock protects zpage access via zs_map_object */
 	migrate_write_lock(zspage);
 
@@ -1860,7 +1869,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * it's okay to release migration_lock.
 	 */
 	pool_write_unlock(pool);
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	migrate_write_unlock(zspage);
 
 	zpdesc_get(newzpdesc);
@@ -1904,10 +1913,10 @@ static void async_free_zspage(struct work_struct *work)
 		if (class->index != i)
 			continue;
 
-		spin_lock(&class->lock);
+		size_class_lock(class);
 		list_splice_init(&class->fullness_list[ZS_INUSE_RATIO_0],
 				 &free_pages);
-		spin_unlock(&class->lock);
+		size_class_unlock(class);
 	}
 
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
@@ -1915,10 +1924,10 @@ static void async_free_zspage(struct work_struct *work)
 		lock_zspage(zspage);
 
 		class = zspage_class(pool, zspage);
-		spin_lock(&class->lock);
+		size_class_lock(class);
 		class_stat_sub(class, ZS_INUSE_RATIO_0, 1);
 		__free_zspage(pool, class, zspage);
-		spin_unlock(&class->lock);
+		size_class_unlock(class);
 	}
 };
 
@@ -1983,7 +1992,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	 * as well as zpage allocation/free
 	 */
 	pool_write_lock(pool);
-	spin_lock(&class->lock);
+	size_class_lock(class);
 	while (zs_can_compact(class)) {
 		int fg;
 
@@ -2013,11 +2022,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			putback_zspage(class, dst_zspage);
 			dst_zspage = NULL;
 
-			spin_unlock(&class->lock);
+			size_class_unlock(class);
 			pool_write_unlock(pool);
 			cond_resched();
 			pool_write_lock(pool);
-			spin_lock(&class->lock);
+			size_class_lock(class);
 		}
 	}
 
@@ -2027,7 +2036,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	if (dst_zspage)
 		putback_zspage(class, dst_zspage);
 
-	spin_unlock(&class->lock);
+	size_class_unlock(class);
 	pool_write_unlock(pool);
 
 	return pages_freed;