From patchwork Wed Jan 29 06:43:47 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13953465
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 1/6] zsmalloc: factor out pool locking helpers
Date: Wed, 29 Jan 2025 15:43:47 +0900
Message-ID: <20250129064853.2210753-2-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>
MIME-Version: 1.0

We currently have a mix of migrate_{read,write}_lock() helpers that
lock zspages, but it's zs_pool that actually has ->migrate_lock,
access to which is open-coded. Factor out pool migrate locking into
helpers; the zspage migration locking API will be renamed in a
follow-up to reduce confusion.
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 56 +++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 41 insertions(+), 15 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 817626a351f8..2f8a2b139919 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -204,7 +204,8 @@ struct link_free {
 };
 
 struct zs_pool {
-	const char *name;
+	/* protect page/zspage migration */
+	rwlock_t migrate_lock;
 
 	struct size_class *size_class[ZS_SIZE_CLASSES];
 	struct kmem_cache *handle_cachep;
@@ -213,6 +214,7 @@ struct zs_pool {
 	atomic_long_t pages_allocated;
 
 	struct zs_pool_stats stats;
+	atomic_t compaction_in_progress;
 
 	/* Compact classes */
 	struct shrinker *shrinker;
@@ -223,11 +225,35 @@ struct zs_pool {
 #ifdef CONFIG_COMPACTION
 	struct work_struct free_work;
 #endif
-	/* protect page/zspage migration */
-	rwlock_t migrate_lock;
-	atomic_t compaction_in_progress;
+
+	const char *name;
 };
 
+static void pool_write_unlock(struct zs_pool *pool)
+{
+	write_unlock(&pool->migrate_lock);
+}
+
+static void pool_write_lock(struct zs_pool *pool)
+{
+	write_lock(&pool->migrate_lock);
+}
+
+static void pool_read_unlock(struct zs_pool *pool)
+{
+	read_unlock(&pool->migrate_lock);
+}
+
+static void pool_read_lock(struct zs_pool *pool)
+{
+	read_lock(&pool->migrate_lock);
+}
+
+static bool pool_lock_is_contended(struct zs_pool *pool)
+{
+	return rwlock_is_contended(&pool->migrate_lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
 	SetPagePrivate(zpdesc_page(zpdesc));
@@ -1206,7 +1232,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	BUG_ON(in_interrupt());
 
 	/* It guarantees it can get zspage from handle safely */
-	read_lock(&pool->migrate_lock);
+	pool_read_lock(pool);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zpdesc, &obj_idx);
 	zspage = get_zspage(zpdesc);
@@ -1218,7 +1244,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * which is smaller granularity.
 	 */
 	migrate_read_lock(zspage);
-	read_unlock(&pool->migrate_lock);
+	pool_read_unlock(pool);
 
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
@@ -1453,13 +1479,13 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	 * The pool->migrate_lock protects the race with zpage's migration
 	 * so it's safe to get the page from handle.
 	 */
-	read_lock(&pool->migrate_lock);
+	pool_read_lock(pool);
 	obj = handle_to_obj(handle);
 	obj_to_zpdesc(obj, &f_zpdesc);
 	zspage = get_zspage(f_zpdesc);
 	class = zspage_class(pool, zspage);
 	spin_lock(&class->lock);
-	read_unlock(&pool->migrate_lock);
+	pool_read_unlock(pool);
 
 	class_stat_sub(class, ZS_OBJS_INUSE, 1);
 	obj_free(class->size, obj);
@@ -1796,7 +1822,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * The pool migrate_lock protects the race between zpage migration
 	 * and zs_free.
 	 */
-	write_lock(&pool->migrate_lock);
+	pool_write_lock(pool);
 	class = zspage_class(pool, zspage);
 
 	/*
@@ -1833,7 +1859,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release migration_lock.
 	 */
-	write_unlock(&pool->migrate_lock);
+	pool_write_unlock(pool);
 	spin_unlock(&class->lock);
 
 	migrate_write_unlock(zspage);
@@ -1956,7 +1982,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	 * protect the race between zpage migration and zs_free
 	 * as well as zpage allocation/free
 	 */
-	write_lock(&pool->migrate_lock);
+	pool_write_lock(pool);
 	spin_lock(&class->lock);
 	while (zs_can_compact(class)) {
 		int fg;
@@ -1983,14 +2009,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		src_zspage = NULL;
 
 		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
-		    || rwlock_is_contended(&pool->migrate_lock)) {
+		    || pool_lock_is_contended(pool)) {
 			putback_zspage(class, dst_zspage);
 			dst_zspage = NULL;
 
 			spin_unlock(&class->lock);
-			write_unlock(&pool->migrate_lock);
+			pool_write_unlock(pool);
 			cond_resched();
-			write_lock(&pool->migrate_lock);
+			pool_write_lock(pool);
 			spin_lock(&class->lock);
 		}
 	}
@@ -2002,7 +2028,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		putback_zspage(class, dst_zspage);
 
 	spin_unlock(&class->lock);
-	write_unlock(&pool->migrate_lock);
+	pool_write_unlock(pool);
 
 	return pages_freed;
 }
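As a side note for readers, the lock-handoff pattern the patch keeps in zs_map_object() (take the coarse pool read lock, pin the zspage under its own finer-grained lock, then release the pool lock early) can be sketched as a toy single-threaded model. This is hypothetical userspace illustration code, not kernel code: toy_lock, map_object() and unmap_object() are made-up names, and plain reader counters stand in for rwlock_t so the ordering can be checked with assertions.

```c
#include <assert.h>

/* Toy model: each "lock" just counts readers so the sequence of
 * operations is observable. Illustration only, not the kernel API. */
struct toy_lock { int readers; };

struct zs_pool { struct toy_lock migrate_lock; };
struct zspage  { struct toy_lock lock; };

static void pool_read_lock(struct zs_pool *pool)   { pool->migrate_lock.readers++; }
static void pool_read_unlock(struct zs_pool *pool) { pool->migrate_lock.readers--; }

/* The zs_map_object() handoff: the zspage lock is taken while the
 * pool lock is still held, so migration can never slip in between;
 * the coarse pool lock is then dropped as early as possible. */
static void map_object(struct zs_pool *pool, struct zspage *zspage)
{
	pool_read_lock(pool);
	assert(pool->migrate_lock.readers > 0); /* still protected here */
	zspage->lock.readers++;                 /* pin this zspage      */
	pool_read_unlock(pool);                 /* coarse lock released */
}

static void unmap_object(struct zspage *zspage)
{
	zspage->lock.readers--;
}
```

The point of the handoff is that at no instant is the object unprotected: the per-zspage lock overlaps the pool lock, which is why the patch can wrap the pool lock in helpers without changing the ordering.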