From patchwork Wed Feb 12 06:27:08 2025
X-Patchwork-Submitter: Sergey Senozhatsky <senozhatsky@chromium.org>
X-Patchwork-Id: 13971038
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton
Cc: Yosry Ahmed, Kairui Song, Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCH v5 10/18] zsmalloc: factor out pool locking helpers
Date: Wed, 12 Feb 2025 15:27:08 +0900
Message-ID: <20250212063153.179231-11-senozhatsky@chromium.org>
In-Reply-To: <20250212063153.179231-1-senozhatsky@chromium.org>
References: <20250212063153.179231-1-senozhatsky@chromium.org>
MIME-Version: 1.0
We currently have a mix of migrate_{read,write}_lock() helpers that lock
zspages, but it is zs_pool that actually has a ->migrate_lock, access to
which is open-coded. Factor out pool migrate locking into helpers; the
zspage migration locking API will be renamed to reduce confusion.
It is worth mentioning that zsmalloc locks synchronize not only
migration, but also compaction.

Signed-off-by: Sergey Senozhatsky
---
 mm/zsmalloc.c | 63 +++++++++++++++++++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 19 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 6d0e47f7ae33..47c638df47c5 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -18,7 +18,7 @@
 /*
  * lock ordering:
  *	page_lock
- *	pool->migrate_lock
+ *	pool->lock
  *	class->lock
  *	zspage->lock
  */
@@ -224,10 +224,35 @@ struct zs_pool {
 	struct work_struct free_work;
 #endif
 	/* protect page/zspage migration */
-	rwlock_t migrate_lock;
+	rwlock_t lock;
 	atomic_t compaction_in_progress;
 };
 
+static void pool_write_unlock(struct zs_pool *pool)
+{
+	write_unlock(&pool->lock);
+}
+
+static void pool_write_lock(struct zs_pool *pool)
+{
+	write_lock(&pool->lock);
+}
+
+static void pool_read_unlock(struct zs_pool *pool)
+{
+	read_unlock(&pool->lock);
+}
+
+static void pool_read_lock(struct zs_pool *pool)
+{
+	read_lock(&pool->lock);
+}
+
+static bool pool_lock_is_contended(struct zs_pool *pool)
+{
+	return rwlock_is_contended(&pool->lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
 	SetPagePrivate(zpdesc_page(zpdesc));
@@ -1206,7 +1231,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	BUG_ON(in_interrupt());
 
 	/* It guarantees it can get zspage from handle safely */
-	read_lock(&pool->migrate_lock);
+	pool_read_lock(pool);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zpdesc, &obj_idx);
 	zspage = get_zspage(zpdesc);
@@ -1218,7 +1243,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * which is smaller granularity.
	 */
 	migrate_read_lock(zspage);
-	read_unlock(&pool->migrate_lock);
+	pool_read_unlock(pool);
 
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
@@ -1450,16 +1475,16 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 		return;
 
 	/*
-	 * The pool->migrate_lock protects the race with zpage's migration
+	 * The pool->lock protects the race with zpage's migration
 	 * so it's safe to get the page from handle.
 	 */
-	read_lock(&pool->migrate_lock);
+	pool_read_lock(pool);
 	obj = handle_to_obj(handle);
 	obj_to_zpdesc(obj, &f_zpdesc);
 	zspage = get_zspage(f_zpdesc);
 	class = zspage_class(pool, zspage);
 	spin_lock(&class->lock);
-	read_unlock(&pool->migrate_lock);
+	pool_read_unlock(pool);
 
 	class_stat_sub(class, ZS_OBJS_INUSE, 1);
 	obj_free(class->size, obj);
@@ -1793,10 +1818,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	pool = zspage->pool;
 
 	/*
-	 * The pool migrate_lock protects the race between zpage migration
+	 * The pool lock protects the race between zpage migration
 	 * and zs_free.
 	 */
-	write_lock(&pool->migrate_lock);
+	pool_write_lock(pool);
 	class = zspage_class(pool, zspage);
 
 	/*
@@ -1833,7 +1858,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release migration_lock.
	 */
-	write_unlock(&pool->migrate_lock);
+	pool_write_unlock(pool);
 	spin_unlock(&class->lock);
 	migrate_write_unlock(zspage);
@@ -1956,7 +1981,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
	 * protect the race between zpage migration and zs_free
	 * as well as zpage allocation/free
	 */
-	write_lock(&pool->migrate_lock);
+	pool_write_lock(pool);
 	spin_lock(&class->lock);
 	while (zs_can_compact(class)) {
 		int fg;
@@ -1983,14 +2008,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			src_zspage = NULL;
 
 			if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
-			    || rwlock_is_contended(&pool->migrate_lock)) {
+			    || pool_lock_is_contended(pool)) {
 				putback_zspage(class, dst_zspage);
 				dst_zspage = NULL;
 
 				spin_unlock(&class->lock);
-				write_unlock(&pool->migrate_lock);
+				pool_write_unlock(pool);
 				cond_resched();
-				write_lock(&pool->migrate_lock);
+				pool_write_lock(pool);
 				spin_lock(&class->lock);
 			}
 		}
@@ -2002,7 +2027,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		putback_zspage(class, dst_zspage);
 
 	spin_unlock(&class->lock);
-	write_unlock(&pool->migrate_lock);
+	pool_write_unlock(pool);
 
 	return pages_freed;
 }
@@ -2014,10 +2039,10 @@ unsigned long zs_compact(struct zs_pool *pool)
 	unsigned long pages_freed = 0;
 
 	/*
-	 * Pool compaction is performed under pool->migrate_lock so it is basically
+	 * Pool compaction is performed under pool->lock so it is basically
 	 * single-threaded. Having more than one thread in __zs_compact()
-	 * will increase pool->migrate_lock contention, which will impact other
-	 * zsmalloc operations that need pool->migrate_lock.
+	 * will increase pool->lock contention, which will impact other
+	 * zsmalloc operations that need pool->lock.
	 */
 	if (atomic_xchg(&pool->compaction_in_progress, 1))
 		return 0;
@@ -2139,7 +2164,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		return NULL;
 
 	init_deferred_free(pool);
-	rwlock_init(&pool->migrate_lock);
+	rwlock_init(&pool->lock);
 	atomic_set(&pool->compaction_in_progress, 0);
 	pool->name = kstrdup(name, GFP_KERNEL);