From patchwork Wed Jan 29 06:43:47 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13953465
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 1/6] zsmalloc: factor out pool locking helpers
Date: Wed, 29 Jan 2025 15:43:47 +0900
Message-ID: <20250129064853.2210753-2-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>
We currently have a mix of migrate_{read,write}_lock() helpers that
lock zspages, but it's zs_pool that actually has a ->migrate_lock,
access to which is open-coded. Factor out pool migrate locking into
helpers; the zspage migration locking API will be renamed later in
the series to reduce confusion.
Signed-off-by: Sergey Senozhatsky
---
 mm/zsmalloc.c | 56 +++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 41 insertions(+), 15 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 817626a351f8..2f8a2b139919 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -204,7 +204,8 @@ struct link_free {
 };

 struct zs_pool {
-        const char *name;
+        /* protect page/zspage migration */
+        rwlock_t migrate_lock;

         struct size_class *size_class[ZS_SIZE_CLASSES];
         struct kmem_cache *handle_cachep;
@@ -213,6 +214,7 @@ struct zs_pool {

         atomic_long_t pages_allocated;
         struct zs_pool_stats stats;
+        atomic_t compaction_in_progress;

         /* Compact classes */
         struct shrinker *shrinker;
@@ -223,11 +225,35 @@ struct zs_pool {
 #ifdef CONFIG_COMPACTION
         struct work_struct free_work;
 #endif
-        /* protect page/zspage migration */
-        rwlock_t migrate_lock;
-        atomic_t compaction_in_progress;
+
+        const char *name;
 };

+static void pool_write_unlock(struct zs_pool *pool)
+{
+        write_unlock(&pool->migrate_lock);
+}
+
+static void pool_write_lock(struct zs_pool *pool)
+{
+        write_lock(&pool->migrate_lock);
+}
+
+static void pool_read_unlock(struct zs_pool *pool)
+{
+        read_unlock(&pool->migrate_lock);
+}
+
+static void pool_read_lock(struct zs_pool *pool)
+{
+        read_lock(&pool->migrate_lock);
+}
+
+static bool pool_lock_is_contended(struct zs_pool *pool)
+{
+        return rwlock_is_contended(&pool->migrate_lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
         SetPagePrivate(zpdesc_page(zpdesc));
@@ -1206,7 +1232,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
         BUG_ON(in_interrupt());

         /* It guarantees it can get zspage from handle safely */
-        read_lock(&pool->migrate_lock);
+        pool_read_lock(pool);
         obj = handle_to_obj(handle);
         obj_to_location(obj, &zpdesc, &obj_idx);
         zspage = get_zspage(zpdesc);
@@ -1218,7 +1244,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
          * which is smaller granularity.
          */
         migrate_read_lock(zspage);
-        read_unlock(&pool->migrate_lock);
+        pool_read_unlock(pool);

         class = zspage_class(pool, zspage);
         off = offset_in_page(class->size * obj_idx);
@@ -1453,13 +1479,13 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
          * The pool->migrate_lock protects the race with zpage's migration
          * so it's safe to get the page from handle.
          */
-        read_lock(&pool->migrate_lock);
+        pool_read_lock(pool);
         obj = handle_to_obj(handle);
         obj_to_zpdesc(obj, &f_zpdesc);
         zspage = get_zspage(f_zpdesc);
         class = zspage_class(pool, zspage);
         spin_lock(&class->lock);
-        read_unlock(&pool->migrate_lock);
+        pool_read_unlock(pool);

         class_stat_sub(class, ZS_OBJS_INUSE, 1);
         obj_free(class->size, obj);
@@ -1796,7 +1822,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
          * The pool migrate_lock protects the race between zpage migration
          * and zs_free.
          */
-        write_lock(&pool->migrate_lock);
+        pool_write_lock(pool);
         class = zspage_class(pool, zspage);

         /*
@@ -1833,7 +1859,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
          * Since we complete the data copy and set up new zspage structure,
          * it's okay to release migration_lock.
          */
-        write_unlock(&pool->migrate_lock);
+        pool_write_unlock(pool);
         spin_unlock(&class->lock);
         migrate_write_unlock(zspage);
@@ -1956,7 +1982,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
          * protect the race between zpage migration and zs_free
          * as well as zpage allocation/free
          */
-        write_lock(&pool->migrate_lock);
+        pool_write_lock(pool);
         spin_lock(&class->lock);
         while (zs_can_compact(class)) {
                 int fg;
@@ -1983,14 +2009,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
                 src_zspage = NULL;

                 if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
-                    || rwlock_is_contended(&pool->migrate_lock)) {
+                    || pool_lock_is_contended(pool)) {
                         putback_zspage(class, dst_zspage);
                         dst_zspage = NULL;

                         spin_unlock(&class->lock);
-                        write_unlock(&pool->migrate_lock);
+                        pool_write_unlock(pool);
                         cond_resched();
-                        write_lock(&pool->migrate_lock);
+                        pool_write_lock(pool);
                         spin_lock(&class->lock);
                 }
         }
@@ -2002,7 +2028,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
         putback_zspage(class, dst_zspage);

         spin_unlock(&class->lock);
-        write_unlock(&pool->migrate_lock);
+        pool_write_unlock(pool);

         return pages_freed;
 }

From patchwork Wed Jan 29 06:43:48 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13953466
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 2/6] zsmalloc: factor out size-class locking helpers
Date: Wed, 29 Jan 2025 15:43:48 +0900
Message-ID: <20250129064853.2210753-3-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>
Move open-coded size-class locking to dedicated helpers.
Signed-off-by: Sergey Senozhatsky
Reviewed-by: Yosry Ahmed
---
 mm/zsmalloc.c | 47 ++++++++++++++++++++++++++++-------------------
 1 file changed, 28 insertions(+), 19 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2f8a2b139919..0f575307675d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -254,6 +254,16 @@ static bool pool_lock_is_contended(struct zs_pool *pool)
         return rwlock_is_contended(&pool->migrate_lock);
 }

+static void size_class_lock(struct size_class *class)
+{
+        spin_lock(&class->lock);
+}
+
+static void size_class_unlock(struct size_class *class)
+{
+        spin_unlock(&class->lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
         SetPagePrivate(zpdesc_page(zpdesc));
@@ -614,8 +624,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
                 if (class->index != i)
                         continue;

-                spin_lock(&class->lock);
-
+                size_class_lock(class);
                 seq_printf(s, " %5u %5u ", i, class->size);
                 for (fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++) {
                         inuse_totals[fg] += class_stat_read(class, fg);
@@ -625,7 +634,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
                 obj_allocated = class_stat_read(class, ZS_OBJS_ALLOCATED);
                 obj_used = class_stat_read(class, ZS_OBJS_INUSE);
                 freeable = zs_can_compact(class);
-                spin_unlock(&class->lock);
+                size_class_unlock(class);

                 objs_per_zspage = class->objs_per_zspage;
                 pages_used = obj_allocated / objs_per_zspage *
@@ -1400,7 +1409,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
         class = pool->size_class[get_size_class_index(size)];

         /* class->lock effectively protects the zpage migration */
-        spin_lock(&class->lock);
+        size_class_lock(class);
         zspage = find_get_zspage(class);
         if (likely(zspage)) {
                 obj_malloc(pool, zspage, handle);
@@ -1411,7 +1420,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
                 goto out;
         }

-        spin_unlock(&class->lock);
+        size_class_unlock(class);

         zspage = alloc_zspage(pool, class, gfp);
         if (!zspage) {
@@ -1419,7 +1428,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
                 return (unsigned long)ERR_PTR(-ENOMEM);
         }

-        spin_lock(&class->lock);
+        size_class_lock(class);
         obj_malloc(pool, zspage, handle);
         newfg = get_fullness_group(class, zspage);
         insert_zspage(class, zspage, newfg);
@@ -1430,7 +1439,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
         /* We completely set up zspage so mark them as movable */
         SetZsPageMovable(pool, zspage);
 out:
-        spin_unlock(&class->lock);
+        size_class_unlock(class);

         return handle;
 }
@@ -1484,7 +1493,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
         obj_to_zpdesc(obj, &f_zpdesc);
         zspage = get_zspage(f_zpdesc);
         class = zspage_class(pool, zspage);
-        spin_lock(&class->lock);
+        size_class_lock(class);
         pool_read_unlock(pool);

         class_stat_sub(class, ZS_OBJS_INUSE, 1);
@@ -1494,7 +1503,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
         if (fullness == ZS_INUSE_RATIO_0)
                 free_zspage(pool, class, zspage);

-        spin_unlock(&class->lock);
+        size_class_unlock(class);
         cache_free_handle(pool, handle);
 }
 EXPORT_SYMBOL_GPL(zs_free);
@@ -1828,7 +1837,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
         /*
          * the class lock protects zpage alloc/free in the zspage.
          */
-        spin_lock(&class->lock);
+        size_class_lock(class);
         /* the migrate_write_lock protects zpage access via zs_map_object */
         migrate_write_lock(zspage);
@@ -1860,7 +1869,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
          * it's okay to release migration_lock.
          */
         pool_write_unlock(pool);
-        spin_unlock(&class->lock);
+        size_class_unlock(class);
         migrate_write_unlock(zspage);

         zpdesc_get(newzpdesc);
@@ -1904,10 +1913,10 @@ static void async_free_zspage(struct work_struct *work)
                 if (class->index != i)
                         continue;

-                spin_lock(&class->lock);
+                size_class_lock(class);
                 list_splice_init(&class->fullness_list[ZS_INUSE_RATIO_0],
                                  &free_pages);
-                spin_unlock(&class->lock);
+                size_class_unlock(class);
         }

         list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
@@ -1915,10 +1924,10 @@ static void async_free_zspage(struct work_struct *work)
                 lock_zspage(zspage);

                 class = zspage_class(pool, zspage);
-                spin_lock(&class->lock);
+                size_class_lock(class);
                 class_stat_sub(class, ZS_INUSE_RATIO_0, 1);
                 __free_zspage(pool, class, zspage);
-                spin_unlock(&class->lock);
+                size_class_unlock(class);
         }
 };
@@ -1983,7 +1992,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
          * as well as zpage allocation/free
          */
         pool_write_lock(pool);
-        spin_lock(&class->lock);
+        size_class_lock(class);
         while (zs_can_compact(class)) {
                 int fg;
@@ -2013,11 +2022,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
                         putback_zspage(class, dst_zspage);
                         dst_zspage = NULL;

-                        spin_unlock(&class->lock);
+                        size_class_unlock(class);
                         pool_write_unlock(pool);
                         cond_resched();
                         pool_write_lock(pool);
-                        spin_lock(&class->lock);
+                        size_class_lock(class);
                 }
         }
@@ -2027,7 +2036,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
         if (dst_zspage)
                 putback_zspage(class, dst_zspage);

-        spin_unlock(&class->lock);
+        size_class_unlock(class);
         pool_write_unlock(pool);

         return pages_freed;

From patchwork Wed Jan 29 06:43:49 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13953467
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 3/6] zsmalloc: make zspage lock preemptible
Date: Wed, 29 Jan 2025 15:43:49 +0900
Message-ID: <20250129064853.2210753-4-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>
Switch over from rwlock_t to an atomic_t variable that takes a negative
value when the page is under migration, and positive values when the
page is used by zsmalloc users (object map, etc.). Using a per-zspage
rwsem is a little too memory heavy; a simple atomic_t should suffice.

The zspage lock is a leaf lock for zs_map_object(), where it is
read-acquired.
Since this lock now permits preemption, extra care needs to be taken when
it is write-acquired: all writers grab it in atomic context, so they
cannot spin and wait for a (potentially preempted) reader to unlock the
zspage. There are only two writers at the moment - migration and
compaction. In both cases we use write-try-lock and bail out if the
zspage is read-locked. Writers, on the other hand, never get preempted,
so readers can spin waiting for the writer to unlock the zspage.

With this we can implement a preemptible object mapping.

Signed-off-by: Sergey Senozhatsky
---
 mm/zsmalloc.c | 140 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 88 insertions(+), 52 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 0f575307675d..8f4011713bc8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -293,6 +293,9 @@ static inline void free_zpdesc(struct zpdesc *zpdesc)
 	__free_page(page);
 }
 
+#define ZS_PAGE_UNLOCKED	0
+#define ZS_PAGE_WRLOCKED	-1
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -305,7 +308,7 @@ struct zspage {
 	struct zpdesc *first_zpdesc;
 	struct list_head list; /* fullness list */
 	struct zs_pool *pool;
-	rwlock_t lock;
+	atomic_t lock;
 };
 
 struct mapping_area {
@@ -315,6 +318,64 @@ struct mapping_area {
 	enum zs_mapmode vm_mm; /* mapping mode */
 };
 
+static void zspage_lock_init(struct zspage *zspage)
+{
+	atomic_set(&zspage->lock, ZS_PAGE_UNLOCKED);
+}
+
+/*
+ * zspage lock permits preemption on the reader-side (there can be multiple
+ * readers). Writers (exclusive zspage ownership), on the other hand, are
+ * always run in atomic context and cannot spin waiting for a (potentially
+ * preempted) reader to unlock zspage. This, basically, means that writers
+ * can only call write-try-lock and must bail out if it didn't succeed.
+ *
+ * At the same time, writers cannot reschedule under zspage write-lock,
+ * so readers can spin waiting for the writer to unlock zspage.
+ */
+static void zspage_read_lock(struct zspage *zspage)
+{
+	atomic_t *lock = &zspage->lock;
+	int old;
+
+	while (1) {
+		old = atomic_read(lock);
+		if (old == ZS_PAGE_WRLOCKED) {
+			cpu_relax();
+			continue;
+		}
+
+		if (atomic_try_cmpxchg(lock, &old, old + 1))
+			return;
+
+		cpu_relax();
+	}
+}
+
+static void zspage_read_unlock(struct zspage *zspage)
+{
+	atomic_dec(&zspage->lock);
+}
+
+static int zspage_try_write_lock(struct zspage *zspage)
+{
+	atomic_t *lock = &zspage->lock;
+	int old = ZS_PAGE_UNLOCKED;
+
+	preempt_disable();
+	if (atomic_try_cmpxchg(lock, &old, ZS_PAGE_WRLOCKED))
+		return 1;
+
+	preempt_enable();
+	return 0;
+}
+
+static void zspage_write_unlock(struct zspage *zspage)
+{
+	atomic_set(&zspage->lock, ZS_PAGE_UNLOCKED);
+	preempt_enable();
+}
+
 /* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
 static void SetZsHugePage(struct zspage *zspage)
 {
@@ -326,12 +387,6 @@ static bool ZsHugePage(struct zspage *zspage)
 	return zspage->huge;
 }
 
-static void migrate_lock_init(struct zspage *zspage);
-static void migrate_read_lock(struct zspage *zspage);
-static void migrate_read_unlock(struct zspage *zspage);
-static void migrate_write_lock(struct zspage *zspage);
-static void migrate_write_unlock(struct zspage *zspage);
-
 #ifdef CONFIG_COMPACTION
 static void kick_deferred_free(struct zs_pool *pool);
 static void init_deferred_free(struct zs_pool *pool);
@@ -1027,7 +1082,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		return NULL;
 
 	zspage->magic = ZSPAGE_MAGIC;
-	migrate_lock_init(zspage);
+	zspage_lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct zpdesc *zpdesc;
@@ -1252,7 +1307,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * zs_unmap_object API so delegate the locking from class to zspage
 	 * which is smaller granularity.
 	 */
-	migrate_read_lock(zspage);
+	zspage_read_lock(zspage);
 	pool_read_unlock(pool);
 
 	class = zspage_class(pool, zspage);
@@ -1312,7 +1367,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	}
 	local_unlock(&zs_map_area.lock);
 
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
@@ -1706,18 +1761,18 @@ static void lock_zspage(struct zspage *zspage)
 	/*
 	 * Pages we haven't locked yet can be migrated off the list while we're
 	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * lock each page under zspage_read_lock(). Otherwise, the page we lock
 	 * may no longer belong to the zspage. This means that we may wait for
 	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 * prior to waiting for it to unlock outside zspage_read_lock().
 	 */
 	while (1) {
-		migrate_read_lock(zspage);
+		zspage_read_lock(zspage);
 		zpdesc = get_first_zpdesc(zspage);
 		if (zpdesc_trylock(zpdesc))
 			break;
 		zpdesc_get(zpdesc);
-		migrate_read_unlock(zspage);
+		zspage_read_unlock(zspage);
 		zpdesc_wait_locked(zpdesc);
 		zpdesc_put(zpdesc);
 	}
@@ -1728,41 +1783,16 @@ static void lock_zspage(struct zspage *zspage)
 			curr_zpdesc = zpdesc;
 		} else {
 			zpdesc_get(zpdesc);
-			migrate_read_unlock(zspage);
+			zspage_read_unlock(zspage);
 			zpdesc_wait_locked(zpdesc);
 			zpdesc_put(zpdesc);
-			migrate_read_lock(zspage);
+			zspage_read_lock(zspage);
 		}
 	}
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 #endif /* CONFIG_COMPACTION */
 
-static void migrate_lock_init(struct zspage *zspage)
-{
-	rwlock_init(&zspage->lock);
-}
-
-static void migrate_read_lock(struct zspage *zspage) __acquires(&zspage->lock)
-{
-	read_lock(&zspage->lock);
-}
-
-static void migrate_read_unlock(struct zspage *zspage) __releases(&zspage->lock)
-{
-	read_unlock(&zspage->lock);
-}
-
-static void migrate_write_lock(struct zspage *zspage)
-{
-	write_lock(&zspage->lock);
-}
-
-static void migrate_write_unlock(struct zspage *zspage)
-{
-	write_unlock(&zspage->lock);
-}
-
 #ifdef CONFIG_COMPACTION
 
 static const struct movable_operations zsmalloc_mops;
@@ -1804,7 +1834,7 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 }
 
 static int zs_page_migrate(struct page *newpage, struct page *page,
-		enum migrate_mode mode)
+			   enum migrate_mode mode)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
@@ -1820,15 +1850,12 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 	VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
 
-	/* We're committed, tell the world that this is a Zsmalloc page. */
-	__zpdesc_set_zsmalloc(newzpdesc);
-
 	/* The page is locked, so this pointer must remain valid */
 	zspage = get_zspage(zpdesc);
 	pool = zspage->pool;
 
 	/*
-	 * The pool migrate_lock protects the race between zpage migration
+	 * The pool->migrate_lock protects the race between zpage migration
 	 * and zs_free.
 	 */
 	pool_write_lock(pool);
@@ -1838,8 +1865,15 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
 	size_class_lock(class);
-	/* the migrate_write_lock protects zpage access via zs_map_object */
-	migrate_write_lock(zspage);
+	/* the zspage write_lock protects zpage access via zs_map_object */
+	if (!zspage_try_write_lock(zspage)) {
+		size_class_unlock(class);
+		pool_write_unlock(pool);
+		return -EINVAL;
+	}
+
+	/* We're committed, tell the world that this is a Zsmalloc page. */
+	__zpdesc_set_zsmalloc(newzpdesc);
 
 	offset = get_first_obj_offset(zpdesc);
 	s_addr = kmap_local_zpdesc(zpdesc);
@@ -1870,7 +1904,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 */
 	pool_write_unlock(pool);
 	size_class_unlock(class);
-	migrate_write_unlock(zspage);
+	zspage_write_unlock(zspage);
 
 	zpdesc_get(newzpdesc);
 	if (zpdesc_zone(newzpdesc) != zpdesc_zone(zpdesc)) {
@@ -2006,9 +2040,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		if (!src_zspage)
 			break;
 
-		migrate_write_lock(src_zspage);
+		if (!zspage_try_write_lock(src_zspage))
+			break;
+
 		migrate_zspage(pool, src_zspage, dst_zspage);
-		migrate_write_unlock(src_zspage);
+		zspage_write_unlock(src_zspage);
 
 		fg = putback_zspage(class, src_zspage);
 		if (fg == ZS_INUSE_RATIO_0) {

From patchwork Wed Jan 29 06:43:50 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13953468
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 4/6] zsmalloc: introduce new object mapping API
Date: Wed, 29 Jan 2025 15:43:50 +0900
Message-ID: <20250129064853.2210753-5-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>

The current object mapping API is a little cumbersome. First, it is
inconsistent: it sometimes returns with page faults disabled and
sometimes with page faults enabled. Second, and most importantly, it
enforces atomicity restrictions on its users. zs_map_object() has to
return a linear object address, which is not always possible because
some objects span multiple physical (non-contiguous) pages. For such
objects zsmalloc uses a per-CPU buffer to which the object's data is
copied before a pointer to that per-CPU buffer is returned to the
caller. This leads to the final issue: an extra memcpy(). Since the
caller gets a pointer to the per-CPU buffer, it can memcpy() data only
to that buffer; during zs_unmap_object() zsmalloc then memcpy()-s from
that per-CPU buffer to the physical pages the object spans.

The new API splits functions by access mode:

- zs_obj_read_begin(handle, local_copy)
  Returns a pointer to the handle memory. For objects that span two
  physical pages, the local_copy buffer is used to store the object's
  data before the address is returned to the caller. Otherwise the
  object's page is kmap_local mapped directly.

- zs_obj_read_end(handle, buf)
  Unmaps the page if it was kmap_local mapped by zs_obj_read_begin().

- zs_obj_write(handle, buf, len)
  Copies len bytes from the compression buffer to the handle memory
  (taking care of objects that span two pages). This does not need any
  additional (e.g. per-CPU) buffers and writes the data directly to
  zsmalloc pool pages.
The old API will stay around until the remaining users switch to the new
one. After that we will also remove the zsmalloc per-CPU buffer and the
CPU hotplug handling.

Signed-off-by: Sergey Senozhatsky
Reviewed-by: Yosry Ahmed
---
 include/linux/zsmalloc.h |   8 +++
 mm/zsmalloc.c            | 129 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 137 insertions(+)

diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index a48cd0ffe57d..625adae8e547 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -58,4 +58,12 @@ unsigned long zs_compact(struct zs_pool *pool);
 unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size);
 
 void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
+
+void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
+		     void *handle_mem);
+void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
+			void *local_copy);
+void zs_obj_write(struct zs_pool *pool, unsigned long handle,
+		  void *handle_mem, size_t mem_len);
+
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 8f4011713bc8..0e21bc57470b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1371,6 +1371,135 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
+void zs_obj_write(struct zs_pool *pool, unsigned long handle,
+		  void *handle_mem, size_t mem_len)
+{
+	struct zspage *zspage;
+	struct zpdesc *zpdesc;
+	unsigned long obj, off;
+	unsigned int obj_idx;
+	struct size_class *class;
+
+	WARN_ON(in_interrupt());
+
+	/* Guarantee we can get zspage from handle safely */
+	pool_read_lock(pool);
+	obj = handle_to_obj(handle);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+	zspage = get_zspage(zpdesc);
+
+	/* Make sure migration doesn't move any pages in this zspage */
+	zspage_read_lock(zspage);
+	pool_read_unlock(pool);
+
+	class = zspage_class(pool, zspage);
+	off = offset_in_page(class->size * obj_idx);
+
+	if (off + class->size <= PAGE_SIZE) {
+		/* this object is contained entirely within a page */
+		void *dst = kmap_local_zpdesc(zpdesc);
+
+		if (!ZsHugePage(zspage))
+			off += ZS_HANDLE_SIZE;
+		memcpy(dst + off, handle_mem, mem_len);
+		kunmap_local(dst);
+	} else {
+		size_t sizes[2];
+
+		/* this object spans two pages */
+		off += ZS_HANDLE_SIZE;
+		sizes[0] = PAGE_SIZE - off;
+		sizes[1] = mem_len - sizes[0];
+
+		memcpy_to_page(zpdesc_page(zpdesc), off,
+			       handle_mem, sizes[0]);
+		zpdesc = get_next_zpdesc(zpdesc);
+		memcpy_to_page(zpdesc_page(zpdesc), 0,
+			       handle_mem + sizes[0], sizes[1]);
+	}
+
+	zspage_read_unlock(zspage);
+}
+EXPORT_SYMBOL_GPL(zs_obj_write);
+
+void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
+		     void *handle_mem)
+{
+	struct zspage *zspage;
+	struct zpdesc *zpdesc;
+	unsigned long obj, off;
+	unsigned int obj_idx;
+	struct size_class *class;
+
+	obj = handle_to_obj(handle);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+	zspage = get_zspage(zpdesc);
+	class = zspage_class(pool, zspage);
+	off = offset_in_page(class->size * obj_idx);
+
+	if (off + class->size <= PAGE_SIZE) {
+		if (!ZsHugePage(zspage))
+			off += ZS_HANDLE_SIZE;
+		handle_mem -= off;
+		kunmap_local(handle_mem);
+	}
+
+	zspage_read_unlock(zspage);
+}
+EXPORT_SYMBOL_GPL(zs_obj_read_end);
+
+void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
+			void *local_copy)
+{
+	struct zspage *zspage;
+	struct zpdesc *zpdesc;
+	unsigned long obj, off;
+	unsigned int obj_idx;
+	struct size_class *class;
+	void *addr;
+
+	WARN_ON(in_interrupt());
+
+	/* Guarantee we can get zspage from handle safely */
+	pool_read_lock(pool);
+	obj = handle_to_obj(handle);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+	zspage = get_zspage(zpdesc);
+
+	/* Make sure migration doesn't move any pages in this zspage */
+	zspage_read_lock(zspage);
+	pool_read_unlock(pool);
+
+	class = zspage_class(pool, zspage);
+	off = offset_in_page(class->size * obj_idx);
+
+	if (off + class->size <= PAGE_SIZE) {
+		/* this object is contained entirely within a page */
+		addr = kmap_local_zpdesc(zpdesc);
+		addr += off;
+	} else {
+		size_t sizes[2];
+
+		/* this object spans two pages */
+		sizes[0] = PAGE_SIZE - off;
+		sizes[1] = class->size - sizes[0];
+		addr = local_copy;
+
+		memcpy_from_page(addr, zpdesc_page(zpdesc),
+				 off, sizes[0]);
+		zpdesc = get_next_zpdesc(zpdesc);
+		memcpy_from_page(addr + sizes[0],
+				 zpdesc_page(zpdesc),
+				 0, sizes[1]);
+	}
+
+	if (!ZsHugePage(zspage))
+		addr += ZS_HANDLE_SIZE;
+
+	return addr;
+}
+EXPORT_SYMBOL_GPL(zs_obj_read_begin);
+
 /**
  * zs_huge_class_size() - Returns the size (in bytes) of the first huge
  * zsmalloc &size_class.

From patchwork Wed Jan 29 06:43:51 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13953469
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 5/6] zram: switch to new zsmalloc object mapping API
Date: Wed, 29 Jan 2025 15:43:51 +0900
Message-ID: <20250129064853.2210753-6-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>

Use the new read/write zsmalloc object API. For cases when an RO-mapped
object spans two physical pages (which requires a temp buffer),
compression streams now carry around one extra physical page.

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zcomp.c    |  4 +++-
 drivers/block/zram/zcomp.h    |  2 ++
 drivers/block/zram/zram_drv.c | 28 ++++++++++------------------
 3 files changed, 15 insertions(+), 19 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index efd5919808d9..675f2a51ad5f 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -45,6 +45,7 @@ static const struct zcomp_ops *backends[] = {
 static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *strm)
 {
 	comp->ops->destroy_ctx(&strm->ctx);
+	vfree(strm->local_copy);
 	vfree(strm->buffer);
 	kfree(strm);
 }
@@ -66,12 +67,13 @@ static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
 		return NULL;
 	}
 
+	strm->local_copy = vzalloc(PAGE_SIZE);
 	/*
 	 * allocate 2 pages. 1 for compressed data, plus 1 extra in case if
 	 * compressed data is larger than the original one.
 	 */
 	strm->buffer = vzalloc(2 * PAGE_SIZE);
-	if (!strm->buffer) {
+	if (!strm->buffer || !strm->local_copy) {
 		zcomp_strm_free(comp, strm);
 		return NULL;
 	}
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index 62330829db3f..9683d4aa822d 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -34,6 +34,8 @@ struct zcomp_strm {
 	struct list_head entry;
 	/* compression buffer */
 	void *buffer;
+	/* local copy of handle memory */
+	void *local_copy;
 	struct zcomp_ctx ctx;
 };
 
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index ad3e8885b0d2..d73e8374e9cc 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1562,11 +1562,11 @@ static int read_incompressible_page(struct zram *zram, struct page *page,
 	void *src, *dst;
 
 	handle = zram_get_handle(zram, index);
-	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
+	src = zs_obj_read_begin(zram->mem_pool, handle, NULL);
 	dst = kmap_local_page(page);
 	copy_page(dst, src);
 	kunmap_local(dst);
-	zs_unmap_object(zram->mem_pool, handle);
+	zs_obj_read_end(zram->mem_pool, handle, src);
 
 	return 0;
 }
@@ -1584,11 +1584,11 @@ static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
 	prio = zram_get_priority(zram, index);
 
 	zstrm = zcomp_stream_get(zram->comps[prio]);
-	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
+	src = zs_obj_read_begin(zram->mem_pool, handle, zstrm->local_copy);
 	dst = kmap_local_page(page);
 	ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
 	kunmap_local(dst);
-	zs_unmap_object(zram->mem_pool, handle);
+	zs_obj_read_end(zram->mem_pool, handle, src);
 	zcomp_stream_put(zram->comps[prio], zstrm);
 
 	return ret;
@@ -1684,7 +1684,7 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
 				     u32 index)
 {
 	unsigned long handle;
-	void *src, *dst;
+	void *src;
 
 	/*
 	 * This function is called from preemptible context so we don't need
@@ -1701,11 +1701,9 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
 		return -ENOMEM;
 	}
 
-	dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
 	src = kmap_local_page(page);
-	memcpy(dst, src, PAGE_SIZE);
+	zs_obj_write(zram->mem_pool, handle, src, PAGE_SIZE);
 	kunmap_local(src);
-	zs_unmap_object(zram->mem_pool, handle);
 
 	zram_slot_write_lock(zram, index);
 	zram_set_flag(zram, index, ZRAM_HUGE);
@@ -1726,7 +1724,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	int ret = 0;
 	unsigned long handle;
 	unsigned int comp_len;
-	void *dst, *mem;
+	void *mem;
 	struct zcomp_strm *zstrm;
 	unsigned long element;
 	bool same_filled;
@@ -1769,11 +1767,8 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 		return -ENOMEM;
 	}
 
-	dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
-
-	memcpy(dst, zstrm->buffer, comp_len);
+	zs_obj_write(zram->mem_pool, handle, zstrm->buffer, comp_len);
 	zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
-	zs_unmap_object(zram->mem_pool, handle);
 
 	zram_slot_write_lock(zram, index);
 	zram_set_handle(zram, index, handle);
@@ -1882,7 +1877,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
 	unsigned int class_index_old;
 	unsigned int class_index_new;
 	u32 num_recomps = 0;
-	void *src, *dst;
+	void *src;
 	int ret;
 
 	handle_old = zram_get_handle(zram, index);
@@ -2020,12 +2015,9 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
 		return 0;
 	}
 
-	dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO);
-	memcpy(dst, zstrm->buffer, comp_len_new);
+	zs_obj_write(zram->mem_pool, handle_new, zstrm->buffer, comp_len_new);
 	zcomp_stream_put(zram->comps[prio], zstrm);
-	zs_unmap_object(zram->mem_pool, handle_new);
-
 	zram_free_page(zram, index);
 	zram_set_handle(zram, index, handle_new);
 	zram_set_obj_size(zram, index, comp_len_new);

From patchwork Wed Jan 29 06:43:52 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13953470
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 6/6] zram: add might_sleep to zcomp API
Date: Wed, 29 Jan 2025 15:43:52 +0900
Message-ID: <20250129064853.2210753-7-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>
Explicitly state that zcomp compress/decompress must be called
from non-atomic context.
Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zcomp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index 675f2a51ad5f..f4235735787b 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -185,6 +185,7 @@ int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
 	};
 	int ret;
 
+	might_sleep();
 	ret = comp->ops->compress(comp->params, &zstrm->ctx, &req);
 	if (!ret)
 		*dst_len = req.dst_len;
@@ -201,6 +202,7 @@ int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
 		.dst_len = PAGE_SIZE,
 	};
 
+	might_sleep();
 	return comp->ops->decompress(comp->params, &zstrm->ctx, &req);
 }