From patchwork Fri Feb 14 04:50:23 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13974473
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Yosry Ahmed, Hillf Danton, Kairui Song, Minchan Kim,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Sergey Senozhatsky
Subject: [PATCH v6 11/17] zsmalloc: make zspage lock preemptible
Date: Fri, 14 Feb 2025 13:50:23 +0900
Message-ID: <20250214045208.1388854-12-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.601.g30ceb7b040-goog
In-Reply-To: <20250214045208.1388854-1-senozhatsky@chromium.org>
References: <20250214045208.1388854-1-senozhatsky@chromium.org>
MIME-Version: 1.0

In order to implement preemptible object mapping we need a zspage lock
that satisfies several preconditions:
- it should be a reader-writer type of lock
- it should be possible to hold it from any context, but it should also
  be preemptible if the context allows it
- we never sleep while acquiring it, but we can sleep while holding it
  in read mode

An rwsemaphore doesn't suffice due to the atomicity requirement, and an
rwlock doesn't satisfy the reader-preemptability requirement. It's also
worth mentioning that a per-zspage rwsem is a little too memory heavy
(we can easily have double-digit megabytes used only on rwsemaphores).

Switch over from rwlock_t to a spinlock-based implementation of a
reader-writer lock that satisfies all of the preconditions. The
spinlock-based zspage_lock was suggested by Hillf Danton.

Suggested-by: Hillf Danton
Signed-off-by: Sergey Senozhatsky
---
 mm/zsmalloc.c | 246 +++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 192 insertions(+), 54 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2e338cde0d21..bc679a3e1718 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -226,6 +226,9 @@ struct zs_pool {
 	/* protect zspage migration/compaction */
 	rwlock_t lock;
 	atomic_t compaction_in_progress;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lock_class_key lock_class;
+#endif
 };
 
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
@@ -257,6 +260,18 @@ static inline void free_zpdesc(struct zpdesc *zpdesc)
 	__free_page(page);
 }
 
+#define ZS_PAGE_UNLOCKED	0
+#define ZS_PAGE_WRLOCKED	-1
+
+struct zspage_lock {
+	spinlock_t lock;
+	int cnt;
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map dep_map;
+#endif
+};
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -269,7 +284,7 @@ struct zspage {
 	struct zpdesc *first_zpdesc;
 	struct list_head list; /* fullness list */
 	struct zs_pool *pool;
-	rwlock_t lock;
+	struct zspage_lock zsl;
 };
 
 struct mapping_area {
@@ -279,6 +294,148 @@ struct mapping_area {
 	enum zs_mapmode vm_mm; /* mapping mode */
 };
 
+static void zspage_lock_init(struct zspage *zspage)
+{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	lockdep_init_map(&zspage->zsl.dep_map, "zspage->lock",
+			 &zspage->pool->lock_class, 0);
+#endif
+
+	spin_lock_init(&zspage->zsl.lock);
+	zspage->zsl.cnt = ZS_PAGE_UNLOCKED;
+}
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+static inline void __read_lock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	rwsem_acquire_read(&zsl->dep_map, 0, 0, _RET_IP_);
+
+	spin_lock(&zsl->lock);
+	zsl->cnt++;
+	spin_unlock(&zsl->lock);
+
+	lock_acquired(&zsl->dep_map, _RET_IP_);
+}
+
+static inline void __read_unlock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	rwsem_release(&zsl->dep_map, _RET_IP_);
+
+	spin_lock(&zsl->lock);
+	zsl->cnt--;
+	spin_unlock(&zsl->lock);
+}
+
+static inline bool __write_trylock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	spin_lock(&zsl->lock);
+	if (zsl->cnt == ZS_PAGE_UNLOCKED) {
+		zsl->cnt = ZS_PAGE_WRLOCKED;
+		rwsem_acquire(&zsl->dep_map, 0, 1, _RET_IP_);
+		lock_acquired(&zsl->dep_map, _RET_IP_);
+		return true;
+	}
+
+	lock_contended(&zsl->dep_map, _RET_IP_);
+	spin_unlock(&zsl->lock);
+	return false;
+}
+
+static inline void __write_unlock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	rwsem_release(&zsl->dep_map, _RET_IP_);
+
+	zsl->cnt = ZS_PAGE_UNLOCKED;
+	spin_unlock(&zsl->lock);
+}
+#else
+static inline void __read_lock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	spin_lock(&zsl->lock);
+	zsl->cnt++;
+	spin_unlock(&zsl->lock);
+}
+
+static inline void __read_unlock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	spin_lock(&zsl->lock);
+	zsl->cnt--;
+	spin_unlock(&zsl->lock);
+}
+
+static inline bool __write_trylock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	spin_lock(&zsl->lock);
+	if (zsl->cnt == ZS_PAGE_UNLOCKED) {
+		zsl->cnt = ZS_PAGE_WRLOCKED;
+		return true;
+	}
+
+	spin_unlock(&zsl->lock);
+	return false;
+}
+
+static inline void __write_unlock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	zsl->cnt = ZS_PAGE_UNLOCKED;
+	spin_unlock(&zsl->lock);
+}
+#endif /* CONFIG_DEBUG_LOCK_ALLOC */
+
+/*
+ * The zspage lock can be held from atomic contexts, but it needs to remain
+ * preemptible when held for reading because it remains held outside of those
+ * atomic contexts, otherwise we unnecessarily lose preemptibility.
+ *
+ * To achieve this, the following rules are enforced on readers and writers:
+ *
+ * - Writers are blocked by both writers and readers, while readers are only
+ *   blocked by writers (i.e. normal rwlock semantics).
+ *
+ * - Writers are always atomic (to allow readers to spin waiting for them).
+ *
+ * - Writers always use trylock (as the lock may be held by sleeping readers).
+ *
+ * - Readers may spin on the lock (as they can only wait for atomic writers).
+ *
+ * - Readers may sleep while holding the lock (as writes only use trylock).
+ */
+static void zspage_read_lock(struct zspage *zspage)
+{
+	return __read_lock(zspage);
+}
+
+static void zspage_read_unlock(struct zspage *zspage)
+{
+	return __read_unlock(zspage);
+}
+
+static __must_check bool zspage_write_trylock(struct zspage *zspage)
+{
+	return __write_trylock(zspage);
+}
+
+static void zspage_write_unlock(struct zspage *zspage)
+{
+	return __write_unlock(zspage);
+}
+
 /* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
 static void SetZsHugePage(struct zspage *zspage)
 {
@@ -290,12 +447,6 @@ static bool ZsHugePage(struct zspage *zspage)
 	return zspage->huge;
 }
 
-static void migrate_lock_init(struct zspage *zspage);
-static void migrate_read_lock(struct zspage *zspage);
-static void migrate_read_unlock(struct zspage *zspage);
-static void migrate_write_lock(struct zspage *zspage);
-static void migrate_write_unlock(struct zspage *zspage);
-
 #ifdef CONFIG_COMPACTION
 static void kick_deferred_free(struct zs_pool *pool);
 static void init_deferred_free(struct zs_pool *pool);
@@ -992,7 +1143,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		return NULL;
 
 	zspage->magic = ZSPAGE_MAGIC;
-	migrate_lock_init(zspage);
+	zspage->pool = pool;
+	zspage->class = class->index;
+	zspage_lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct zpdesc *zpdesc;
@@ -1015,8 +1168,6 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 
 	create_page_chain(class, zspage, zpdescs);
 	init_zspage(class, zspage);
-	zspage->pool = pool;
-	zspage->class = class->index;
 
 	return zspage;
 }
@@ -1217,7 +1368,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * zs_unmap_object API so delegate the locking from class to zspage
 	 * which is smaller granularity.
 	 */
-	migrate_read_lock(zspage);
+	zspage_read_lock(zspage);
 	read_unlock(&pool->lock);
 
 	class = zspage_class(pool, zspage);
@@ -1277,7 +1428,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	}
 	local_unlock(&zs_map_area.lock);
 
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
@@ -1671,18 +1822,18 @@ static void lock_zspage(struct zspage *zspage)
 	/*
 	 * Pages we haven't locked yet can be migrated off the list while we're
 	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * lock each page under zspage_read_lock(). Otherwise, the page we lock
 	 * may no longer belong to the zspage. This means that we may wait for
 	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 * prior to waiting for it to unlock outside zspage_read_lock().
 	 */
 	while (1) {
-		migrate_read_lock(zspage);
+		zspage_read_lock(zspage);
 		zpdesc = get_first_zpdesc(zspage);
 		if (zpdesc_trylock(zpdesc))
 			break;
 		zpdesc_get(zpdesc);
-		migrate_read_unlock(zspage);
+		zspage_read_unlock(zspage);
 		zpdesc_wait_locked(zpdesc);
 		zpdesc_put(zpdesc);
 	}
@@ -1693,41 +1844,16 @@ static void lock_zspage(struct zspage *zspage)
 			curr_zpdesc = zpdesc;
 		} else {
 			zpdesc_get(zpdesc);
-			migrate_read_unlock(zspage);
+			zspage_read_unlock(zspage);
 			zpdesc_wait_locked(zpdesc);
 			zpdesc_put(zpdesc);
-			migrate_read_lock(zspage);
+			zspage_read_lock(zspage);
 		}
 	}
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 #endif /* CONFIG_COMPACTION */
 
-static void migrate_lock_init(struct zspage *zspage)
-{
-	rwlock_init(&zspage->lock);
-}
-
-static void migrate_read_lock(struct zspage *zspage) __acquires(&zspage->lock)
-{
-	read_lock(&zspage->lock);
-}
-
-static void migrate_read_unlock(struct zspage *zspage) __releases(&zspage->lock)
-{
-	read_unlock(&zspage->lock);
-}
-
-static void migrate_write_lock(struct zspage *zspage)
-{
-	write_lock(&zspage->lock);
-}
-
-static void migrate_write_unlock(struct zspage *zspage)
-{
-	write_unlock(&zspage->lock);
-}
-
 #ifdef CONFIG_COMPACTION
 static const struct movable_operations zsmalloc_mops;
@@ -1769,7 +1895,7 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 }
 
 static int zs_page_migrate(struct page *newpage, struct page *page,
-		enum migrate_mode mode)
+			   enum migrate_mode mode)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
@@ -1785,9 +1911,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 	VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
 
-	/* We're committed, tell the world that this is a Zsmalloc page. */
-	__zpdesc_set_zsmalloc(newzpdesc);
-
 	/* The page is locked, so this pointer must remain valid */
 	zspage = get_zspage(zpdesc);
 	pool = zspage->pool;
@@ -1803,8 +1926,15 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
 	spin_lock(&class->lock);
-	/* the migrate_write_lock protects zpage access via zs_map_object */
-	migrate_write_lock(zspage);
+	/* the zspage write_lock protects zpage access via zs_map_object */
+	if (!zspage_write_trylock(zspage)) {
+		spin_unlock(&class->lock);
+		write_unlock(&pool->lock);
+		return -EINVAL;
+	}
+
+	/* We're committed, tell the world that this is a Zsmalloc page. */
+	__zpdesc_set_zsmalloc(newzpdesc);
 
 	offset = get_first_obj_offset(zpdesc);
 	s_addr = kmap_local_zpdesc(zpdesc);
@@ -1835,7 +1965,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 */
 	write_unlock(&pool->lock);
 	spin_unlock(&class->lock);
-	migrate_write_unlock(zspage);
+	zspage_write_unlock(zspage);
 
 	zpdesc_get(newzpdesc);
 	if (zpdesc_zone(newzpdesc) != zpdesc_zone(zpdesc)) {
@@ -1971,9 +2101,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		if (!src_zspage)
 			break;
 
-		migrate_write_lock(src_zspage);
+		if (!zspage_write_trylock(src_zspage))
+			break;
+
 		migrate_zspage(pool, src_zspage, dst_zspage);
-		migrate_write_unlock(src_zspage);
+		zspage_write_unlock(src_zspage);
 
 		fg = putback_zspage(class, src_zspage);
 		if (fg == ZS_INUSE_RATIO_0) {
@@ -2233,7 +2365,9 @@ struct zs_pool *zs_create_pool(const char *name)
 	 * trigger compaction manually. Thus, ignore return code.
 	 */
 	zs_register_shrinker(pool);
-
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	lockdep_register_key(&pool->lock_class);
+#endif
 	return pool;
 
 err:
@@ -2270,6 +2404,10 @@ void zs_destroy_pool(struct zs_pool *pool)
 		kfree(class);
 	}
 
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	lockdep_unregister_key(&pool->lock_class);
+#endif
+
 	destroy_cache(pool);
 	kfree(pool->name);
 	kfree(pool);
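
For illustration only (not part of the patch): a minimal userspace C sketch of
the locking scheme the commit message describes, i.e. a reader-writer lock
built from a spinlock and a reader counter, where readers may sleep while
holding the lock and writers only ever trylock. It uses POSIX spinlocks as a
stand-in for the kernel's spinlock_t, and the zsl_* names are made up for this
sketch; it is not the zsmalloc code.

/*
 * Readers briefly take the spinlock to bump the counter and may then sleep
 * while "holding" the lock; a successful writer keeps the spinlock held for
 * the whole (atomic) write section. Build with: cc sketch.c -lpthread
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_UNLOCKED	0
#define PAGE_WRLOCKED	-1

struct zspage_lock {
	pthread_spinlock_t lock;
	int cnt;
};

static void zsl_init(struct zspage_lock *zsl)
{
	pthread_spin_init(&zsl->lock, PTHREAD_PROCESS_PRIVATE);
	zsl->cnt = PAGE_UNLOCKED;
}

static void zsl_read_lock(struct zspage_lock *zsl)
{
	/* Readers only ever spin on a writer, which never sleeps. */
	pthread_spin_lock(&zsl->lock);
	zsl->cnt++;
	pthread_spin_unlock(&zsl->lock);
}

static void zsl_read_unlock(struct zspage_lock *zsl)
{
	pthread_spin_lock(&zsl->lock);
	zsl->cnt--;
	pthread_spin_unlock(&zsl->lock);
}

static bool zsl_write_trylock(struct zspage_lock *zsl)
{
	pthread_spin_lock(&zsl->lock);
	if (zsl->cnt == PAGE_UNLOCKED) {
		zsl->cnt = PAGE_WRLOCKED;
		return true;	/* spinlock stays held for the write section */
	}
	pthread_spin_unlock(&zsl->lock);
	return false;		/* a (possibly sleeping) reader holds the lock */
}

static void zsl_write_unlock(struct zspage_lock *zsl)
{
	zsl->cnt = PAGE_UNLOCKED;
	pthread_spin_unlock(&zsl->lock);
}

int main(void)
{
	struct zspage_lock zsl;

	zsl_init(&zsl);

	zsl_read_lock(&zsl);
	printf("write_trylock with a reader held: %d\n", zsl_write_trylock(&zsl));
	zsl_read_unlock(&zsl);

	printf("write_trylock with no readers:    %d\n", zsl_write_trylock(&zsl));
	zsl_write_unlock(&zsl);
	return 0;
}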