From patchwork Thu Dec 5 17:49:39 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 13895905
From: "Matthew Wilcox (Oracle)"
To: Minchan Kim, Sergey Senozhatsky
Cc: Alex Shi, linux-mm@kvack.org, Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH v8 02/21] mm/zsmalloc: use zpdesc in trylock_zspage()/lock_zspage()
Date: Thu, 5 Dec 2024 17:49:39 +0000
Message-ID: <20241205175000.3187069-3-willy@infradead.org>
X-Mailer: git-send-email 2.47.0
In-Reply-To: <20241205175000.3187069-1-willy@infradead.org>
References: <20241205175000.3187069-1-willy@infradead.org>
MIME-Version: 1.0

From: Alex Shi

To use zpdesc in trylock_zspage()/lock_zspage(), add a couple of helpers:
zpdesc_lock()/zpdesc_unlock()/zpdesc_trylock()/zpdesc_wait_locked() and
zpdesc_get()/zpdesc_put().

These helpers are implemented on top of the folio functions for two
reasons: first, the zpool used by zswap only deals with single
(order-0) pages, so going through folios saves some compound_head()
checking; second, folio_put() bypasses the devmap check that we do not
need here.

Thanks to the Intel LKP robot for reporting a build warning on an
earlier version of this patch.
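As an illustration of how the new helpers fit together, the retry
protocol used by lock_zspage() boils down to the simplified sketch
below. This is a sketch only, not part of the diff: it omits the
migrate_read_lock()/migrate_read_unlock() handling that the real
function needs because pages can be migrated off the zspage list while
we sleep.

/*
 * Simplified sketch of the locking protocol, using only the helpers
 * added by this patch.  Migration locking is intentionally omitted;
 * see the real lock_zspage() in the diff below for the full version.
 */
static void lock_zspage_sketch(struct zspage *zspage)
{
	struct zpdesc *zpdesc;

	for (zpdesc = get_first_zpdesc(zspage); zpdesc;
	     zpdesc = get_next_zpdesc(zpdesc)) {
		while (!zpdesc_trylock(zpdesc)) {
			/* take a reference so the page cannot be freed... */
			zpdesc_get(zpdesc);
			/* ...while we sleep until the current holder unlocks */
			zpdesc_wait_locked(zpdesc);
			zpdesc_put(zpdesc);
		}
	}
}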
Originally-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Alex Shi
---
 mm/zpdesc.h   | 30 ++++++++++++++++++++++++
 mm/zsmalloc.c | 64 ++++++++++++++++++++++++++++++++++-----------------
 2 files changed, 73 insertions(+), 21 deletions(-)

diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 9ad232774469..4c7feee5ef1a 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -66,4 +66,34 @@ static_assert(sizeof(struct zpdesc) <= sizeof(struct page));
 			const struct page *:	(const struct zpdesc *)(p),	\
 			struct page *:		(struct zpdesc *)(p)))
 
+static inline void zpdesc_lock(struct zpdesc *zpdesc)
+{
+	folio_lock(zpdesc_folio(zpdesc));
+}
+
+static inline bool zpdesc_trylock(struct zpdesc *zpdesc)
+{
+	return folio_trylock(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_unlock(struct zpdesc *zpdesc)
+{
+	folio_unlock(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_wait_locked(struct zpdesc *zpdesc)
+{
+	folio_wait_locked(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_get(struct zpdesc *zpdesc)
+{
+	folio_get(zpdesc_folio(zpdesc));
+}
+
+static inline void zpdesc_put(struct zpdesc *zpdesc)
+{
+	folio_put(zpdesc_folio(zpdesc));
+}
+
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 469fda76ed8a..1d1dd4578ae3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -446,13 +446,17 @@ static __maybe_unused int is_first_page(struct page *page)
 	return PagePrivate(page);
 }
 
+static inline bool is_first_zpdesc(struct zpdesc *zpdesc)
+{
+	return PagePrivate(zpdesc_page(zpdesc));
+}
+
 /* Protected by class->lock */
 static inline int get_zspage_inuse(struct zspage *zspage)
 {
 	return zspage->inuse;
 }
 
-
 static inline void mod_zspage_inuse(struct zspage *zspage, int val)
 {
 	zspage->inuse += val;
@@ -466,6 +470,14 @@ static inline struct page *get_first_page(struct zspage *zspage)
 	return first_page;
 }
 
+static struct zpdesc *get_first_zpdesc(struct zspage *zspage)
+{
+	struct zpdesc *first_zpdesc = zspage->first_zpdesc;
+
+	VM_BUG_ON_PAGE(!is_first_zpdesc(first_zpdesc), zpdesc_page(first_zpdesc));
+	return first_zpdesc;
+}
+
 #define FIRST_OBJ_PAGE_TYPE_MASK	0xffffff
 
 static inline unsigned int get_first_obj_offset(struct page *page)
@@ -752,6 +764,16 @@ static struct page *get_next_page(struct page *page)
 	return (struct page *)page->index;
 }
 
+static struct zpdesc *get_next_zpdesc(struct zpdesc *zpdesc)
+{
+	struct zspage *zspage = get_zspage(zpdesc_page(zpdesc));
+
+	if (unlikely(ZsHugePage(zspage)))
+		return NULL;
+
+	return zpdesc->next;
+}
+
 /**
  * obj_to_location - get (<PFN>, <obj_idx>) from encoded object value
  * @obj: the encoded object value
@@ -821,11 +843,11 @@ static void reset_page(struct page *page)
 
 static int trylock_zspage(struct zspage *zspage)
 {
-	struct page *cursor, *fail;
+	struct zpdesc *cursor, *fail;
 
-	for (cursor = get_first_page(zspage); cursor != NULL; cursor =
-					get_next_page(cursor)) {
-		if (!trylock_page(cursor)) {
+	for (cursor = get_first_zpdesc(zspage); cursor != NULL; cursor =
+					get_next_zpdesc(cursor)) {
+		if (!zpdesc_trylock(cursor)) {
 			fail = cursor;
 			goto unlock;
 		}
@@ -833,9 +855,9 @@ static int trylock_zspage(struct zspage *zspage)
 	return 1;
 
 unlock:
-	for (cursor = get_first_page(zspage); cursor != fail; cursor =
-					get_next_page(cursor))
-		unlock_page(cursor);
+	for (cursor = get_first_zpdesc(zspage); cursor != fail; cursor =
+					get_next_zpdesc(cursor))
+		zpdesc_unlock(cursor);
 
 	return 0;
 }
@@ -1654,7 +1676,7 @@ static int putback_zspage(struct size_class *class, struct zspage *zspage)
  */
 static void lock_zspage(struct zspage *zspage)
 {
-	struct page *curr_page, *page;
+	struct zpdesc *curr_zpdesc, *zpdesc;
 
 	/*
 	 * Pages we haven't locked yet can be migrated off the list while we're
@@ -1666,24 +1688,24 @@ static void lock_zspage(struct zspage *zspage)
 	 */
 	while (1) {
 		migrate_read_lock(zspage);
-		page = get_first_page(zspage);
-		if (trylock_page(page))
+		zpdesc = get_first_zpdesc(zspage);
+		if (zpdesc_trylock(zpdesc))
 			break;
-		get_page(page);
+		zpdesc_get(zpdesc);
 		migrate_read_unlock(zspage);
-		wait_on_page_locked(page);
-		put_page(page);
+		zpdesc_wait_locked(zpdesc);
+		zpdesc_put(zpdesc);
 	}
 
-	curr_page = page;
-	while ((page = get_next_page(curr_page))) {
-		if (trylock_page(page)) {
-			curr_page = page;
+	curr_zpdesc = zpdesc;
+	while ((zpdesc = get_next_zpdesc(curr_zpdesc))) {
+		if (zpdesc_trylock(zpdesc)) {
+			curr_zpdesc = zpdesc;
 		} else {
-			get_page(page);
+			zpdesc_get(zpdesc);
 			migrate_read_unlock(zspage);
-			wait_on_page_locked(page);
-			put_page(page);
+			zpdesc_wait_locked(zpdesc);
+			zpdesc_put(zpdesc);
 			migrate_read_lock(zspage);
 		}
 	}