From patchwork Thu Jan 30 04:42:47 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13954302
From: Sergey Senozhatsky
To: Andrew Morton, Yosry Ahmed
Cc: Minchan Kim, Johannes Weiner, Nhat Pham, Uros Bizjak,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv2 4/7] zsmalloc: make zspage lock preemptible
Date: Thu, 30 Jan 2025 13:42:47 +0900
Message-ID: <20250130044455.2642465-5-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
In-Reply-To: <20250130044455.2642465-1-senozhatsky@chromium.org>
References: <20250130044455.2642465-1-senozhatsky@chromium.org>
MIME-Version: 1.0
Switch over from rwlock_t to an atomic_t variable that takes a negative
value when the zspage is under migration, or positive values when the
zspage is used by zsmalloc users (object map, etc.). Using a per-zspage
rwsem is a little too memory-heavy; a simple atomic_t should suffice.

The zspage lock is a leaf lock for zs_map_object(), where it is
read-acquired. Since this lock now permits preemption, extra care needs
to be taken when it is write-acquired: all writers grab it in atomic
context, so they cannot spin and wait for a (potentially preempted)
reader to unlock the zspage. There are only two writers at the moment:
migration and compaction. In both cases we use write-try-lock and bail
out if the zspage is read-locked. Writers, on the other hand, never get
preempted, so readers can spin waiting for the writer to unlock the
zspage.

With this we can implement a preemptible object mapping.

Signed-off-by: Sergey Senozhatsky
---
(For illustration, a minimal userspace sketch of the new locking scheme
follows the diff below.)

 mm/zsmalloc.c | 135 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 83 insertions(+), 52 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9053777035af..d8cc8e2598cc 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -294,6 +294,9 @@ static inline void free_zpdesc(struct zpdesc *zpdesc)
 	__free_page(page);
 }
 
+#define ZS_PAGE_UNLOCKED	0
+#define ZS_PAGE_WRLOCKED	-1
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -306,7 +309,7 @@ struct zspage {
 	struct zpdesc *first_zpdesc;
 	struct list_head list; /* fullness list */
 	struct zs_pool *pool;
-	rwlock_t lock;
+	atomic_t lock;
 };
 
 struct mapping_area {
@@ -316,6 +319,59 @@ struct mapping_area {
 	enum zs_mapmode vm_mm; /* mapping mode */
 };
 
+static void zspage_lock_init(struct zspage *zspage)
+{
+	atomic_set(&zspage->lock, ZS_PAGE_UNLOCKED);
+}
+
+/*
+ * zspage lock permits preemption on the reader-side (there can be multiple
+ * readers). Writers (exclusive zspage ownership), on the other hand, are
+ * always run in atomic context and cannot spin waiting for a (potentially
+ * preempted) reader to unlock zspage. This, basically, means that writers
+ * can only call write-try-lock and must bail out if it didn't succeed.
+ *
+ * At the same time, writers cannot reschedule under zspage write-lock,
+ * so readers can spin waiting for the writer to unlock zspage.
+ */
+static void zspage_read_lock(struct zspage *zspage)
+{
+	atomic_t *lock = &zspage->lock;
+	int old = atomic_read(lock);
+
+	do {
+		if (old == ZS_PAGE_WRLOCKED) {
+			cpu_relax();
+			old = atomic_read(lock);
+			continue;
+		}
+	} while (!atomic_try_cmpxchg(lock, &old, old + 1));
+}
+
+static void zspage_read_unlock(struct zspage *zspage)
+{
+	atomic_dec(&zspage->lock);
+}
+
+static bool zspage_try_write_lock(struct zspage *zspage)
+{
+	atomic_t *lock = &zspage->lock;
+	int old = ZS_PAGE_UNLOCKED;
+
+	preempt_disable();
+	if (atomic_try_cmpxchg(lock, &old, ZS_PAGE_WRLOCKED))
+		return true;
+
+	preempt_enable();
+	return false;
+}
+
+static void zspage_write_unlock(struct zspage *zspage)
+{
+	atomic_set(&zspage->lock, ZS_PAGE_UNLOCKED);
+	preempt_enable();
+}
+
 /* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
 static void SetZsHugePage(struct zspage *zspage)
 {
@@ -327,12 +383,6 @@ static bool ZsHugePage(struct zspage *zspage)
 	return zspage->huge;
 }
 
-static void migrate_lock_init(struct zspage *zspage);
-static void migrate_read_lock(struct zspage *zspage);
-static void migrate_read_unlock(struct zspage *zspage);
-static void migrate_write_lock(struct zspage *zspage);
-static void migrate_write_unlock(struct zspage *zspage);
-
 #ifdef CONFIG_COMPACTION
 static void kick_deferred_free(struct zs_pool *pool);
 static void init_deferred_free(struct zs_pool *pool);
@@ -1028,7 +1078,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		return NULL;
 
 	zspage->magic = ZSPAGE_MAGIC;
-	migrate_lock_init(zspage);
+	zspage_lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct zpdesc *zpdesc;
@@ -1253,7 +1303,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * zs_unmap_object API so delegate the locking from class to zspage
 	 * which is smaller granularity.
 	 */
-	migrate_read_lock(zspage);
+	zspage_read_lock(zspage);
 	pool_read_unlock(pool);
 
 	class = zspage_class(pool, zspage);
@@ -1313,7 +1363,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	}
 	local_unlock(&zs_map_area.lock);
 
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
@@ -1707,18 +1757,18 @@ static void lock_zspage(struct zspage *zspage)
 	/*
 	 * Pages we haven't locked yet can be migrated off the list while we're
 	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * lock each page under zspage_read_lock(). Otherwise, the page we lock
 	 * may no longer belong to the zspage. This means that we may wait for
 	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 * prior to waiting for it to unlock outside zspage_read_lock().
 	 */
 	while (1) {
-		migrate_read_lock(zspage);
+		zspage_read_lock(zspage);
 		zpdesc = get_first_zpdesc(zspage);
 		if (zpdesc_trylock(zpdesc))
 			break;
 		zpdesc_get(zpdesc);
-		migrate_read_unlock(zspage);
+		zspage_read_unlock(zspage);
 		zpdesc_wait_locked(zpdesc);
 		zpdesc_put(zpdesc);
 	}
@@ -1729,41 +1779,16 @@ static void lock_zspage(struct zspage *zspage)
 			curr_zpdesc = zpdesc;
 		} else {
 			zpdesc_get(zpdesc);
-			migrate_read_unlock(zspage);
+			zspage_read_unlock(zspage);
 			zpdesc_wait_locked(zpdesc);
 			zpdesc_put(zpdesc);
-			migrate_read_lock(zspage);
+			zspage_read_lock(zspage);
 		}
 	}
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 #endif /* CONFIG_COMPACTION */
 
-static void migrate_lock_init(struct zspage *zspage)
-{
-	rwlock_init(&zspage->lock);
-}
-
-static void migrate_read_lock(struct zspage *zspage) __acquires(&zspage->lock)
-{
-	read_lock(&zspage->lock);
-}
-
-static void migrate_read_unlock(struct zspage *zspage) __releases(&zspage->lock)
-{
-	read_unlock(&zspage->lock);
-}
-
-static void migrate_write_lock(struct zspage *zspage)
-{
-	write_lock(&zspage->lock);
-}
-
-static void migrate_write_unlock(struct zspage *zspage)
-{
-	write_unlock(&zspage->lock);
-}
-
 #ifdef CONFIG_COMPACTION
 
 static const struct movable_operations zsmalloc_mops;
@@ -1805,7 +1830,7 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 }
 
 static int zs_page_migrate(struct page *newpage, struct page *page,
-		enum migrate_mode mode)
+			   enum migrate_mode mode)
 {
 	struct zs_pool *pool;
 	struct size_class *class;
@@ -1821,15 +1846,12 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 	VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
 
-	/* We're committed, tell the world that this is a Zsmalloc page. */
-	__zpdesc_set_zsmalloc(newzpdesc);
-
 	/* The page is locked, so this pointer must remain valid */
 	zspage = get_zspage(zpdesc);
 	pool = zspage->pool;
 
 	/*
-	 * The pool migrate_lock protects the race between zpage migration
+	 * The pool->migrate_lock protects the race between zpage migration
 	 * and zs_free.
 	 */
 	pool_write_lock(pool);
@@ -1839,8 +1861,15 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
 	size_class_lock(class);
-	/* the migrate_write_lock protects zpage access via zs_map_object */
-	migrate_write_lock(zspage);
+	/* the zspage write_lock protects zpage access via zs_map_object */
+	if (!zspage_try_write_lock(zspage)) {
+		size_class_unlock(class);
+		pool_write_unlock(pool);
+		return -EINVAL;
+	}
+
+	/* We're committed, tell the world that this is a Zsmalloc page. */
+	__zpdesc_set_zsmalloc(newzpdesc);
 
 	offset = get_first_obj_offset(zpdesc);
 	s_addr = kmap_local_zpdesc(zpdesc);
@@ -1871,7 +1900,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 */
 	pool_write_unlock(pool);
 	size_class_unlock(class);
-	migrate_write_unlock(zspage);
+	zspage_write_unlock(zspage);
 
 	zpdesc_get(newzpdesc);
 	if (zpdesc_zone(newzpdesc) != zpdesc_zone(zpdesc)) {
@@ -2007,9 +2036,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		if (!src_zspage)
 			break;
 
-		migrate_write_lock(src_zspage);
+		if (!zspage_try_write_lock(src_zspage))
+			break;
+
 		migrate_zspage(pool, src_zspage, dst_zspage);
-		migrate_write_unlock(src_zspage);
+		zspage_write_unlock(src_zspage);
 
 		fg = putback_zspage(class, src_zspage);
 		if (fg == ZS_INUSE_RATIO_0) {
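
As promised above, here is a minimal userspace sketch of the locking scheme
described in the changelog, written with C11 atomics instead of the kernel's
atomic_t/preempt API. It is illustration only: the PAGE_* constants and
page_*lock() names below are made up for this sketch and are not part of the
patch. Readers bump a counter (and may be preempted while holding it), while
writers can only try-lock and must bail out if the lock is held by anyone.

/* lock_sketch.c: build with "cc -std=c11 lock_sketch.c" */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_UNLOCKED	 0
#define PAGE_WRLOCKED	-1

static atomic_int page_lock = PAGE_UNLOCKED;

static void page_read_lock(void)
{
	int old = atomic_load(&page_lock);

	for (;;) {
		if (old == PAGE_WRLOCKED) {
			/* writer holds the lock; it never sleeps, so spin */
			old = atomic_load(&page_lock);
			continue;
		}
		/* no writer: try to move the reader count old -> old + 1 */
		if (atomic_compare_exchange_weak(&page_lock, &old, old + 1))
			return;
		/* CAS failure reloaded 'old'; retry */
	}
}

static void page_read_unlock(void)
{
	atomic_fetch_sub(&page_lock, 1);
}

static bool page_try_write_lock(void)
{
	int old = PAGE_UNLOCKED;

	/* succeeds only when there are no readers and no other writer */
	return atomic_compare_exchange_strong(&page_lock, &old, PAGE_WRLOCKED);
}

static void page_write_unlock(void)
{
	atomic_store(&page_lock, PAGE_UNLOCKED);
}

int main(void)
{
	page_read_lock();
	/* a writer (e.g. migration) must bail out while a reader is active */
	printf("try_write_lock with reader held: %d\n", page_try_write_lock());
	page_read_unlock();

	printf("try_write_lock when unlocked:    %d\n", page_try_write_lock());
	page_write_unlock();
	return 0;
}

Running it should print 0 for the first try-lock (a reader holds the lock,
so the writer bails out, as migration and compaction do in the patch) and 1
for the second (the lock is free, so the write-lock is taken).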