From patchwork Mon Jan 13 17:57:20 2025
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13937867
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
    Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
    Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 01/13] mm, swap: minor clean up for swap entry allocation
Date: Tue, 14 Jan 2025 01:57:20 +0800
Message-ID: <20250113175732.48099-2-ryncsn@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>
From: Kairui Song <ryncsn@gmail.com>

Direct reclaim can skip the whole folio after reclaiming a set of
folio-based slots. Also simplify the allocation code and reduce
indentation.

Signed-off-by: Kairui Song
Reviewed-by: Baoquan He
---
 mm/swapfile.c | 59 +++++++++++++++++++++++++--------------------------
 1 file changed, 29 insertions(+), 30 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index b0a9071cfe1d..f8002f110104 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -604,23 +604,28 @@ static bool cluster_reclaim_range(struct swap_info_struct *si,
                                   unsigned long start, unsigned long end)
 {
     unsigned char *map = si->swap_map;
-    unsigned long offset;
+    unsigned long offset = start;
+    int nr_reclaim;
 
     spin_unlock(&ci->lock);
     spin_unlock(&si->lock);
-    for (offset = start; offset < end; offset++) {
+    do {
         switch (READ_ONCE(map[offset])) {
         case 0:
-            continue;
+            offset++;
+            break;
         case SWAP_HAS_CACHE:
-            if (__try_to_reclaim_swap(si, offset, TTRS_ANYWAY | TTRS_DIRECT) > 0)
-                continue;
-            goto out;
+            nr_reclaim = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY | TTRS_DIRECT);
+            if (nr_reclaim > 0)
+                offset += nr_reclaim;
+            else
+                goto out;
+            break;
         default:
             goto out;
         }
-    }
+    } while (offset < end);
 out:
     spin_lock(&si->lock);
     spin_lock(&ci->lock);
@@ -838,35 +843,30 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
                                               &found, order, usage);
             frags++;
             if (found)
-                break;
+                goto done;
         }
 
-        if (!found) {
+        /*
+         * Nonfull clusters are moved to frag tail if we reached
+         * here, count them too, don't over scan the frag list.
+         */
+        while (frags < si->frag_cluster_nr[order]) {
+            ci = list_first_entry(&si->frag_clusters[order],
+                                  struct swap_cluster_info, list);
             /*
-             * Nonfull clusters are moved to frag tail if we reached
-             * here, count them too, don't over scan the frag list.
+             * Rotate the frag list to iterate, they were all failing
+             * high order allocation or moved here due to per-CPU usage,
+             * this help keeping usable cluster ahead.
              */
-            while (frags < si->frag_cluster_nr[order]) {
-                ci = list_first_entry(&si->frag_clusters[order],
-                                      struct swap_cluster_info, list);
-                /*
-                 * Rotate the frag list to iterate, they were all failing
-                 * high order allocation or moved here due to per-CPU usage,
-                 * this help keeping usable cluster ahead.
-                 */
-                list_move_tail(&ci->list, &si->frag_clusters[order]);
-                offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
-                                                 &found, order, usage);
-                frags++;
-                if (found)
-                    break;
-            }
+            list_move_tail(&ci->list, &si->frag_clusters[order]);
+            offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
+                                             &found, order, usage);
+            frags++;
+            if (found)
+                goto done;
         }
     }
 
-    if (found)
-        goto done;
-
     if (!list_empty(&si->discard_clusters)) {
         /*
          * we don't have free cluster but have some clusters in
@@ -904,7 +904,6 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
             goto done;
         }
     }
-
 done:
     cluster->next[order] = offset;
     return found;
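A minimal userspace sketch of the new loop shape may help here (hypothetical
types; try_to_reclaim() is a stub standing in for __try_to_reclaim_swap(),
and the marker value is made up for the sketch — this is an illustration,
not the kernel code). The point of the rework: a successful reclaim returns
how many slots it freed, and the offset jumps past all of them, so a whole
folio's worth of slots is handled in one step instead of being re-examined
slot by slot:

    #include <stdio.h>

    #define SWAP_HAS_CACHE 0x40 /* marker value chosen for this sketch only */

    /* Stub standing in for __try_to_reclaim_swap(): pretend each cache-only
     * run is a 4-slot folio, reclaim it, and return the slot count. */
    static int try_to_reclaim(unsigned char *map, unsigned long offset)
    {
        int nr = 4;

        for (int i = 0; i < nr; i++)
            map[offset + i] = 0;
        return nr;
    }

    /* Mirrors the reworked cluster_reclaim_range() loop: on success, jump
     * past all reclaimed slots instead of re-checking them one by one. */
    static int reclaim_range(unsigned char *map, unsigned long start,
                             unsigned long end)
    {
        unsigned long offset = start;
        int nr_reclaim;

        do {
            switch (map[offset]) {
            case 0:                 /* free slot, move on */
                offset++;
                break;
            case SWAP_HAS_CACHE:    /* cache-only, reclaimable */
                nr_reclaim = try_to_reclaim(map, offset);
                if (nr_reclaim > 0)
                    offset += nr_reclaim;
                else
                    return 0;
                break;
            default:                /* slot still in use */
                return 0;
            }
        } while (offset < end);
        return 1;
    }

    int main(void)
    {
        unsigned char map[16] = { 0 };

        for (int i = 4; i < 8; i++)
            map[i] = SWAP_HAS_CACHE;    /* one 4-slot folio in swap cache */
        printf("range fully reclaimed: %d\n", reclaim_range(map, 0, 16));
        return 0;
    }

The do/while also removes one level of indentation compared to the old for
loop, which is the "reduce indentation" part of the commit message.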
From patchwork Mon Jan 13 17:57:21 2025
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13937868
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
    Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
    Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 02/13] mm, swap: fold swap_info_get_cont in the only caller
Date: Tue, 14 Jan 2025 01:57:21 +0800
Message-ID: <20250113175732.48099-3-ryncsn@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>
From: Kairui Song <ryncsn@gmail.com>

The name of the function is confusing, and the code is much easier to
follow after folding it into its only caller. Also rename the
confusingly named "p" to the more meaningful "si".

Signed-off-by: Kairui Song
Reviewed-by: Baoquan He
---
 mm/swapfile.c | 39 +++++++++++++++------------------------
 1 file changed, 15 insertions(+), 24 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index f8002f110104..574059158627 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1375,22 +1375,6 @@ static struct swap_info_struct *_swap_info_get(swp_entry_t entry)
     return NULL;
 }
 
-static struct swap_info_struct *swap_info_get_cont(swp_entry_t entry,
-                                                   struct swap_info_struct *q)
-{
-    struct swap_info_struct *p;
-
-    p = _swap_info_get(entry);
-
-    if (p != q) {
-        if (q != NULL)
-            spin_unlock(&q->lock);
-        if (p != NULL)
-            spin_lock(&p->lock);
-    }
-    return p;
-}
-
 static unsigned char __swap_entry_free_locked(struct swap_info_struct *si,
                                               unsigned long offset,
                                               unsigned char usage)
@@ -1687,14 +1671,14 @@ static int swp_entry_cmp(const void *ent1, const void *ent2)
 
 void swapcache_free_entries(swp_entry_t *entries, int n)
 {
-    struct swap_info_struct *p, *prev;
+    struct swap_info_struct *si, *prev;
     int i;
 
     if (n <= 0)
         return;
 
     prev = NULL;
-    p = NULL;
+    si = NULL;
 
     /*
      * Sort swap entries by swap device, so each lock is only taken once.
@@ -1704,13 +1688,20 @@ void swapcache_free_entries(swp_entry_t *entries, int n)
     if (nr_swapfiles > 1)
         sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);
     for (i = 0; i < n; ++i) {
-        p = swap_info_get_cont(entries[i], prev);
-        if (p)
-            swap_entry_range_free(p, entries[i], 1);
-        prev = p;
+        si = _swap_info_get(entries[i]);
+
+        if (si != prev) {
+            if (prev != NULL)
+                spin_unlock(&prev->lock);
+            if (si != NULL)
+                spin_lock(&si->lock);
+        }
+        if (si)
+            swap_entry_range_free(si, entries[i], 1);
+        prev = si;
     }
-    if (p)
-        spin_unlock(&p->lock);
+    if (si)
+        spin_unlock(&si->lock);
 }
 
 int __swap_count(swp_entry_t entry)
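The open-coded hand-off is easier to see outside the kernel. Below is a
small self-contained model (a hypothetical swap_dev struct with a pthread
mutex standing in for si->lock; a sketch of the pattern, not the kernel
API). Because the batch is pre-sorted by device, the lock is switched only
when the owning device changes, so each device lock is taken at most once
per batch:

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical stand-in for swap_info_struct: one lock per device. */
    struct swap_dev {
        int id;
        pthread_mutex_t lock;
    };

    static void free_one(struct swap_dev *dev, int entry)
    {
        printf("dev %d: free entry %d\n", dev->id, entry);
    }

    /* Same shape as the folded swapcache_free_entries() loop: only switch
     * locks when the owning device changes across the sorted batch. */
    static void free_batch(struct swap_dev **devs, int *entries, int n)
    {
        struct swap_dev *si = NULL, *prev = NULL;

        for (int i = 0; i < n; i++) {
            si = devs[i];
            if (si != prev) {
                if (prev != NULL)
                    pthread_mutex_unlock(&prev->lock);
                if (si != NULL)
                    pthread_mutex_lock(&si->lock);
            }
            if (si)
                free_one(si, entries[i]);
            prev = si;
        }
        if (si)
            pthread_mutex_unlock(&si->lock);
    }

    int main(void)
    {
        struct swap_dev a = { 1, PTHREAD_MUTEX_INITIALIZER };
        struct swap_dev b = { 2, PTHREAD_MUTEX_INITIALIZER };
        struct swap_dev *devs[] = { &a, &a, &b };
        int entries[] = { 10, 11, 20 };     /* pre-sorted by device */

        free_batch(devs, entries, 3);
        return 0;
    }

Built with something like cc demo.c -lpthread, the two entries on device 1
are freed under a single lock acquisition, which is the property the helper
was hiding.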
From patchwork Mon Jan 13 17:57:22 2025
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13937869
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
    Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
    Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 03/13] mm, swap: remove old allocation path for HDD
Date: Tue, 14 Jan 2025 01:57:22 +0800
Message-ID: <20250113175732.48099-4-ryncsn@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>
From: Kairui Song <ryncsn@gmail.com>

We are currently using a different swap allocation algorithm for HDD
and non-HDD devices. This leads to a separate set of locks, and the
code path is heavily bloated, causing difficulties for further
optimization and maintenance.

This commit removes all HDD swap allocation and related dead code,
and uses the cluster allocation algorithm instead.

The performance may drop temporarily, but this should be negligible:
the main advantage of the legacy HDD allocation algorithm is that it
tends to use contiguous slots, but a swap device gets fragmented
quickly anyway, and attempts to allocate contiguous slots fail easily.

This commit also enables mTHP swap on HDD, which is expected to be
beneficial, and following commits will adapt and optimize the cluster
allocator for HDD.
Suggested-by: Chris Li
Suggested-by: "Huang, Ying"
Signed-off-by: Kairui Song
Reviewed-by: Baoquan He
---
 include/linux/swap.h |   3 -
 mm/swapfile.c        | 235 ++-----------------------------------------
 2 files changed, 9 insertions(+), 229 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 187715eec3cb..0c681aa5cb98 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -310,9 +310,6 @@ struct swap_info_struct {
     unsigned int highest_bit;   /* index of last free in swap_map */
     unsigned int pages;         /* total of usable pages of swap */
     unsigned int inuse_pages;   /* number of those currently in use */
-    unsigned int cluster_next;  /* likely index for next allocation */
-    unsigned int cluster_nr;    /* countdown to next cluster search */
-    unsigned int __percpu *cluster_next_cpu; /* percpu index for next allocation */
     struct percpu_cluster __percpu *percpu_cluster; /* per cpu's swap location */
     struct rb_root swap_extent_root;/* root of the swap extent rbtree */
     struct block_device *bdev;  /* swap device or bdev of swap file */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 574059158627..fca58d43b836 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1001,49 +1001,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
     WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
 }
 
-static void set_cluster_next(struct swap_info_struct *si, unsigned long next)
-{
-    unsigned long prev;
-
-    if (!(si->flags & SWP_SOLIDSTATE)) {
-        si->cluster_next = next;
-        return;
-    }
-
-    prev = this_cpu_read(*si->cluster_next_cpu);
-    /*
-     * Cross the swap address space size aligned trunk, choose
-     * another trunk randomly to avoid lock contention on swap
-     * address space if possible.
-     */
-    if ((prev >> SWAP_ADDRESS_SPACE_SHIFT) !=
-        (next >> SWAP_ADDRESS_SPACE_SHIFT)) {
-        /* No free swap slots available */
-        if (si->highest_bit <= si->lowest_bit)
-            return;
-        next = get_random_u32_inclusive(si->lowest_bit, si->highest_bit);
-        next = ALIGN_DOWN(next, SWAP_ADDRESS_SPACE_PAGES);
-        next = max_t(unsigned int, next, si->lowest_bit);
-    }
-    this_cpu_write(*si->cluster_next_cpu, next);
-}
-
-static bool swap_offset_available_and_locked(struct swap_info_struct *si,
-                                             unsigned long offset)
-{
-    if (data_race(!si->swap_map[offset])) {
-        spin_lock(&si->lock);
-        return true;
-    }
-
-    if (vm_swap_full() && READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
-        spin_lock(&si->lock);
-        return true;
-    }
-
-    return false;
-}
-
 static int cluster_alloc_swap(struct swap_info_struct *si,
                               unsigned char usage, int nr,
                               swp_entry_t slots[], int order)
@@ -1071,13 +1028,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
                                unsigned char usage, int nr,
                                swp_entry_t slots[], int order)
 {
-    unsigned long offset;
-    unsigned long scan_base;
-    unsigned long last_in_cluster = 0;
-    int latency_ration = LATENCY_LIMIT;
     unsigned int nr_pages = 1 << order;
-    int n_ret = 0;
-    bool scanned_many = false;
 
     /*
      * We try to cluster swap pages by allocating them sequentially
@@ -1089,7 +1040,6 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
      * But we do now try to find an empty cluster. -Andrea
      * And we let swap pages go all over an SSD partition. Hugh
      */
-
     if (order > 0) {
         /*
          * Should not even be attempting large allocations when huge
@@ -1109,158 +1059,7 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
         return 0;
     }
 
-    if (si->cluster_info)
-        return cluster_alloc_swap(si, usage, nr, slots, order);
-
-    si->flags += SWP_SCANNING;
-
-    /* For HDD, sequential access is more important. */
-    scan_base = si->cluster_next;
-    offset = scan_base;
-
-    if (unlikely(!si->cluster_nr--)) {
-        if (si->pages - si->inuse_pages < SWAPFILE_CLUSTER) {
-            si->cluster_nr = SWAPFILE_CLUSTER - 1;
-            goto checks;
-        }
-
-        spin_unlock(&si->lock);
-
-        /*
-         * If seek is expensive, start searching for new cluster from
-         * start of partition, to minimize the span of allocated swap.
-         */
-        scan_base = offset = si->lowest_bit;
-        last_in_cluster = offset + SWAPFILE_CLUSTER - 1;
-
-        /* Locate the first empty (unaligned) cluster */
-        for (; last_in_cluster <= READ_ONCE(si->highest_bit); offset++) {
-            if (si->swap_map[offset])
-                last_in_cluster = offset + SWAPFILE_CLUSTER;
-            else if (offset == last_in_cluster) {
-                spin_lock(&si->lock);
-                offset -= SWAPFILE_CLUSTER - 1;
-                si->cluster_next = offset;
-                si->cluster_nr = SWAPFILE_CLUSTER - 1;
-                goto checks;
-            }
-            if (unlikely(--latency_ration < 0)) {
-                cond_resched();
-                latency_ration = LATENCY_LIMIT;
-            }
-        }
-
-        offset = scan_base;
-        spin_lock(&si->lock);
-        si->cluster_nr = SWAPFILE_CLUSTER - 1;
-    }
-
-checks:
-    if (!(si->flags & SWP_WRITEOK))
-        goto no_page;
-    if (!si->highest_bit)
-        goto no_page;
-    if (offset > si->highest_bit)
-        scan_base = offset = si->lowest_bit;
-
-    /* reuse swap entry of cache-only swap if not busy. */
-    if (vm_swap_full() && si->swap_map[offset] == SWAP_HAS_CACHE) {
-        int swap_was_freed;
-        spin_unlock(&si->lock);
-        swap_was_freed = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY | TTRS_DIRECT);
-        spin_lock(&si->lock);
-        /* entry was freed successfully, try to use this again */
-        if (swap_was_freed > 0)
-            goto checks;
-        goto scan; /* check next one */
-    }
-
-    if (si->swap_map[offset]) {
-        if (!n_ret)
-            goto scan;
-        else
-            goto done;
-    }
-    memset(si->swap_map + offset, usage, nr_pages);
-
-    swap_range_alloc(si, offset, nr_pages);
-    slots[n_ret++] = swp_entry(si->type, offset);
-
-    /* got enough slots or reach max slots? */
-    if ((n_ret == nr) || (offset >= si->highest_bit))
-        goto done;
-
-    /* search for next available slot */
-
-    /* time to take a break? */
-    if (unlikely(--latency_ration < 0)) {
-        if (n_ret)
-            goto done;
-        spin_unlock(&si->lock);
-        cond_resched();
-        spin_lock(&si->lock);
-        latency_ration = LATENCY_LIMIT;
-    }
-
-    if (si->cluster_nr && !si->swap_map[++offset]) {
-        /* non-ssd case, still more slots in cluster? */
-        --si->cluster_nr;
-        goto checks;
-    }
-
-    /*
-     * Even if there's no free clusters available (fragmented),
-     * try to scan a little more quickly with lock held unless we
-     * have scanned too many slots already.
-     */
-    if (!scanned_many) {
-        unsigned long scan_limit;
-
-        if (offset < scan_base)
-            scan_limit = scan_base;
-        else
-            scan_limit = si->highest_bit;
-        for (; offset <= scan_limit && --latency_ration > 0;
-             offset++) {
-            if (!si->swap_map[offset])
-                goto checks;
-        }
-    }
-
-done:
-    if (order == 0)
-        set_cluster_next(si, offset + 1);
-    si->flags -= SWP_SCANNING;
-    return n_ret;
-
-scan:
-    VM_WARN_ON(order > 0);
-    spin_unlock(&si->lock);
-    while (++offset <= READ_ONCE(si->highest_bit)) {
-        if (unlikely(--latency_ration < 0)) {
-            cond_resched();
-            latency_ration = LATENCY_LIMIT;
-            scanned_many = true;
-        }
-        if (swap_offset_available_and_locked(si, offset))
-            goto checks;
-    }
-    offset = si->lowest_bit;
-    while (offset < scan_base) {
-        if (unlikely(--latency_ration < 0)) {
-            cond_resched();
-            latency_ration = LATENCY_LIMIT;
-            scanned_many = true;
-        }
-        if (swap_offset_available_and_locked(si, offset))
-            goto checks;
-        offset++;
-    }
-    spin_lock(&si->lock);
-
-no_page:
-    si->flags -= SWP_SCANNING;
-    return n_ret;
+    return cluster_alloc_swap(si, usage, nr, slots, order);
 }
 
 int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
@@ -2871,8 +2670,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
     mutex_unlock(&swapon_mutex);
     free_percpu(p->percpu_cluster);
     p->percpu_cluster = NULL;
-    free_percpu(p->cluster_next_cpu);
-    p->cluster_next_cpu = NULL;
     vfree(swap_map);
     kvfree(zeromap);
     kvfree(cluster_info);
@@ -3184,8 +2981,6 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
     }
 
     si->lowest_bit = 1;
-    si->cluster_next = 1;
-    si->cluster_nr = 0;
 
     maxpages = swapfile_maximum_size;
     last_page = swap_header->info.last_page;
@@ -3271,7 +3066,6 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
                                                 unsigned long maxpages)
 {
     unsigned long nr_clusters = DIV_ROUND_UP(maxpages, SWAPFILE_CLUSTER);
-    unsigned long col = si->cluster_next / SWAPFILE_CLUSTER % SWAP_CLUSTER_COLS;
     struct swap_cluster_info *cluster_info;
     unsigned long i, j, k, idx;
     int cpu, err = -ENOMEM;
@@ -3283,15 +3077,6 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
     for (i = 0; i < nr_clusters; i++)
         spin_lock_init(&cluster_info[i].lock);
 
-    si->cluster_next_cpu = alloc_percpu(unsigned int);
-    if (!si->cluster_next_cpu)
-        goto err_free;
-
-    /* Random start position to help with wear leveling */
-    for_each_possible_cpu(cpu)
-        per_cpu(*si->cluster_next_cpu, cpu) =
-            get_random_u32_inclusive(1, si->highest_bit);
-
     si->percpu_cluster = alloc_percpu(struct percpu_cluster);
     if (!si->percpu_cluster)
         goto err_free;
@@ -3333,7 +3118,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
      * sharing same address space.
      */
     for (k = 0; k < SWAP_CLUSTER_COLS; k++) {
-        j = (k + col) % SWAP_CLUSTER_COLS;
+        j = k % SWAP_CLUSTER_COLS;
         for (i = 0; i < DIV_ROUND_UP(nr_clusters, SWAP_CLUSTER_COLS); i++) {
             struct swap_cluster_info *ci;
             idx = i * SWAP_CLUSTER_COLS + j;
@@ -3483,18 +3268,18 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 
     if (si->bdev && bdev_nonrot(si->bdev)) {
         si->flags |= SWP_SOLIDSTATE;
-
-        cluster_info = setup_clusters(si, swap_header, maxpages);
-        if (IS_ERR(cluster_info)) {
-            error = PTR_ERR(cluster_info);
-            cluster_info = NULL;
-            goto bad_swap_unlock_inode;
-        }
     } else {
         atomic_inc(&nr_rotate_swap);
         inced_nr_rotate_swap = true;
     }
 
+    cluster_info = setup_clusters(si, swap_header, maxpages);
+    if (IS_ERR(cluster_info)) {
+        error = PTR_ERR(cluster_info);
+        cluster_info = NULL;
+        goto bad_swap_unlock_inode;
+    }
+
     if ((swap_flags & SWAP_FLAG_DISCARD) &&
         si->bdev && bdev_max_discard_sectors(si->bdev)) {
         /*
@@ -3575,8 +3360,6 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 bad_swap:
     free_percpu(si->percpu_cluster);
     si->percpu_cluster = NULL;
-    free_percpu(si->cluster_next_cpu);
-    si->cluster_next_cpu = NULL;
     inode = NULL;
     destroy_swap_extents(si);
     swap_cgroup_swapoff(si->type);
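The structural effect of this patch fits in a few lines. The following
sketch uses made-up names (swap_dev, cluster_alloc, legacy_scan) and is a
simplification under the assumptions stated in the comments, not kernel
code: before, HDDs had no cluster_info and took a separate sequential-scan
allocator; after, setup_clusters() runs for every device at swapon, leaving
a single allocator path:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical device descriptor for this sketch. Before this patch,
     * cluster info was only set up for non-rotational devices. */
    struct swap_dev {
        bool has_clusters;  /* si->cluster_info != NULL in the old code */
    };

    static long cluster_alloc(struct swap_dev *dev) { (void)dev; return 42; }
    static long legacy_scan(struct swap_dev *dev)   { (void)dev; return 7;  }

    /* Old shape: two allocators, chosen per device type. */
    static long alloc_before(struct swap_dev *dev)
    {
        if (dev->has_clusters)
            return cluster_alloc(dev);
        return legacy_scan(dev);    /* HDD-only sequential scan, now deleted */
    }

    /* New shape: cluster info always exists, so every device, rotational
     * or not, takes the cluster allocator; no second code path and no
     * second set of bookkeeping (cluster_next, cluster_nr, ...). */
    static long alloc_after(struct swap_dev *dev)
    {
        return cluster_alloc(dev);
    }

    int main(void)
    {
        struct swap_dev hdd = { .has_clusters = false };

        printf("HDD before: %ld\n", alloc_before(&hdd));
        hdd.has_clusters = true;    /* always true after this patch */
        printf("HDD after:  %ld\n", alloc_after(&hdd));
        return 0;
    }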
From patchwork Mon Jan 13 17:57:23 2025
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13937870
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
    Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
    Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 04/13] mm, swap: use cluster lock for HDD
Date: Tue, 14 Jan 2025 01:57:23 +0800
Message-ID: <20250113175732.48099-5-ryncsn@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>

From: Kairui Song <ryncsn@gmail.com>

Cluster lock (ci->lock) was introduced to reduce contention for
certain operations. Using the cluster lock for HDD is not helpful, as
HDD performance is poor and locking isn't the bottleneck there. But
having a different set of locks for HDD and non-HDD devices prevents
further rework of the device lock (si->lock).

This commit simply changes all lock_cluster_or_swap_info calls to
lock_cluster, which is a safe and straightforward conversion since
cluster info is always allocated now; it also removes all
cluster_info-related checks.
Suggested-by: Chris Li
Signed-off-by: Kairui Song
Reviewed-by: Baoquan He
---
 mm/swapfile.c | 109 ++++++++++++++++----------------------------------
 1 file changed, 35 insertions(+), 74 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index fca58d43b836..83ebc24cc94b 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -58,10 +58,9 @@ static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry
 static void swap_range_alloc(struct swap_info_struct *si, unsigned long offset,
                              unsigned int nr_entries);
 static bool folio_swapcache_freeable(struct folio *folio);
-static struct swap_cluster_info *lock_cluster_or_swap_info(
-        struct swap_info_struct *si, unsigned long offset);
-static void unlock_cluster_or_swap_info(struct swap_info_struct *si,
-                                        struct swap_cluster_info *ci);
+static struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
+                                              unsigned long offset);
+static void unlock_cluster(struct swap_cluster_info *ci);
 
 static DEFINE_SPINLOCK(swap_lock);
 static unsigned int nr_swapfiles;
@@ -222,9 +221,9 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
      * swap_map is HAS_CACHE only, which means the slots have no page table
      * reference or pending writeback, and can't be allocated to others.
      */
-    ci = lock_cluster_or_swap_info(si, offset);
+    ci = lock_cluster(si, offset);
     need_reclaim = swap_is_has_cache(si, offset, nr_pages);
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
     if (!need_reclaim)
         goto out_unlock;
 
@@ -404,45 +403,15 @@ static inline struct swap_cluster_info *lock_cluster(struct swap_info_struct *si
 {
     struct swap_cluster_info *ci;
 
-    ci = si->cluster_info;
-    if (ci) {
-        ci += offset / SWAPFILE_CLUSTER;
-        spin_lock(&ci->lock);
-    }
-    return ci;
-}
-
-static inline void unlock_cluster(struct swap_cluster_info *ci)
-{
-    if (ci)
-        spin_unlock(&ci->lock);
-}
-
-/*
- * Determine the locking method in use for this device. Return
- * swap_cluster_info if SSD-style cluster-based locking is in place.
- */
-static inline struct swap_cluster_info *lock_cluster_or_swap_info(
-        struct swap_info_struct *si, unsigned long offset)
-{
-    struct swap_cluster_info *ci;
-
-    /* Try to use fine-grained SSD-style locking if available: */
-    ci = lock_cluster(si, offset);
-    /* Otherwise, fall back to traditional, coarse locking: */
-    if (!ci)
-        spin_lock(&si->lock);
+    ci = &si->cluster_info[offset / SWAPFILE_CLUSTER];
+    spin_lock(&ci->lock);
     return ci;
 }
 
-static inline void unlock_cluster_or_swap_info(struct swap_info_struct *si,
-                                               struct swap_cluster_info *ci)
+static inline void unlock_cluster(struct swap_cluster_info *ci)
 {
-    if (ci)
-        unlock_cluster(ci);
-    else
-        spin_unlock(&si->lock);
+    spin_unlock(&ci->lock);
 }
 
 /* Add a cluster to discard list and schedule it to do discard */
@@ -558,9 +527,6 @@ static void inc_cluster_info_page(struct swap_info_struct *si,
     unsigned long idx = page_nr / SWAPFILE_CLUSTER;
     struct swap_cluster_info *ci;
 
-    if (!cluster_info)
-        return;
-
     ci = cluster_info + idx;
     ci->count++;
 
@@ -576,9 +542,6 @@ static void inc_cluster_info_page(struct swap_info_struct *si,
 static void dec_cluster_info_page(struct swap_info_struct *si,
                                   struct swap_cluster_info *ci, int nr_pages)
 {
-    if (!si->cluster_info)
-        return;
-
     VM_BUG_ON(ci->count < nr_pages);
     VM_BUG_ON(cluster_is_free(ci));
     lockdep_assert_held(&si->lock);
@@ -940,7 +903,7 @@ static void swap_range_alloc(struct swap_info_struct *si, unsigned long offset,
             si->highest_bit = 0;
             del_from_avail_list(si);
 
-            if (si->cluster_info && vm_swap_full())
+            if (vm_swap_full())
                 schedule_work(&si->reclaim_work);
         }
     }
@@ -1007,8 +970,6 @@ static int cluster_alloc_swap(struct swap_info_struct *si,
 {
     int n_ret = 0;
 
-    VM_BUG_ON(!si->cluster_info);
-
     si->flags += SWP_SCANNING;
 
     while (n_ret < nr) {
@@ -1052,10 +1013,10 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
         }
 
         /*
-         * Swapfile is not block device or not using clusters so unable
+         * Swapfile is not block device so unable
          * to allocate large entries.
          */
-        if (!(si->flags & SWP_BLKDEV) || !si->cluster_info)
+        if (!(si->flags & SWP_BLKDEV))
             return 0;
     }
 
@@ -1295,9 +1256,9 @@ static unsigned char __swap_entry_free(struct swap_info_struct *si,
     unsigned long offset = swp_offset(entry);
     unsigned char usage;
 
-    ci = lock_cluster_or_swap_info(si, offset);
+    ci = lock_cluster(si, offset);
     usage = __swap_entry_free_locked(si, offset, 1);
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
     if (!usage)
         free_swap_slot(entry);
 
@@ -1320,14 +1281,14 @@ static bool __swap_entries_free(struct swap_info_struct *si,
     if (nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER)
         goto fallback;
 
-    ci = lock_cluster_or_swap_info(si, offset);
+    ci = lock_cluster(si, offset);
     if (!swap_is_last_map(si, offset, nr, &has_cache)) {
-        unlock_cluster_or_swap_info(si, ci);
+        unlock_cluster(ci);
         goto fallback;
     }
     for (i = 0; i < nr; i++)
         WRITE_ONCE(si->swap_map[offset + i], SWAP_HAS_CACHE);
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
 
     if (!has_cache) {
         for (i = 0; i < nr; i++)
@@ -1383,7 +1344,7 @@ static void cluster_swap_free_nr(struct swap_info_struct *si,
     DECLARE_BITMAP(to_free, BITS_PER_LONG) = { 0 };
     int i, nr;
 
-    ci = lock_cluster_or_swap_info(si, offset);
+    ci = lock_cluster(si, offset);
     while (nr_pages) {
         nr = min(BITS_PER_LONG, nr_pages);
         for (i = 0; i < nr; i++) {
@@ -1391,18 +1352,18 @@ static void cluster_swap_free_nr(struct swap_info_struct *si,
                 bitmap_set(to_free, i, 1);
         }
         if (!bitmap_empty(to_free, BITS_PER_LONG)) {
-            unlock_cluster_or_swap_info(si, ci);
+            unlock_cluster(ci);
             for_each_set_bit(i, to_free, BITS_PER_LONG)
                 free_swap_slot(swp_entry(si->type, offset + i));
             if (nr == nr_pages)
                 return;
             bitmap_clear(to_free, 0, BITS_PER_LONG);
-            ci = lock_cluster_or_swap_info(si, offset);
+            ci = lock_cluster(si, offset);
         }
         offset += nr;
         nr_pages -= nr;
     }
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
 }
 
 /*
@@ -1441,9 +1402,9 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
     if (!si)
         return;
 
-    ci = lock_cluster_or_swap_info(si, offset);
+    ci = lock_cluster(si, offset);
     if (size > 1 && swap_is_has_cache(si, offset, size)) {
-        unlock_cluster_or_swap_info(si, ci);
+        unlock_cluster(ci);
         spin_lock(&si->lock);
         swap_entry_range_free(si, entry, size);
         spin_unlock(&si->lock);
@@ -1451,14 +1412,14 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
     }
     for (int i = 0; i < size; i++, entry.val++) {
         if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE)) {
-            unlock_cluster_or_swap_info(si, ci);
+            unlock_cluster(ci);
             free_swap_slot(entry);
             if (i == size - 1)
                 return;
-            lock_cluster_or_swap_info(si, offset);
+            lock_cluster(si, offset);
         }
     }
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
 }
 
 static int swp_entry_cmp(const void *ent1, const void *ent2)
@@ -1522,9 +1483,9 @@ int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
     struct swap_cluster_info *ci;
     int count;
 
-    ci = lock_cluster_or_swap_info(si, offset);
+    ci = lock_cluster(si, offset);
     count = swap_count(si->swap_map[offset]);
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
     return count;
 }
 
@@ -1547,7 +1508,7 @@ int swp_swapcount(swp_entry_t entry)
 
     offset = swp_offset(entry);
 
-    ci = lock_cluster_or_swap_info(si, offset);
+    ci = lock_cluster(si, offset);
 
     count = swap_count(si->swap_map[offset]);
     if (!(count & COUNT_CONTINUED))
@@ -1570,7 +1531,7 @@ int swp_swapcount(swp_entry_t entry)
         n *= (SWAP_CONT_MAX + 1);
     } while (tmp_count & COUNT_CONTINUED);
 out:
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
     return count;
 }
@@ -1585,8 +1546,8 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
     int i;
     bool ret = false;
 
-    ci = lock_cluster_or_swap_info(si, offset);
-    if (!ci || nr_pages == 1) {
+    ci = lock_cluster(si, offset);
+    if (nr_pages == 1) {
         if (swap_count(map[roffset]))
             ret = true;
         goto unlock_out;
@@ -1598,7 +1559,7 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
         }
     }
 unlock_out:
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
     return ret;
 }
 
@@ -3428,7 +3389,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
     offset = swp_offset(entry);
     VM_WARN_ON(nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
     VM_WARN_ON(usage == 1 && nr > 1);
-    ci = lock_cluster_or_swap_info(si, offset);
+    ci = lock_cluster(si, offset);
 
     err = 0;
     for (i = 0; i < nr; i++) {
@@ -3483,7 +3444,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
     }
 
 unlock_out:
-    unlock_cluster_or_swap_info(si, ci);
+    unlock_cluster(ci);
     return err;
 }
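Why the conversion is mechanical: once cluster info always exists, every
swap offset maps to exactly one cluster lock, so the old "no cluster info,
fall back to si->lock" branch has nothing left to guard. A tiny userspace
model of the mapping (pthread mutexes standing in for the ci->lock
spinlocks; sizes are arbitrary sketch values, not kernel constants beyond
SWAPFILE_CLUSTER's usual 512):

    #include <pthread.h>
    #include <stdio.h>

    #define SWAPFILE_CLUSTER 512    /* slots per cluster */
    #define NR_CLUSTERS 8           /* sketch-sized device */

    /* Hypothetical per-cluster lock table standing in for
     * swap_cluster_info::lock. */
    static pthread_mutex_t cluster_locks[NR_CLUSTERS];

    /* After this patch lock_cluster() has no fallback branch: every
     * offset maps to exactly one cluster lock, on HDD just as on SSD. */
    static pthread_mutex_t *lock_cluster(unsigned long offset)
    {
        pthread_mutex_t *ci = &cluster_locks[offset / SWAPFILE_CLUSTER];

        pthread_mutex_lock(ci);
        return ci;
    }

    static void unlock_cluster(pthread_mutex_t *ci)
    {
        pthread_mutex_unlock(ci);
    }

    int main(void)
    {
        for (int i = 0; i < NR_CLUSTERS; i++)
            pthread_mutex_init(&cluster_locks[i], NULL);

        pthread_mutex_t *ci = lock_cluster(1000);   /* 1000/512 -> cluster 1 */
        printf("locked cluster %ld\n", 1000L / SWAPFILE_CLUSTER);
        unlock_cluster(ci);
        return 0;
    }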
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
    Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
    Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 05/13] mm, swap: clean up device availability check
Date: Tue, 14 Jan 2025 01:57:24 +0800
Message-ID: <20250113175732.48099-6-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>
From: Kairui Song

Remove highest_bit and lowest_bit. After the HDD allocation path has
been removed, the only purpose of these two fields is to determine
whether the device is full or not, which can instead be determined by
checking inuse_pages.
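To illustrate the resulting invariant, here is a minimal standalone
sketch (plain C, compilable outside the kernel; the struct and helper
names are invented for this example and are not part of the patch):

#include <stdbool.h>

/* Trimmed-down stand-in for struct swap_info_struct. */
struct swap_device {
	unsigned int pages;		/* total usable slots */
	unsigned int inuse_pages;	/* slots currently allocated */
};

/*
 * With the HDD scanning path gone, there is no free-range window to
 * maintain: a device is full exactly when every usable slot is in
 * use, so one comparison replaces the lowest_bit/highest_bit
 * bookkeeping.
 */
static bool swap_device_is_full(const struct swap_device *si)
{
	return si->inuse_pages == si->pages;
}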
Signed-off-by: Kairui Song
Reviewed-by: Baoquan He
---
 fs/btrfs/inode.c     |  1 -
 fs/f2fs/data.c       |  1 -
 fs/iomap/swapfile.c  |  1 -
 include/linux/swap.h |  2 --
 mm/page_io.c         |  1 -
 mm/swapfile.c        | 38 ++++++++------------------------------
 6 files changed, 8 insertions(+), 36 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 27b2fe7f735d..3b99b1e19371 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -10110,7 +10110,6 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
 	*span = bsi.highest_ppage - bsi.lowest_ppage + 1;
 	sis->max = bsi.nr_pages;
 	sis->pages = bsi.nr_pages - 1;
-	sis->highest_bit = bsi.nr_pages - 1;
 	return bsi.nr_extents;
 }
 #else
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index a2478c2afb3a..a9eddd782dbc 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -4043,7 +4043,6 @@ static int check_swap_activate(struct swap_info_struct *sis,
 	cur_lblock = 1;	/* force Empty message */
 	sis->max = cur_lblock;
 	sis->pages = cur_lblock - 1;
-	sis->highest_bit = cur_lblock - 1;
out:
 	if (not_aligned)
 		f2fs_warn(sbi, "Swapfile (%u) is not align to section: 1) creat(), 2) ioctl(F2FS_IOC_SET_PIN_FILE), 3) fallocate(%lu * N)",
diff --git a/fs/iomap/swapfile.c b/fs/iomap/swapfile.c
index 5fc0ac36dee3..b90d0eda9e51 100644
--- a/fs/iomap/swapfile.c
+++ b/fs/iomap/swapfile.c
@@ -189,7 +189,6 @@ int iomap_swapfile_activate(struct swap_info_struct *sis,
 	*pagespan = 1 + isi.highest_ppage - isi.lowest_ppage;
 	sis->max = isi.nr_pages;
 	sis->pages = isi.nr_pages - 1;
-	sis->highest_bit = isi.nr_pages - 1;
 	return isi.nr_extents;
 }
 EXPORT_SYMBOL_GPL(iomap_swapfile_activate);
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 0c681aa5cb98..0c222017b5c6 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -306,8 +306,6 @@ struct swap_info_struct {
 	struct list_head frag_clusters[SWAP_NR_ORDERS];
 					/* list of cluster that are fragmented or contented */
 	unsigned int frag_cluster_nr[SWAP_NR_ORDERS];
-	unsigned int lowest_bit;	/* index of first free in swap_map */
-	unsigned int highest_bit;	/* index of last free in swap_map */
 	unsigned int pages;		/* total of usable pages of swap */
 	unsigned int inuse_pages;	/* number of those currently in use */
 	struct percpu_cluster __percpu *percpu_cluster; /* per cpu's swap location */
diff --git a/mm/page_io.c b/mm/page_io.c
index 4b4ea8e49cf6..9b983de351f9 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -163,7 +163,6 @@ int generic_swapfile_activate(struct swap_info_struct *sis,
 	page_no = 1;	/* force Empty message */
 	sis->max = page_no;
 	sis->pages = page_no - 1;
-	sis->highest_bit = page_no - 1;
out:
 	return ret;
bad_bmap:
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 83ebc24cc94b..2686032d3510 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -55,7 +55,7 @@ static bool swap_count_continued(struct swap_info_struct *, pgoff_t,
 static void free_swap_count_continuations(struct swap_info_struct *);
 static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry,
 				  unsigned int nr_pages);
-static void swap_range_alloc(struct swap_info_struct *si, unsigned long offset,
+static void swap_range_alloc(struct swap_info_struct *si,
 			     unsigned int nr_entries);
 static bool folio_swapcache_freeable(struct folio *folio);
 static struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
@@ -650,7 +650,7 @@ static bool cluster_alloc_range(struct swap_info_struct *si, struct swap_cluster
 	}

 	memset(si->swap_map + start, usage, nr_pages);
-	swap_range_alloc(si, start, nr_pages);
+	swap_range_alloc(si, nr_pages);
 	ci->count += nr_pages;

 	if (ci->count == SWAPFILE_CLUSTER) {
@@ -888,19 +888,11 @@ static void del_from_avail_list(struct swap_info_struct *si)
 	spin_unlock(&swap_avail_lock);
 }

-static void swap_range_alloc(struct swap_info_struct *si, unsigned long offset,
+static void swap_range_alloc(struct swap_info_struct *si,
 			     unsigned int nr_entries)
 {
-	unsigned int end = offset + nr_entries - 1;
-
-	if (offset == si->lowest_bit)
-		si->lowest_bit += nr_entries;
-	if (end == si->highest_bit)
-		WRITE_ONCE(si->highest_bit, si->highest_bit - nr_entries);
 	WRITE_ONCE(si->inuse_pages, si->inuse_pages + nr_entries);
 	if (si->inuse_pages == si->pages) {
-		si->lowest_bit = si->max;
-		si->highest_bit = 0;
 		del_from_avail_list(si);

 		if (vm_swap_full())
@@ -933,15 +925,8 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	for (i = 0; i < nr_entries; i++)
 		clear_bit(offset + i, si->zeromap);

-	if (offset < si->lowest_bit)
-		si->lowest_bit = offset;
-	if (end > si->highest_bit) {
-		bool was_full = !si->highest_bit;
-
-		WRITE_ONCE(si->highest_bit, end);
-		if (was_full && (si->flags & SWP_WRITEOK))
-			add_to_avail_list(si);
-	}
+	if (si->inuse_pages == si->pages)
+		add_to_avail_list(si);
 	if (si->flags & SWP_BLKDEV)
 		swap_slot_free_notify =
 			si->bdev->bd_disk->fops->swap_slot_free_notify;
@@ -1051,15 +1036,12 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
 		plist_requeue(&si->avail_lists[node], &swap_avail_heads[node]);
 		spin_unlock(&swap_avail_lock);
 		spin_lock(&si->lock);
-		if (!si->highest_bit || !(si->flags & SWP_WRITEOK)) {
+		if ((si->inuse_pages == si->pages) || !(si->flags & SWP_WRITEOK)) {
 			spin_lock(&swap_avail_lock);
 			if (plist_node_empty(&si->avail_lists[node])) {
 				spin_unlock(&si->lock);
 				goto nextsi;
 			}
-			WARN(!si->highest_bit,
-			     "swap_info %d in list but !highest_bit\n",
-			     si->type);
 			WARN(!(si->flags & SWP_WRITEOK),
 			     "swap_info %d in list but !SWP_WRITEOK\n",
 			     si->type);
@@ -2441,8 +2423,8 @@ static void _enable_swap_info(struct swap_info_struct *si)
 	 */
 	plist_add(&si->list, &swap_active_head);

-	/* add to available list iff swap device is not full */
-	if (si->highest_bit)
+	/* add to available list if swap device is not full */
+	if (si->inuse_pages < si->pages)
 		add_to_avail_list(si);
 }

@@ -2606,7 +2588,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	drain_mmlist();

 	/* wait for anyone still in scan_swap_map_slots */
-	p->highest_bit = 0;		/* cuts scans short */
 	while (p->flags >= SWP_SCANNING) {
 		spin_unlock(&p->lock);
 		spin_unlock(&swap_lock);
@@ -2941,8 +2922,6 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
 		return 0;
 	}

-	si->lowest_bit  = 1;
-
 	maxpages = swapfile_maximum_size;
 	last_page = swap_header->info.last_page;
 	if (!last_page) {
@@ -2959,7 +2938,6 @@ static unsigned long read_swap_header(struct swap_info_struct *si,
 		if ((unsigned int)maxpages == 0)
 			maxpages = UINT_MAX;
 	}
-	si->highest_bit = maxpages - 1;

 	if (!maxpages)
 		return 0;

From patchwork Mon Jan 13 17:57:25 2025
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13937872
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
    Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
    Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 06/13] mm, swap: clean up plist removal and adding
Date: Tue, 14 Jan 2025 01:57:25 +0800
Message-ID: <20250113175732.48099-7-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>
From: Kairui Song

When the swap device is full (inuse_pages == pages), it should be
removed from the allocation available plist. If any slot is freed, the
swap device should be added back to the plist. Additionally, during
swapon or swapoff, the swap device is forcefully added or removed.

Currently, the condition (inuse_pages == pages) is checked after every
counter update, and the device is then removed from or added to the
plist accordingly. This is serialized by si->lock.

This commit decouples it from the protection of si->lock and reworks
plist removal and adding, making it possible to get rid of the hard
dependency on si->lock in the allocation path in later commits.

To achieve this, simply using another lock is not an optimal approach,
as the overhead is observable for a hot counter, and it may cause
complex locking issues. Thus, this commit manages to make it a
lock-free atomic operation, by embedding the plist state into the
second highest bit of the atomic counter.

Simply making the counter atomic will not work: if the update and the
plist status check are not performed atomically, we may miss an
addition or removal. With the embedded info we can update the counter
and check the plist status with a single atomic operation, and avoid
any extra overhead:

If the counter is full (inuse_pages == pages) and the off-list bit is
unset, we attempt to remove it from the plist. If the counter is not
full (inuse_pages != pages) and the off-list bit is set, we attempt to
add it back to the plist.

Removing, adding and the bit update are serialized with a lock, which
is a cold path. Ordinary counter updates will be lock-free.
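The counter trick can be modeled outside the kernel roughly as below
(a sketch using C11 atomics; OFFLIST_BIT, usage_add and usage_sub are
names invented for this example, and the cold-path locking around the
actual list movement is elided):

#include <stdatomic.h>
#include <stdbool.h>

#define OFFLIST_BIT	(1UL << 30)	/* mirrors SWAP_USAGE_OFFLIST_BIT */

struct dev {
	unsigned long pages;		/* capacity, never changes */
	atomic_ulong inuse;		/* usage count with embedded list state */
};

/*
 * Allocation side: one atomic add yields both the new count and the
 * off-list bit, so at most one updater observes the "just became
 * full" transition (equality implies the bit was clear) and takes the
 * device off the plist under a cold-path lock.
 */
static bool usage_add(struct dev *d, unsigned long nr)
{
	unsigned long val = atomic_fetch_add_explicit(&d->inuse, nr,
						      memory_order_relaxed) + nr;
	return val == d->pages;		/* caller then removes it from the plist */
}

/*
 * Free side: if the off-list bit is still set after subtracting, the
 * device is off-list but no longer full, so the caller re-adds it.
 */
static bool usage_sub(struct dev *d, unsigned long nr)
{
	unsigned long val = atomic_fetch_sub_explicit(&d->inuse, nr,
						      memory_order_relaxed) - nr;
	return (val & OFFLIST_BIT) != 0;	/* caller then re-adds it */
}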
Signed-off-by: Kairui Song
---
 include/linux/swap.h |   2 +-
 mm/swapfile.c        | 186 +++++++++++++++++++++++++++++++------------
 2 files changed, 138 insertions(+), 50 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 0c222017b5c6..e1eeea6307cd 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -307,7 +307,7 @@ struct swap_info_struct {
 					/* list of cluster that are fragmented or contented */
 	unsigned int frag_cluster_nr[SWAP_NR_ORDERS];
 	unsigned int pages;		/* total of usable pages of swap */
-	unsigned int inuse_pages;	/* number of those currently in use */
+	atomic_long_t inuse_pages;	/* number of those currently in use */
 	struct percpu_cluster __percpu *percpu_cluster; /* per cpu's swap location */
 	struct rb_root swap_extent_root;/* root of the swap extent rbtree */
 	struct block_device *bdev;	/* swap device or bdev of swap file */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 2686032d3510..91faf2073006 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -128,6 +128,26 @@ static inline unsigned char swap_count(unsigned char ent)
 	return ent & ~SWAP_HAS_CACHE;	/* may include COUNT_CONTINUED flag */
 }

+/*
+ * Use the second highest bit of inuse_pages counter as the indicator
+ * if one swap device is on the available plist, so the atomic can
+ * still be updated arithmetically while having special data embedded.
+ *
+ * inuse_pages counter is the only thing indicating if a device should
+ * be on avail_lists or not (except swapon / swapoff). By embedding the
+ * off-list bit in the atomic counter, updates no longer need any lock
+ * to check the list status.
+ *
+ * This bit will be set if the device is not on the plist and not
+ * usable, will be cleared if the device is on the plist.
+ */
+#define SWAP_USAGE_OFFLIST_BIT (1UL << (BITS_PER_TYPE(atomic_t) - 2))
+#define SWAP_USAGE_COUNTER_MASK (~SWAP_USAGE_OFFLIST_BIT)
+static long swap_usage_in_pages(struct swap_info_struct *si)
+{
+	return atomic_long_read(&si->inuse_pages) & SWAP_USAGE_COUNTER_MASK;
+}
+
 /* Reclaim the swap entry anyway if possible */
 #define TTRS_ANYWAY		0x1
 /*
@@ -717,7 +737,7 @@ static void swap_reclaim_full_clusters(struct swap_info_struct *si, bool force)
 	int nr_reclaim;

 	if (force)
-		to_scan = si->inuse_pages / SWAPFILE_CLUSTER;
+		to_scan = swap_usage_in_pages(si) / SWAPFILE_CLUSTER;

 	while (!list_empty(&si->full_clusters)) {
 		ci = list_first_entry(&si->full_clusters, struct swap_cluster_info, list);
@@ -872,42 +892,128 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
 	return found;
 }

-static void __del_from_avail_list(struct swap_info_struct *si)
+/* SWAP_USAGE_OFFLIST_BIT can only be set by this helper. */
+static void del_from_avail_list(struct swap_info_struct *si, bool swapoff)
 {
 	int nid;
+	unsigned long pages;
+
+	spin_lock(&swap_avail_lock);
+
+	if (swapoff) {
+		/*
+		 * Forcefully remove it. Clear the SWP_WRITEOK flags for
+		 * swapoff here so it's synchronized by both si->lock and
+		 * swap_avail_lock, to ensure the result can be seen by
+		 * add_to_avail_list.
+		 */
+		lockdep_assert_held(&si->lock);
+		si->flags &= ~SWP_WRITEOK;
+		atomic_long_or(SWAP_USAGE_OFFLIST_BIT, &si->inuse_pages);
+	} else {
+		/*
+		 * If not called by swapoff, take it off-list only if it's
+		 * full and SWAP_USAGE_OFFLIST_BIT is not set (strictly
+		 * si->inuse_pages == pages), any concurrent slot freeing,
+		 * or device already removed from plist by someone else
+		 * will make this return false.
+		 */
+		pages = si->pages;
+		if (!atomic_long_try_cmpxchg(&si->inuse_pages, &pages,
+					     pages | SWAP_USAGE_OFFLIST_BIT))
+			goto skip;
+	}

-	assert_spin_locked(&si->lock);
 	for_each_node(nid)
 		plist_del(&si->avail_lists[nid], &swap_avail_heads[nid]);
+
+skip:
+	spin_unlock(&swap_avail_lock);
 }

-static void del_from_avail_list(struct swap_info_struct *si)
+/* SWAP_USAGE_OFFLIST_BIT can only be cleared by this helper. */
+static void add_to_avail_list(struct swap_info_struct *si, bool swapon)
 {
+	int nid;
+	long val;
+	unsigned long pages;
+
 	spin_lock(&swap_avail_lock);
-	__del_from_avail_list(si);
+
+	/* Corresponding to SWP_WRITEOK clearing in del_from_avail_list */
+	if (swapon) {
+		lockdep_assert_held(&si->lock);
+		si->flags |= SWP_WRITEOK;
+	} else {
+		if (!(READ_ONCE(si->flags) & SWP_WRITEOK))
+			goto skip;
+	}
+
+	if (!(atomic_long_read(&si->inuse_pages) & SWAP_USAGE_OFFLIST_BIT))
+		goto skip;
+
+	val = atomic_long_fetch_and_relaxed(~SWAP_USAGE_OFFLIST_BIT, &si->inuse_pages);
+
+	/*
+	 * When device is full and device is on the plist, only one updater will
+	 * see (inuse_pages == si->pages) and will call del_from_avail_list. If
+	 * that updater happens to be here, just skip adding.
+	 */
+	pages = si->pages;
+	if (val == pages) {
+		/* Just like the cmpxchg in del_from_avail_list */
+		if (atomic_long_try_cmpxchg(&si->inuse_pages, &pages,
+					    pages | SWAP_USAGE_OFFLIST_BIT))
+			goto skip;
+	}
+
+	for_each_node(nid)
+		plist_add(&si->avail_lists[nid], &swap_avail_heads[nid]);
+
+skip:
 	spin_unlock(&swap_avail_lock);
 }

-static void swap_range_alloc(struct swap_info_struct *si,
-			     unsigned int nr_entries)
+/*
+ * swap_usage_add / swap_usage_sub of each slot are serialized by ci->lock
+ * within each cluster, so the total contribution to the global counter should
+ * always be positive and cannot exceed the total number of usable slots.
+ */
+static bool swap_usage_add(struct swap_info_struct *si, unsigned int nr_entries)
 {
-	WRITE_ONCE(si->inuse_pages, si->inuse_pages + nr_entries);
-	if (si->inuse_pages == si->pages) {
-		del_from_avail_list(si);
+	long val = atomic_long_add_return_relaxed(nr_entries, &si->inuse_pages);

-		if (vm_swap_full())
-			schedule_work(&si->reclaim_work);
+	/*
+	 * If device is full, and SWAP_USAGE_OFFLIST_BIT is not set,
+	 * remove it from the plist.
+	 */
+	if (unlikely(val == si->pages)) {
+		del_from_avail_list(si, false);
+		return true;
 	}
+
+	return false;
 }

-static void add_to_avail_list(struct swap_info_struct *si)
+static void swap_usage_sub(struct swap_info_struct *si, unsigned int nr_entries)
 {
-	int nid;
+	long val = atomic_long_sub_return_relaxed(nr_entries, &si->inuse_pages);

-	spin_lock(&swap_avail_lock);
-	for_each_node(nid)
-		plist_add(&si->avail_lists[nid], &swap_avail_heads[nid]);
-	spin_unlock(&swap_avail_lock);
+	/*
+	 * If device is not full, and SWAP_USAGE_OFFLIST_BIT is set,
+	 * add it back to the plist.
+	 */
+	if (unlikely(val & SWAP_USAGE_OFFLIST_BIT))
+		add_to_avail_list(si, false);
+}
+
+static void swap_range_alloc(struct swap_info_struct *si,
+			     unsigned int nr_entries)
+{
+	if (swap_usage_add(si, nr_entries)) {
+		if (vm_swap_full())
+			schedule_work(&si->reclaim_work);
+	}
 }

 static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
@@ -925,8 +1031,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	for (i = 0; i < nr_entries; i++)
 		clear_bit(offset + i, si->zeromap);

-	if (si->inuse_pages == si->pages)
-		add_to_avail_list(si);
 	if (si->flags & SWP_BLKDEV)
 		swap_slot_free_notify =
 			si->bdev->bd_disk->fops->swap_slot_free_notify;
@@ -946,7 +1050,7 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	 */
 	smp_wmb();
 	atomic_long_add(nr_entries, &nr_swap_pages);
-	WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
+	swap_usage_sub(si, nr_entries);
 }

 static int cluster_alloc_swap(struct swap_info_struct *si,
@@ -1036,19 +1140,6 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
 		plist_requeue(&si->avail_lists[node], &swap_avail_heads[node]);
 		spin_unlock(&swap_avail_lock);
 		spin_lock(&si->lock);
-		if ((si->inuse_pages == si->pages) || !(si->flags & SWP_WRITEOK)) {
-			spin_lock(&swap_avail_lock);
-			if (plist_node_empty(&si->avail_lists[node])) {
-				spin_unlock(&si->lock);
-				goto nextsi;
-			}
-			WARN(!(si->flags & SWP_WRITEOK),
-			     "swap_info %d in list but !SWP_WRITEOK\n",
-			     si->type);
-			__del_from_avail_list(si);
-			spin_unlock(&si->lock);
-			goto nextsi;
-		}
 		n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
 					    n_goal, swp_entries, order);
 		spin_unlock(&si->lock);
@@ -1057,7 +1148,6 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
 		cond_resched();

 		spin_lock(&swap_avail_lock);
-nextsi:
 		/*
 		 * if we got here, it's likely that si was almost full before,
 		 * and since scan_swap_map_slots() can drop the si->lock,
@@ -1789,7 +1879,7 @@ unsigned int count_swap_pages(int type, int free)
 		if (sis->flags & SWP_WRITEOK) {
 			n = sis->pages;
 			if (free)
-				n -= sis->inuse_pages;
+				n -= swap_usage_in_pages(sis);
 		}
 		spin_unlock(&sis->lock);
 	}
@@ -2124,7 +2214,7 @@ static int try_to_unuse(unsigned int type)
 	swp_entry_t entry;
 	unsigned int i;

-	if (!READ_ONCE(si->inuse_pages))
+	if (!swap_usage_in_pages(si))
 		goto success;

retry:
@@ -2137,7 +2227,7 @@ static int try_to_unuse(unsigned int type)

 	spin_lock(&mmlist_lock);
 	p = &init_mm.mmlist;
-	while (READ_ONCE(si->inuse_pages) &&
+	while (swap_usage_in_pages(si) &&
 	       !signal_pending(current) &&
 	       (p = p->next) != &init_mm.mmlist) {

@@ -2165,7 +2255,7 @@ static int try_to_unuse(unsigned int type)
 	mmput(prev_mm);

 	i = 0;
-	while (READ_ONCE(si->inuse_pages) &&
+	while (swap_usage_in_pages(si) &&
 	       !signal_pending(current) &&
 	       (i = find_next_to_unuse(si, i)) != 0) {

@@ -2200,7 +2290,7 @@ static int try_to_unuse(unsigned int type)
 	 * folio_alloc_swap(), temporarily hiding that swap. It's easy
 	 * and robust (though cpu-intensive) just to keep retrying.
 	 */
-	if (READ_ONCE(si->inuse_pages)) {
+	if (swap_usage_in_pages(si)) {
 		if (!signal_pending(current))
 			goto retry;
 		return -EINTR;
@@ -2227,7 +2317,7 @@ static void drain_mmlist(void)
 	unsigned int type;

 	for (type = 0; type < nr_swapfiles; type++)
-		if (swap_info[type]->inuse_pages)
+		if (swap_usage_in_pages(swap_info[type]))
 			return;
 	spin_lock(&mmlist_lock);
 	list_for_each_safe(p, next, &init_mm.mmlist)
@@ -2406,7 +2496,6 @@ static void setup_swap_info(struct swap_info_struct *si, int prio,

 static void _enable_swap_info(struct swap_info_struct *si)
 {
-	si->flags |= SWP_WRITEOK;
 	atomic_long_add(si->pages, &nr_swap_pages);
 	total_swap_pages += si->pages;

@@ -2423,9 +2512,8 @@ static void _enable_swap_info(struct swap_info_struct *si)
 	 */
 	plist_add(&si->list, &swap_active_head);

-	/* add to available list if swap device is not full */
-	if (si->inuse_pages < si->pages)
-		add_to_avail_list(si);
+	/* Add back to available list */
+	add_to_avail_list(si, true);
 }

 static void enable_swap_info(struct swap_info_struct *si, int prio,
@@ -2523,7 +2611,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 		goto out_dput;
 	}
 	spin_lock(&p->lock);
-	del_from_avail_list(p);
+	del_from_avail_list(p, true);
 	if (p->prio < 0) {
 		struct swap_info_struct *si = p;
 		int nid;
@@ -2541,7 +2629,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	plist_del(&p->list, &swap_active_head);
 	atomic_long_sub(p->pages, &nr_swap_pages);
 	total_swap_pages -= p->pages;
-	p->flags &= ~SWP_WRITEOK;
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);

@@ -2721,7 +2808,7 @@ static int swap_show(struct seq_file *swap, void *v)
 	}

 	bytes = K(si->pages);
-	inuse = K(READ_ONCE(si->inuse_pages));
+	inuse = K(swap_usage_in_pages(si));

 	file = si->swap_file;
 	len = seq_file_path(swap, file, " \t\n\\");
@@ -2838,6 +2925,7 @@ static struct swap_info_struct *alloc_swap_info(void)
 	}
 	spin_lock_init(&p->lock);
 	spin_lock_init(&p->cont_lock);
+	atomic_long_set(&p->inuse_pages, SWAP_USAGE_OFFLIST_BIT);
 	init_completion(&p->comp);
 	return p;

@@ -3335,7 +3423,7 @@ void si_swapinfo(struct sysinfo *val)
 		struct swap_info_struct *si = swap_info[type];

 		if ((si->flags & SWP_USED) && !(si->flags & SWP_WRITEOK))
-			nr_to_be_unused += READ_ONCE(si->inuse_pages);
+			nr_to_be_unused += swap_usage_in_pages(si);
 	}
 	val->freeswap = atomic_long_read(&nr_swap_pages) + nr_to_be_unused;
 	val->totalswap = total_swap_pages + nr_to_be_unused;

From patchwork Mon Jan 13 17:57:26 2025
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13937873
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
    Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
    Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 07/13] mm, swap: hold a reference during scan and cleanup flag usage
Date: Tue, 14 Jan 2025 01:57:26 +0800
Message-ID: <20250113175732.48099-8-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>

From: Kairui Song

The flag SWP_SCANNING was used as an indicator of whether a device is
being scanned for allocation, and it prevents swapoff. Combined with
SWP_WRITEOK, they work as a set of barriers for a clean swapoff:

1. Swapoff clears SWP_WRITEOK; allocation requests will see
   ~SWP_WRITEOK and abort, as this is serialized by si->lock.
2. Swapoff unuses all allocated entries.
3. Swapoff waits for the SWP_SCANNING flag to be cleared, so ongoing
   allocations will stop, preventing UAF.
4. Now swapoff can free everything safely.

This makes the allocation path have a hard dependency on si->lock:
allocation always has to acquire si->lock first to set SWP_SCANNING
and check SWP_WRITEOK.

This commit removes the flag and just uses the existing per-CPU
refcount instead to prevent UAF in step 3, which serves well for such
usage without any dependency on si->lock, and scales very well too.
Just hold a reference during the whole scan and allocation process.
Swapoff will kill and wait for the counter. See the condensed sketch
after this message for the shape of the new rule.

And to prevent any allocation from happening after step 1, so the
unuse in step 2 can ensure all slots are free, swapoff will acquire
the ci->lock of each cluster one by one to ensure all allocations see
~SWP_WRITEOK and abort.

This way these dependencies on si->lock are gone. And it's worth
noting we can't kill the refcount as the first step of swapoff, as the
unuse process has to acquire the refcount.
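Condensed from the diff below and the existing swapoff path, the new
lifetime rule looks roughly like this (a sketch, not drop-in code; the
surrounding error handling and locking are omitted):

	/* Allocation side: pin the device across the whole scan. */
	if (get_swap_device_info(si)) {
		n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
					    n_goal, swp_entries, order);
		put_swap_device(si);
	}

	/*
	 * Swapoff side (simplified): stop new users, then wait for the
	 * existing ones; the completion fires once the last reference
	 * to si->users is dropped.
	 */
	percpu_ref_kill(&si->users);
	wait_for_completion(&si->comp);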
Signed-off-by: Kairui Song
---
 include/linux/swap.h |  1 -
 mm/swapfile.c        | 90 ++++++++++++++++++++++++++++----------------
 2 files changed, 57 insertions(+), 34 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index e1eeea6307cd..02120f1005d5 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -219,7 +219,6 @@ enum {
 	SWP_STABLE_WRITES = (1 << 11),	/* no overwrite PG_writeback pages */
 	SWP_SYNCHRONOUS_IO = (1 << 12),	/* synchronous IO is efficient */
 					/* add others here before... */
-	SWP_SCANNING	= (1 << 14),	/* refcount in scan_swap_map */
 };

 #define SWAP_CLUSTER_MAX 32UL
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 91faf2073006..3898576f947a 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -658,6 +658,8 @@ static bool cluster_alloc_range(struct swap_info_struct *si, struct swap_cluster
 {
 	unsigned int nr_pages = 1 << order;

+	lockdep_assert_held(&ci->lock);
+
 	if (!(si->flags & SWP_WRITEOK))
 		return false;

@@ -1059,8 +1061,6 @@ static int cluster_alloc_swap(struct swap_info_struct *si,
 {
 	int n_ret = 0;

-	si->flags += SWP_SCANNING;
-
 	while (n_ret < nr) {
 		unsigned long offset = cluster_alloc_swap_entry(si, order, usage);

@@ -1069,8 +1069,6 @@ static int cluster_alloc_swap(struct swap_info_struct *si,
 		slots[n_ret++] = swp_entry(si->type, offset);
 	}

-	si->flags -= SWP_SCANNING;
-
 	return n_ret;
 }

@@ -1112,6 +1110,22 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 	return cluster_alloc_swap(si, usage, nr, slots, order);
 }

+static bool get_swap_device_info(struct swap_info_struct *si)
+{
+	if (!percpu_ref_tryget_live(&si->users))
+		return false;
+	/*
+	 * Guarantee the si->users are checked before accessing other
+	 * fields of swap_info_struct, and si->flags (SWP_WRITEOK) is
+	 * up to date.
+	 *
+	 * Paired with the spin_unlock() after setup_swap_info() in
+	 * enable_swap_info(), and smp_wmb() in swapoff.
+	 */
+	smp_rmb();
+	return true;
+}
+
 int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
 {
 	int order = swap_entry_order(entry_order);
@@ -1139,13 +1153,16 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
 		/* requeue si to after same-priority siblings */
 		plist_requeue(&si->avail_lists[node], &swap_avail_heads[node]);
 		spin_unlock(&swap_avail_lock);
-		spin_lock(&si->lock);
-		n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
-					    n_goal, swp_entries, order);
-		spin_unlock(&si->lock);
-		if (n_ret || size > 1)
-			goto check_out;
-		cond_resched();
+		if (get_swap_device_info(si)) {
+			spin_lock(&si->lock);
+			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
+						    n_goal, swp_entries, order);
+			spin_unlock(&si->lock);
+			put_swap_device(si);
+			if (n_ret || size > 1)
+				goto check_out;
+			cond_resched();
+		}

 		spin_lock(&swap_avail_lock);
 		/*
@@ -1296,16 +1313,8 @@ struct swap_info_struct *get_swap_device(swp_entry_t entry)
 	si = swp_swap_info(entry);
 	if (!si)
 		goto bad_nofile;
-	if (!percpu_ref_tryget_live(&si->users))
+	if (!get_swap_device_info(si))
 		goto out;
-	/*
-	 * Guarantee the si->users are checked before accessing other
-	 * fields of swap_info_struct.
-	 *
-	 * Paired with the spin_unlock() after setup_swap_info() in
-	 * enable_swap_info().
-	 */
-	smp_rmb();
 	offset = swp_offset(entry);
 	if (offset >= si->max)
 		goto put_out;
@@ -1785,10 +1794,13 @@ swp_entry_t get_swap_page_of_type(int type)
 		goto fail;

 	/* This is called for allocating swap entry, not cache */
-	spin_lock(&si->lock);
-	if ((si->flags & SWP_WRITEOK) && scan_swap_map_slots(si, 1, 1, &entry, 0))
-		atomic_long_dec(&nr_swap_pages);
-	spin_unlock(&si->lock);
+	if (get_swap_device_info(si)) {
+		spin_lock(&si->lock);
+		if ((si->flags & SWP_WRITEOK) && scan_swap_map_slots(si, 1, 1, &entry, 0))
+			atomic_long_dec(&nr_swap_pages);
+		spin_unlock(&si->lock);
+		put_swap_device(si);
+	}
fail:
 	return entry;
 }
@@ -2562,6 +2574,25 @@ bool has_usable_swap(void)
 	return ret;
 }

+/*
+ * Called after clearing SWP_WRITEOK, ensures cluster_alloc_range
+ * sees the updated flags, so there will be no more allocations.
+ */
+static void wait_for_allocation(struct swap_info_struct *si)
+{
+	unsigned long offset;
+	unsigned long end = ALIGN(si->max, SWAPFILE_CLUSTER);
+	struct swap_cluster_info *ci;
+
+	BUG_ON(si->flags & SWP_WRITEOK);
+
+	for (offset = 0; offset < end; offset += SWAPFILE_CLUSTER) {
+		ci = lock_cluster(si, offset);
+		unlock_cluster(ci);
+	}
+}
+
 SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 {
 	struct swap_info_struct *p = NULL;
@@ -2632,6 +2663,8 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);

+	wait_for_allocation(p);
+
 	disable_swap_slots_cache_lock();

 	set_current_oom_origin();
@@ -2674,15 +2707,6 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	spin_lock(&p->lock);
 	drain_mmlist();

-	/* wait for anyone still in scan_swap_map_slots */
-	while (p->flags >= SWP_SCANNING) {
-		spin_unlock(&p->lock);
-		spin_unlock(&swap_lock);
-		schedule_timeout_uninterruptible(1);
-		spin_lock(&swap_lock);
-		spin_lock(&p->lock);
-	}
-
 	swap_file = p->swap_file;
 	p->swap_file = NULL;
 	p->max = 0;

From patchwork Mon Jan 13 17:57:27 2025
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13937874
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
    Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
    Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 08/13] mm, swap: use an enum to define all cluster flags and wrap flags changes
Date: Tue, 14 Jan 2025 01:57:27 +0800
Message-ID: <20250113175732.48099-9-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>
From: Kairui Song

Currently, we are only using flags to indicate which list the cluster
is on. Using one bit for each list type might be a waste: as the
number of list types grows, we will consume too many bits.
Additionally, the current mixed usage of '&' and '==' is a bit
confusing.

Make it clean by using an enum to define all possible cluster
statuses. Only an off-list cluster will have the NONE (0) flag. And
use a wrapper to annotate and sanitize all flag settings and list
movements.
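As a standalone illustration of the pattern (a state enum plus a
single helper that is the only place allowed to change state; a toy
model in plain C, with invented names, and per-state counters standing
in for the actual cluster lists):

#include <assert.h>

enum state { ST_NONE = 0, ST_FREE, ST_NONFULL, ST_FRAG, ST_FULL, ST_MAX };

struct item {
	enum state flags;	/* exactly one state at a time, compared with '==' */
};

static unsigned int nr_in_state[ST_MAX];

/*
 * Single choke point for all state transitions: no-op moves are
 * asserted against and the statistics stay consistent, which is what
 * the real move_cluster() enforces for list membership and
 * frag_cluster_nr.
 */
static void move_item(struct item *it, enum state new_state)
{
	assert(it->flags != new_state);

	if (it->flags != ST_NONE)
		nr_in_state[it->flags]--;	/* leaving the old list */
	nr_in_state[new_state]++;
	it->flags = new_state;
}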
+enum swap_cluster_flags {
+    CLUSTER_FLAG_NONE = 0, /* For temporary off-list cluster */
+    CLUSTER_FLAG_FREE,
+    CLUSTER_FLAG_NONFULL,
+    CLUSTER_FLAG_FRAG,
+    /* Clusters with flags above are allocatable */
+    CLUSTER_FLAG_USABLE = CLUSTER_FLAG_FRAG,
+    CLUSTER_FLAG_FULL,
+    CLUSTER_FLAG_DISCARD,
+    CLUSTER_FLAG_MAX,
+};
 
 /*
  * The first page in the swap file is the swap header, which is always marked

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 3898576f947a..b754c9e16c3b 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -403,7 +403,7 @@ static void discard_swap_cluster(struct swap_info_struct *si,
 
 static inline bool cluster_is_free(struct swap_cluster_info *info)
 {
-    return info->flags & CLUSTER_FLAG_FREE;
+    return info->flags == CLUSTER_FLAG_FREE;
 }
 
 static inline unsigned int cluster_index(struct swap_info_struct *si,
@@ -434,6 +434,28 @@ static inline void unlock_cluster(struct swap_cluster_info *ci)
     spin_unlock(&ci->lock);
 }
 
+static void move_cluster(struct swap_info_struct *si,
+             struct swap_cluster_info *ci, struct list_head *list,
+             enum swap_cluster_flags new_flags)
+{
+    VM_WARN_ON(ci->flags == new_flags);
+
+    BUILD_BUG_ON(1 << sizeof(ci->flags) * BITS_PER_BYTE < CLUSTER_FLAG_MAX);
+
+    if (ci->flags == CLUSTER_FLAG_NONE) {
+        list_add_tail(&ci->list, list);
+    } else {
+        if (ci->flags == CLUSTER_FLAG_FRAG) {
+            VM_WARN_ON(!si->frag_cluster_nr[ci->order]);
+            si->frag_cluster_nr[ci->order]--;
+        }
+        list_move_tail(&ci->list, list);
+    }
+    ci->flags = new_flags;
+    if (new_flags == CLUSTER_FLAG_FRAG)
+        si->frag_cluster_nr[ci->order]++;
+}
+
 /* Add a cluster to discard list and schedule it to do discard */
 static void swap_cluster_schedule_discard(struct swap_info_struct *si,
         struct swap_cluster_info *ci)
@@ -447,10 +469,8 @@ static void swap_cluster_schedule_discard(struct swap_info_struct *si,
      */
     memset(si->swap_map + idx * SWAPFILE_CLUSTER,
             SWAP_MAP_BAD, SWAPFILE_CLUSTER);
-
-    VM_BUG_ON(ci->flags & CLUSTER_FLAG_FREE);
-    list_move_tail(&ci->list, &si->discard_clusters);
-    ci->flags = 0;
+    VM_BUG_ON(ci->flags == CLUSTER_FLAG_FREE);
+    move_cluster(si, ci, &si->discard_clusters, CLUSTER_FLAG_DISCARD);
     schedule_work(&si->discard_work);
 }
 
@@ -458,12 +478,7 @@ static void __free_cluster(struct swap_info_struct *si, struct swap_cluster_info
 {
     lockdep_assert_held(&si->lock);
     lockdep_assert_held(&ci->lock);
-
-    if (ci->flags)
-        list_move_tail(&ci->list, &si->free_clusters);
-    else
-        list_add_tail(&ci->list, &si->free_clusters);
-    ci->flags = CLUSTER_FLAG_FREE;
+    move_cluster(si, ci, &si->free_clusters, CLUSTER_FLAG_FREE);
     ci->order = 0;
 }
 
@@ -479,6 +494,8 @@ static void swap_do_scheduled_discard(struct swap_info_struct *si)
     while (!list_empty(&si->discard_clusters)) {
         ci = list_first_entry(&si->discard_clusters, struct swap_cluster_info, list);
         list_del(&ci->list);
+        /* Must clear flag when taking a cluster off-list */
+        ci->flags = CLUSTER_FLAG_NONE;
         idx = cluster_index(si, ci);
         spin_unlock(&si->lock);
@@ -519,9 +536,6 @@ static void free_cluster(struct swap_info_struct *si, struct swap_cluster_info *
     lockdep_assert_held(&si->lock);
     lockdep_assert_held(&ci->lock);
 
-    if (ci->flags & CLUSTER_FLAG_FRAG)
-        si->frag_cluster_nr[ci->order]--;
-
     /*
      * If the swap is discardable, prepare discard the cluster
      * instead of free it immediately. The cluster will be freed
@@ -573,13 +587,9 @@ static void dec_cluster_info_page(struct swap_info_struct *si,
         return;
     }
 
-    if (!(ci->flags & CLUSTER_FLAG_NONFULL)) {
-        VM_BUG_ON(ci->flags & CLUSTER_FLAG_FREE);
-        if (ci->flags & CLUSTER_FLAG_FRAG)
-            si->frag_cluster_nr[ci->order]--;
-        list_move_tail(&ci->list, &si->nonfull_clusters[ci->order]);
-        ci->flags = CLUSTER_FLAG_NONFULL;
-    }
+    if (ci->flags != CLUSTER_FLAG_NONFULL)
+        move_cluster(si, ci, &si->nonfull_clusters[ci->order],
+                 CLUSTER_FLAG_NONFULL);
 }
 
 static bool cluster_reclaim_range(struct swap_info_struct *si,
@@ -663,11 +673,13 @@ static bool cluster_alloc_range(struct swap_info_struct *si, struct swap_cluster
     if (!(si->flags & SWP_WRITEOK))
         return false;
 
+    VM_BUG_ON(ci->flags == CLUSTER_FLAG_NONE);
+    VM_BUG_ON(ci->flags > CLUSTER_FLAG_USABLE);
+
     if (cluster_is_free(ci)) {
-        if (nr_pages < SWAPFILE_CLUSTER) {
-            list_move_tail(&ci->list, &si->nonfull_clusters[order]);
-            ci->flags = CLUSTER_FLAG_NONFULL;
-        }
+        if (nr_pages < SWAPFILE_CLUSTER)
+            move_cluster(si, ci, &si->nonfull_clusters[order],
+                     CLUSTER_FLAG_NONFULL);
         ci->order = order;
     }
 
@@ -675,14 +687,8 @@ static bool cluster_alloc_range(struct swap_info_struct *si, struct swap_cluster
     swap_range_alloc(si, nr_pages);
     ci->count += nr_pages;
 
-    if (ci->count == SWAPFILE_CLUSTER) {
-        VM_BUG_ON(!(ci->flags &
-              (CLUSTER_FLAG_FREE | CLUSTER_FLAG_NONFULL | CLUSTER_FLAG_FRAG)));
-        if (ci->flags & CLUSTER_FLAG_FRAG)
-            si->frag_cluster_nr[ci->order]--;
-        list_move_tail(&ci->list, &si->full_clusters);
-        ci->flags = CLUSTER_FLAG_FULL;
-    }
+    if (ci->count == SWAPFILE_CLUSTER)
+        move_cluster(si, ci, &si->full_clusters, CLUSTER_FLAG_FULL);
 
     return true;
 }
@@ -821,9 +827,7 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
     while (!list_empty(&si->nonfull_clusters[order])) {
         ci = list_first_entry(&si->nonfull_clusters[order],
                       struct swap_cluster_info, list);
-        list_move_tail(&ci->list, &si->frag_clusters[order]);
-        ci->flags = CLUSTER_FLAG_FRAG;
-        si->frag_cluster_nr[order]++;
+        move_cluster(si, ci, &si->frag_clusters[order], CLUSTER_FLAG_FRAG);
         offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
                          &found, order, usage);
         frags++;
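To see what the enum-based scheme in this patch buys over the old bitmask, here is a condensed userspace sketch (not the kernel code itself; names only loosely mirror the patch, and the locking and actual lists are omitted): a cluster's flag now names exactly one state, so membership checks are plain '==' comparisons, and every state change funnels through a single wrapper, as move_cluster() does above.

#include <assert.h>
#include <stdio.h>

/* One enum value per list; 0 means temporarily off-list. */
enum cluster_flags {
    FLAG_NONE = 0,
    FLAG_FREE,
    FLAG_NONFULL,
    FLAG_FRAG,
    FLAG_FULL,
    FLAG_MAX,
};

struct cluster {
    enum cluster_flags flags;
};

/* Single choke point for all state changes, like move_cluster(). */
static void set_state(struct cluster *c, enum cluster_flags new_flags)
{
    assert(c->flags != new_flags); /* mirrors the patch's VM_WARN_ON() */
    /* the list_add_tail()/list_move_tail() calls would go here */
    c->flags = new_flags;
}

int main(void)
{
    struct cluster c = { FLAG_NONE };

    set_state(&c, FLAG_FREE);
    printf("free? %d\n", c.flags == FLAG_FREE);    /* prints 1 */
    set_state(&c, FLAG_NONFULL);
    printf("free? %d\n", c.flags == FLAG_FREE);    /* prints 0 */
    return 0;
}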
From patchwork Mon Jan 13 17:57:28 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
 Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
 Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 09/13] mm, swap: reduce contention on device lock
Date: Tue, 14 Jan 2025 01:57:28 +0800
Message-ID: <20250113175732.48099-10-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>
Reply-To: Kairui Song

From: Kairui Song

Currently, swap locking is mainly composed of two locks: the cluster lock
(ci->lock) and the device lock (si->lock). The cluster lock is much more
fine-grained, so it is best to use ci->lock instead of si->lock as much as
possible.

We have cleaned up other hard dependencies on si->lock, and following the
new cluster allocator design, most operations don't need to touch si->lock
at all. In practice, we only need to take si->lock when moving clusters
between lists.
To achieve this, this commit reworks the locking pattern of all si->lock
and ci->lock users, eliminates all usage of ci->lock inside si->lock, and
introduces a new design to avoid touching si->lock unless needed.

For minimal contention and easier understanding of the system, two ideas
are introduced with corresponding helpers: isolation and relocation.

- Clusters will be `isolated` from the list when iterating the list to
  search for an allocatable cluster. This ensures other CPUs won't walk
  into the same cluster easily, and it releases si->lock after acquiring
  ci->lock, making this the only place that has to handle the inversion of
  the two locks, and avoiding contention.

  Iterating the cluster list almost always moves the cluster
  (free -> nonfull, nonfull -> frag, frag -> frag tail), but the iterator
  doesn't know where the cluster should be moved to until scanning is
  done, so keeping the cluster off-list is a good option with low
  overhead. The off-list time window of a cluster is also minimal: in the
  worst case, one CPU will return the cluster only after scanning all 512
  entries on it -- time that was previously spent busy-waiting on a spin
  lock anyway.

  This is done with the new helper `isolate_lock_cluster` (see the sketch
  after this list).

- Clusters will be `relocated` after allocation or freeing, according to
  their usage count and status.

  Allocations no longer hold si->lock, and may drop ci->lock for reclaim,
  so the cluster could be moved anywhere while no lock is held. Besides,
  isolation clears all flags when it takes the cluster off the list (the
  flags must be in sync with the list status, so cluster users don't need
  to touch si->lock to check list status). So the cluster has to be
  relocated to the right list according to its usage after allocation or
  freeing. Relocation is optional: if the cluster flags indicate it's
  already on the right list, it skips touching the list or si->lock.

  This is done with `relocate_cluster` after allocation, or with
  `[partial_]free_cluster` after freeing.

This handles usage of all kinds of clusters in a clean way. Scanning and
allocation by iterating the cluster list is handled by
"isolate - scan/allocate - relocate". Scanning and allocation of per-CPU
clusters will only involve "scan/allocate - relocate", as each CPU knows
which cluster to lock and use. Freeing will only involve "relocate".

Each CPU will keep using its per-CPU cluster until all 512 entries are
consumed, and freeing also has to free 512 entries to trigger cluster
movement in the best case, so si->lock is rarely touched.

Testing by building the Linux kernel with defconfig showed a huge
improvement:

time make -j96 / 768M memcg, 4K pages, 10G ZRAM, on Intel 8255C:
Before:
  Sys time: 73578.30, Real time: 864.05
After: (-50.7% sys time, -44.8% real time)
  Sys time: 36227.49, Real time: 476.66

time make -j96 / 1152M memcg, 64K mTHP, 10G ZRAM, on Intel 8255C
(avg of 4 test runs):
Before:
  Sys time: 74044.85, Real time: 846.51
  hugepages-64kB/stats/swpout: 1735216
  hugepages-64kB/stats/swpout_fallback: 430333
After: (-40.4% sys time, -37.1% real time)
  Sys time: 44160.56, Real time: 532.07
  hugepages-64kB/stats/swpout: 1786288
  hugepages-64kB/stats/swpout_fallback: 243384

time make -j32 / 512M memcg, 4K pages, 5G ZRAM, on AMD 7K62:
Before:
  Sys time: 8098.21, Real time: 401.30
After: (-22.6% sys time, -12.8% real time)
  Sys time: 6265.02, Real time: 349.83

The allocation success rate also slightly improved as we sanitized the
usage of clusters with the newly defined helpers; previously, dropping
si->lock or ci->lock during a scan would cause cluster order shuffling.
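The isolation idea above can be condensed into a small userspace sketch (hypothetical names except isolate/relocate; pthread mutexes stand in for si->lock and ci->lock, and a flag stands in for the real lists). The point is the lock order: the list lock is taken only to detach or reattach a cluster, and the cluster lock is taken with a trylock while the list lock is held, so the inversion of the two locks can never block or deadlock.

#include <pthread.h>
#include <stdio.h>

struct cluster {
    pthread_mutex_t lock;   /* plays the role of ci->lock */
    int on_list;            /* stands in for the real list membership */
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER; /* si->lock */
static struct cluster clusters[4];

/* Isolate: detach the first uncontended cluster, keeping its lock held. */
static struct cluster *isolate(void)
{
    struct cluster *ret = NULL;

    pthread_mutex_lock(&list_lock);
    for (int i = 0; i < 4; i++) {
        if (!clusters[i].on_list)
            continue;
        /* trylock never blocks while list_lock is held, so no deadlock */
        if (pthread_mutex_trylock(&clusters[i].lock))
            continue;
        clusters[i].on_list = 0;    /* off-list while we scan it */
        ret = &clusters[i];
        break;
    }
    pthread_mutex_unlock(&list_lock);
    return ret;
}

/* Relocate: put the cluster back on the right list after use. */
static void relocate(struct cluster *c)
{
    pthread_mutex_lock(&list_lock);
    c->on_list = 1;
    pthread_mutex_unlock(&list_lock);
    pthread_mutex_unlock(&c->lock);
}

int main(void)
{
    for (int i = 0; i < 4; i++) {
        pthread_mutex_init(&clusters[i].lock, NULL);
        clusters[i].on_list = 1;
    }

    struct cluster *c = isolate();
    if (c) {
        printf("isolated cluster %ld\n", (long)(c - clusters));
        relocate(c);
    }
    return 0;
}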
Suggested-by: Chris Li
Signed-off-by: Kairui Song
---
 include/linux/swap.h |   3 +-
 mm/swapfile.c        | 432 ++++++++++++++++++++++++-------------------
 2 files changed, 247 insertions(+), 188 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 339d7f0192ff..c4ff31cb6bde 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -291,6 +291,7 @@ enum swap_cluster_flags {
  * throughput.
  */
 struct percpu_cluster {
+    local_lock_t lock; /* Protect the percpu_cluster above */
     unsigned int next[SWAP_NR_ORDERS]; /* Likely next allocation offset */
 };
 
@@ -313,7 +314,7 @@ struct swap_info_struct {
     /* list of cluster that contains at least one free slot */
     struct list_head frag_clusters[SWAP_NR_ORDERS];
     /* list of cluster that are fragmented or contented */
-    unsigned int frag_cluster_nr[SWAP_NR_ORDERS];
+    atomic_long_t frag_cluster_nr[SWAP_NR_ORDERS];
     unsigned int pages; /* total of usable pages of swap */
     atomic_long_t inuse_pages; /* number of those currently in use */
     struct percpu_cluster __percpu *percpu_cluster; /* per cpu's swap location */

diff --git a/mm/swapfile.c b/mm/swapfile.c
index b754c9e16c3b..489ac6997a0c 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -261,12 +261,10 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
     folio_ref_sub(folio, nr_pages);
     folio_set_dirty(folio);
 
-    spin_lock(&si->lock);
     /* Only sinple page folio can be backed by zswap */
     if (nr_pages == 1)
         zswap_invalidate(entry);
     swap_entry_range_free(si, entry, nr_pages);
-    spin_unlock(&si->lock);
     ret = nr_pages;
 out_unlock:
     folio_unlock(folio);
@@ -401,9 +399,23 @@ static void discard_swap_cluster(struct swap_info_struct *si,
 #endif
 #define LATENCY_LIMIT 256
 
-static inline bool cluster_is_free(struct swap_cluster_info *info)
+static inline bool cluster_is_empty(struct swap_cluster_info *info)
+{
+    return info->count == 0;
+}
+
+static inline bool cluster_is_discard(struct swap_cluster_info *info)
+{
+    return info->flags == CLUSTER_FLAG_DISCARD;
+}
+
+static inline bool cluster_is_usable(struct swap_cluster_info *ci, int order)
 {
-    return info->flags == CLUSTER_FLAG_FREE;
+    if (unlikely(ci->flags > CLUSTER_FLAG_USABLE))
+        return false;
+    if (!order)
+        return true;
+    return cluster_is_empty(ci) || order == ci->order;
 }
 
 static inline unsigned int cluster_index(struct swap_info_struct *si,
@@ -441,19 +453,20 @@ static void move_cluster(struct swap_info_struct *si,
     VM_WARN_ON(ci->flags == new_flags);
 
     BUILD_BUG_ON(1 << sizeof(ci->flags) * BITS_PER_BYTE < CLUSTER_FLAG_MAX);
+    lockdep_assert_held(&ci->lock);
 
-    if (ci->flags == CLUSTER_FLAG_NONE) {
+    spin_lock(&si->lock);
+    if (ci->flags == CLUSTER_FLAG_NONE)
         list_add_tail(&ci->list, list);
-    } else {
-        if (ci->flags == CLUSTER_FLAG_FRAG) {
-            VM_WARN_ON(!si->frag_cluster_nr[ci->order]);
-            si->frag_cluster_nr[ci->order]--;
-        }
+    else
         list_move_tail(&ci->list, list);
-    }
+    spin_unlock(&si->lock);
+
+    if (ci->flags == CLUSTER_FLAG_FRAG)
+        atomic_long_dec(&si->frag_cluster_nr[ci->order]);
+    else if (new_flags == CLUSTER_FLAG_FRAG)
+        atomic_long_inc(&si->frag_cluster_nr[ci->order]);
     ci->flags = new_flags;
-    if (new_flags == CLUSTER_FLAG_FRAG)
-        si->frag_cluster_nr[ci->order]++;
 }
 
 /* Add a cluster to discard list and schedule it to do discard */
@@ -476,39 +489,91 @@ static void swap_cluster_schedule_discard(struct swap_info_struct *si,
 static void __free_cluster(struct swap_info_struct *si, struct swap_cluster_info *ci)
 {
-    lockdep_assert_held(&si->lock);
     lockdep_assert_held(&ci->lock);
     move_cluster(si, ci, &si->free_clusters, CLUSTER_FLAG_FREE);
     ci->order = 0;
 }
 
+/*
+ * Isolate and lock the first cluster that is not contented on a list,
+ * clean its flag before taken off-list. Cluster flag must be in sync
+ * with list status, so cluster updaters can always know the cluster
+ * list status without touching si lock.
+ *
+ * Note it's possible that all clusters on a list are contented so
+ * this returns NULL for an non-empty list.
+ */
+static struct swap_cluster_info *isolate_lock_cluster(
+        struct swap_info_struct *si, struct list_head *list)
+{
+    struct swap_cluster_info *ci, *ret = NULL;
+
+    spin_lock(&si->lock);
+
+    if (unlikely(!(si->flags & SWP_WRITEOK)))
+        goto out;
+
+    list_for_each_entry(ci, list, list) {
+        if (!spin_trylock(&ci->lock))
+            continue;
+
+        /* We may only isolate and clear flags of following lists */
+        VM_BUG_ON(!ci->flags);
+        VM_BUG_ON(ci->flags > CLUSTER_FLAG_USABLE &&
+              ci->flags != CLUSTER_FLAG_FULL);
+
+        list_del(&ci->list);
+        ci->flags = CLUSTER_FLAG_NONE;
+        ret = ci;
+        break;
+    }
+out:
+    spin_unlock(&si->lock);
+
+    return ret;
+}
+
 /*
  * Doing discard actually. After a cluster discard is finished, the cluster
- * will be added to free cluster list. caller should hold si->lock.
-*/
-static void swap_do_scheduled_discard(struct swap_info_struct *si)
+ * will be added to free cluster list. Discard cluster is a bit special as
+ * they don't participate in allocation or reclaim, so clusters marked as
+ * CLUSTER_FLAG_DISCARD must remain off-list or on discard list.
+ */
+static bool swap_do_scheduled_discard(struct swap_info_struct *si)
 {
     struct swap_cluster_info *ci;
+    bool ret = false;
     unsigned int idx;
 
+    spin_lock(&si->lock);
     while (!list_empty(&si->discard_clusters)) {
         ci = list_first_entry(&si->discard_clusters, struct swap_cluster_info, list);
+        /*
+         * Delete the cluster from list to prepare for discard, but keep
+         * the CLUSTER_FLAG_DISCARD flag, there could be percpu_cluster
+         * pointing to it, or ran into by relocate_cluster.
+         */
         list_del(&ci->list);
-        /* Must clear flag when taking a cluster off-list */
-        ci->flags = CLUSTER_FLAG_NONE;
         idx = cluster_index(si, ci);
         spin_unlock(&si->lock);
-
         discard_swap_cluster(si, idx * SWAPFILE_CLUSTER, SWAPFILE_CLUSTER);
 
-        spin_lock(&si->lock);
         spin_lock(&ci->lock);
-        __free_cluster(si, ci);
+        /*
+         * Discard is done, clear its flags as it's off-list, then
+         * return the cluster to allocation list.
+         */
+        ci->flags = CLUSTER_FLAG_NONE;
         memset(si->swap_map + idx * SWAPFILE_CLUSTER,
                 0, SWAPFILE_CLUSTER);
+        __free_cluster(si, ci);
         spin_unlock(&ci->lock);
+        ret = true;
+        spin_lock(&si->lock);
     }
+    spin_unlock(&si->lock);
+    return ret;
 }
 
 static void swap_discard_work(struct work_struct *work)
@@ -517,9 +582,7 @@ static void swap_discard_work(struct work_struct *work)
 
     si = container_of(work, struct swap_info_struct, discard_work);
 
-    spin_lock(&si->lock);
     swap_do_scheduled_discard(si);
-    spin_unlock(&si->lock);
 }
 
 static void swap_users_ref_free(struct percpu_ref *ref)
@@ -530,10 +593,14 @@ static void swap_users_ref_free(struct percpu_ref *ref)
     complete(&si->comp);
 }
 
+/*
+ * Must be called after freeing if ci->count == 0, moves the cluster to free
+ * or discard list.
+ */
 static void free_cluster(struct swap_info_struct *si, struct swap_cluster_info *ci)
 {
     VM_BUG_ON(ci->count != 0);
-    lockdep_assert_held(&si->lock);
+    VM_BUG_ON(ci->flags == CLUSTER_FLAG_FREE);
     lockdep_assert_held(&ci->lock);
 
     /*
@@ -550,6 +617,48 @@ static void free_cluster(struct swap_info_struct *si, struct swap_cluster_info *
     __free_cluster(si, ci);
 }
 
+/*
+ * Must be called after freeing if ci->count != 0, moves the cluster to
+ * nonfull list.
+ */
+static void partial_free_cluster(struct swap_info_struct *si,
+                 struct swap_cluster_info *ci)
+{
+    VM_BUG_ON(!ci->count || ci->count == SWAPFILE_CLUSTER);
+    lockdep_assert_held(&ci->lock);
+
+    if (ci->flags != CLUSTER_FLAG_NONFULL)
+        move_cluster(si, ci, &si->nonfull_clusters[ci->order],
+                 CLUSTER_FLAG_NONFULL);
+}
+
+/*
+ * Must be called after allocation, moves the cluster to full or frag list.
+ * Note: allocation doesn't acquire si lock, and may drop the ci lock for
+ * reclaim, so the cluster could be any where when called.
+ */
+static void relocate_cluster(struct swap_info_struct *si,
+                 struct swap_cluster_info *ci)
+{
+    lockdep_assert_held(&ci->lock);
+
+    /* Discard cluster must remain off-list or on discard list */
+    if (cluster_is_discard(ci))
+        return;
+
+    if (!ci->count) {
+        free_cluster(si, ci);
+    } else if (ci->count != SWAPFILE_CLUSTER) {
+        if (ci->flags != CLUSTER_FLAG_FRAG)
+            move_cluster(si, ci, &si->frag_clusters[ci->order],
+                     CLUSTER_FLAG_FRAG);
+    } else {
+        if (ci->flags != CLUSTER_FLAG_FULL)
+            move_cluster(si, ci, &si->full_clusters,
+                     CLUSTER_FLAG_FULL);
+    }
+}
+
 /*
  * The cluster corresponding to page_nr will be used. The cluster will not be
  * added to free cluster list and its usage counter will be increased by 1.
@@ -568,30 +677,6 @@ static void inc_cluster_info_page(struct swap_info_struct *si,
     VM_BUG_ON(ci->flags);
 }
 
-/*
- * The cluster ci decreases @nr_pages usage. If the usage counter becomes 0,
- * which means no page in the cluster is in use, we can optionally discard
- * the cluster and add it to free cluster list.
- */
-static void dec_cluster_info_page(struct swap_info_struct *si,
-                  struct swap_cluster_info *ci, int nr_pages)
-{
-    VM_BUG_ON(ci->count < nr_pages);
-    VM_BUG_ON(cluster_is_free(ci));
-    lockdep_assert_held(&si->lock);
-    lockdep_assert_held(&ci->lock);
-    ci->count -= nr_pages;
-
-    if (!ci->count) {
-        free_cluster(si, ci);
-        return;
-    }
-
-    if (ci->flags != CLUSTER_FLAG_NONFULL)
-        move_cluster(si, ci, &si->nonfull_clusters[ci->order],
-                 CLUSTER_FLAG_NONFULL);
-}
-
 static bool cluster_reclaim_range(struct swap_info_struct *si,
                   struct swap_cluster_info *ci,
                   unsigned long start, unsigned long end)
@@ -601,8 +686,6 @@ static bool cluster_reclaim_range(struct swap_info_struct *si,
     int nr_reclaim;
 
     spin_unlock(&ci->lock);
-    spin_unlock(&si->lock);
-
     do {
         switch (READ_ONCE(map[offset])) {
         case 0:
@@ -620,9 +703,7 @@ static bool cluster_reclaim_range(struct swap_info_struct *si,
         }
     } while (offset < end);
 out:
-    spin_lock(&si->lock);
     spin_lock(&ci->lock);
-
     /*
      * Recheck the range no matter reclaim succeeded or not, the slot
      * could have been be freed while we are not holding the lock.
@@ -636,11 +717,11 @@ static bool cluster_reclaim_range(struct swap_info_struct *si,
 
 static bool cluster_scan_range(struct swap_info_struct *si,
                    struct swap_cluster_info *ci,
-                   unsigned long start, unsigned int nr_pages)
+                   unsigned long start, unsigned int nr_pages,
+                   bool *need_reclaim)
 {
     unsigned long offset, end = start + nr_pages;
     unsigned char *map = si->swap_map;
-    bool need_reclaim = false;
 
     for (offset = start; offset < end; offset++) {
         switch (READ_ONCE(map[offset])) {
@@ -649,16 +730,13 @@ static bool cluster_scan_range(struct swap_info_struct *si,
         case SWAP_HAS_CACHE:
             if (!vm_swap_full())
                 return false;
-            need_reclaim = true;
+            *need_reclaim = true;
             continue;
         default:
             return false;
         }
     }
 
-    if (need_reclaim)
-        return cluster_reclaim_range(si, ci, start, end);
-
     return true;
 }
 
@@ -673,23 +751,17 @@ static bool cluster_alloc_range(struct swap_info_struct *si, struct swap_cluster
     if (!(si->flags & SWP_WRITEOK))
         return false;
 
-    VM_BUG_ON(ci->flags == CLUSTER_FLAG_NONE);
-    VM_BUG_ON(ci->flags > CLUSTER_FLAG_USABLE);
-
-    if (cluster_is_free(ci)) {
-        if (nr_pages < SWAPFILE_CLUSTER)
-            move_cluster(si, ci, &si->nonfull_clusters[order],
-                     CLUSTER_FLAG_NONFULL);
+    /*
+     * The first allocation in a cluster makes the
+     * cluster exclusive to this order
+     */
+    if (cluster_is_empty(ci))
         ci->order = order;
-    }
 
     memset(si->swap_map + start, usage, nr_pages);
     swap_range_alloc(si, nr_pages);
     ci->count += nr_pages;
 
     return true;
 }
 
@@ -700,37 +772,55 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si, unsigne
     unsigned long start = offset & ~(SWAPFILE_CLUSTER - 1);
     unsigned long end = min(start + SWAPFILE_CLUSTER, si->max);
     unsigned int nr_pages = 1 << order;
+    bool need_reclaim, ret;
     struct swap_cluster_info *ci;
 
-    if (end < nr_pages)
-        return SWAP_NEXT_INVALID;
-    end -= nr_pages;
+    ci = &si->cluster_info[offset / SWAPFILE_CLUSTER];
+    lockdep_assert_held(&ci->lock);
 
-    ci = lock_cluster(si, offset);
-    if (ci->count + nr_pages > SWAPFILE_CLUSTER) {
+    if (end < nr_pages || ci->count + nr_pages > SWAPFILE_CLUSTER) {
         offset = SWAP_NEXT_INVALID;
-        goto done;
+        goto out;
     }
 
-    while (offset <= end) {
-        if (cluster_scan_range(si, ci, offset, nr_pages)) {
-            if (!cluster_alloc_range(si, ci, offset, usage, order)) {
-                offset = SWAP_NEXT_INVALID;
-                goto done;
-            }
-            *foundp = offset;
-            if (ci->count == SWAPFILE_CLUSTER) {
+    for (end -= nr_pages; offset <= end; offset += nr_pages) {
+        need_reclaim = false;
+        if (!cluster_scan_range(si, ci, offset, nr_pages, &need_reclaim))
+            continue;
+        if (need_reclaim) {
+            ret = cluster_reclaim_range(si, ci, start, end);
+            /*
+             * Reclaim drops ci->lock and cluster could be used
+             * by another order. Not checking flag as off-list
+             * cluster has no flag set, and change of list
+             * won't cause fragmentation.
+             */
+            if (!cluster_is_usable(ci, order)) {
                 offset = SWAP_NEXT_INVALID;
-                goto done;
+                goto out;
             }
-            offset += nr_pages;
-            break;
+            if (cluster_is_empty(ci))
+                offset = start;
+            /* Reclaim failed but cluster is usable, try next */
+            if (!ret)
+                continue;
+        }
+        if (!cluster_alloc_range(si, ci, offset, usage, order)) {
+            offset = SWAP_NEXT_INVALID;
+            goto out;
+        }
+        *foundp = offset;
+        if (ci->count == SWAPFILE_CLUSTER) {
+            offset = SWAP_NEXT_INVALID;
+            goto out;
         }
         offset += nr_pages;
+        break;
     }
     if (offset > end)
         offset = SWAP_NEXT_INVALID;
-done:
+out:
+    relocate_cluster(si, ci);
     unlock_cluster(ci);
     return offset;
 }
@@ -747,18 +837,17 @@ static void swap_reclaim_full_clusters(struct swap_info_struct *si, bool force)
     if (force)
         to_scan = swap_usage_in_pages(si) / SWAPFILE_CLUSTER;
 
-    while (!list_empty(&si->full_clusters)) {
-        ci = list_first_entry(&si->full_clusters, struct swap_cluster_info, list);
-        list_move_tail(&ci->list, &si->full_clusters);
+    while ((ci = isolate_lock_cluster(si, &si->full_clusters))) {
         offset = cluster_offset(si, ci);
         end = min(si->max, offset + SWAPFILE_CLUSTER);
         to_scan--;
 
-        spin_unlock(&si->lock);
         while (offset < end) {
             if (READ_ONCE(map[offset]) == SWAP_HAS_CACHE) {
+                spin_unlock(&ci->lock);
                 nr_reclaim = __try_to_reclaim_swap(si, offset,
                                    TTRS_ANYWAY | TTRS_DIRECT);
+                spin_lock(&ci->lock);
                 if (nr_reclaim) {
                     offset += abs(nr_reclaim);
                     continue;
@@ -766,8 +855,8 @@ static void swap_reclaim_full_clusters(struct swap_info_struct *si, bool force)
             }
             offset++;
         }
-        spin_lock(&si->lock);
 
+        unlock_cluster(ci);
         if (to_scan <= 0)
             break;
     }
@@ -779,9 +868,7 @@ static void swap_reclaim_work(struct work_struct *work)
 
     si = container_of(work, struct swap_info_struct, reclaim_work);
 
-    spin_lock(&si->lock);
     swap_reclaim_full_clusters(si, true);
-    spin_unlock(&si->lock);
 }
 
 /*
@@ -792,29 +879,34 @@ static void swap_reclaim_work(struct work_struct *work)
 static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int order,
                           unsigned char usage)
 {
-    struct percpu_cluster *cluster;
     struct swap_cluster_info *ci;
     unsigned int offset, found = 0;
 
-new_cluster:
-    lockdep_assert_held(&si->lock);
-    cluster = this_cpu_ptr(si->percpu_cluster);
-    offset = cluster->next[order];
+    /* Fast path using per CPU cluster */
+    local_lock(&si->percpu_cluster->lock);
+    offset = __this_cpu_read(si->percpu_cluster->next[order]);
     if (offset) {
-        offset = alloc_swap_scan_cluster(si, offset, &found, order, usage);
+        ci = lock_cluster(si, offset);
+        /* Cluster could have been used by another order */
+        if (cluster_is_usable(ci, order)) {
+            if (cluster_is_empty(ci))
+                offset = cluster_offset(si, ci);
+            offset = alloc_swap_scan_cluster(si, offset, &found,
+                             order, usage);
+        } else {
+            unlock_cluster(ci);
+        }
         if (found)
             goto done;
     }
 
-    if (!list_empty(&si->free_clusters)) {
-        ci = list_first_entry(&si->free_clusters, struct swap_cluster_info, list);
-        offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci), &found, order, usage);
-        /*
-         * Either we didn't touch the cluster due to swapoff,
-         * or the allocation must success.
-         */
-        VM_BUG_ON((si->flags & SWP_WRITEOK) && !found);
-        goto done;
+new_cluster:
+    ci = isolate_lock_cluster(si, &si->free_clusters);
+    if (ci) {
+        offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
+                         &found, order, usage);
+        if (found)
+            goto done;
     }
 
     /* Try reclaim from full clusters if free clusters list is drained */
@@ -822,49 +914,42 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
         swap_reclaim_full_clusters(si, false);
 
     if (order < PMD_ORDER) {
-        unsigned int frags = 0;
+        unsigned int frags = 0, frags_existing;
 
-        while (!list_empty(&si->nonfull_clusters[order])) {
-            ci = list_first_entry(&si->nonfull_clusters[order],
-                          struct swap_cluster_info, list);
-            move_cluster(si, ci, &si->frag_clusters[order], CLUSTER_FLAG_FRAG);
+        while ((ci = isolate_lock_cluster(si, &si->nonfull_clusters[order]))) {
             offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
                              &found, order, usage);
-            frags++;
             if (found)
                 goto done;
+            /* Clusters failed to allocate are moved to frag_clusters */
+            frags++;
         }
 
-        /*
-         * Nonfull clusters are moved to frag tail if we reached
-         * here, count them too, don't over scan the frag list.
-         */
-        while (frags < si->frag_cluster_nr[order]) {
-            ci = list_first_entry(&si->frag_clusters[order],
-                          struct swap_cluster_info, list);
+        frags_existing = atomic_long_read(&si->frag_cluster_nr[order]);
+        while (frags < frags_existing &&
+               (ci = isolate_lock_cluster(si, &si->frag_clusters[order]))) {
+            atomic_long_dec(&si->frag_cluster_nr[order]);
             /*
-             * Rotate the frag list to iterate, they were all failing
-             * high order allocation or moved here due to per-CPU usage,
-             * this help keeping usable cluster ahead.
+             * Rotate the frag list to iterate, they were all
+             * failing high order allocation or moved here due to
+             * per-CPU usage, but they could contain newly released
+             * reclaimable (eg. lazy-freed swap cache) slots.
             */
-            list_move_tail(&ci->list, &si->frag_clusters[order]);
             offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
                              &found, order, usage);
-            frags++;
             if (found)
                 goto done;
+            frags++;
         }
     }
 
-    if (!list_empty(&si->discard_clusters)) {
-        /*
-         * we don't have free cluster but have some clusters in
-         * discarding, do discard now and reclaim them, then
-         * reread cluster_next_cpu since we dropped si->lock
-         */
-        swap_do_scheduled_discard(si);
+    /*
+     * We don't have free cluster but have some clusters in
+     * discarding, do discard now and reclaim them, then
+     * reread cluster_next_cpu since we dropped si->lock
+     */
+    if ((si->flags & SWP_PAGE_DISCARD) && swap_do_scheduled_discard(si))
         goto new_cluster;
-    }
 
     if (order)
         goto done;
@@ -875,26 +960,25 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
      * Clusters here have at least one usable slots and can't fail order 0
      * allocation, but reclaim may drop si->lock and race with another user.
      */
-    while (!list_empty(&si->frag_clusters[o])) {
-        ci = list_first_entry(&si->frag_clusters[o],
-                      struct swap_cluster_info, list);
+    while ((ci = isolate_lock_cluster(si, &si->frag_clusters[o]))) {
+        atomic_long_dec(&si->frag_cluster_nr[o]);
         offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
-                         &found, 0, usage);
+                         &found, order, usage);
         if (found)
             goto done;
     }
 
-    while (!list_empty(&si->nonfull_clusters[o])) {
-        ci = list_first_entry(&si->nonfull_clusters[o],
-                      struct swap_cluster_info, list);
+    while ((ci = isolate_lock_cluster(si, &si->nonfull_clusters[o]))) {
        offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
-                         &found, 0, usage);
+                         &found, order, usage);
         if (found)
             goto done;
     }
 }
done:
-    cluster->next[order] = offset;
+    __this_cpu_write(si->percpu_cluster->next[order], offset);
+    local_unlock(&si->percpu_cluster->lock);
+
     return found;
 }
 
@@ -1158,14 +1242,11 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
             plist_requeue(&si->avail_lists[node], &swap_avail_heads[node]);
             spin_unlock(&swap_avail_lock);
             if (get_swap_device_info(si)) {
-                spin_lock(&si->lock);
                 n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
                         n_goal, swp_entries, order);
-                spin_unlock(&si->lock);
                 put_swap_device(si);
                 if (n_ret || size > 1)
                     goto check_out;
-                cond_resched();
             }
 
             spin_lock(&swap_avail_lock);
@@ -1378,9 +1459,7 @@ static bool __swap_entries_free(struct swap_info_struct *si,
     if (!has_cache) {
         for (i = 0; i < nr; i++)
             zswap_invalidate(swp_entry(si->type, offset + i));
-        spin_lock(&si->lock);
         swap_entry_range_free(si, entry, nr);
-        spin_unlock(&si->lock);
     }
     return has_cache;
 
@@ -1409,16 +1488,27 @@ static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry
     unsigned char *map_end = map + nr_pages;
     struct swap_cluster_info *ci;
 
+    /* It should never free entries across different clusters */
+    VM_BUG_ON((offset / SWAPFILE_CLUSTER) != ((offset + nr_pages - 1) / SWAPFILE_CLUSTER));
+
     ci = lock_cluster(si, offset);
+    VM_BUG_ON(cluster_is_empty(ci));
+    VM_BUG_ON(ci->count < nr_pages);
+
+    ci->count -= nr_pages;
     do {
         VM_BUG_ON(*map != SWAP_HAS_CACHE);
         *map = 0;
     } while (++map < map_end);
-    dec_cluster_info_page(si, ci, nr_pages);
-    unlock_cluster(ci);
 
     mem_cgroup_uncharge_swap(entry, nr_pages);
     swap_range_free(si, offset, nr_pages);
+
+    if (!ci->count)
+        free_cluster(si, ci);
+    else
+        partial_free_cluster(si, ci);
+    unlock_cluster(ci);
 }
 
 static void cluster_swap_free_nr(struct swap_info_struct *si,
@@ -1490,9 +1580,7 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
     ci = lock_cluster(si, offset);
     if (size > 1 && swap_is_has_cache(si, offset, size)) {
         unlock_cluster(ci);
-        spin_lock(&si->lock);
         swap_entry_range_free(si, entry, size);
-        spin_unlock(&si->lock);
         return;
     }
     for (int i = 0; i < size; i++, entry.val++) {
@@ -1507,46 +1595,19 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
     unlock_cluster(ci);
 }
 
-static int swp_entry_cmp(const void *ent1, const void *ent2)
-{
-    const swp_entry_t *e1 = ent1, *e2 = ent2;
-
-    return (int)swp_type(*e1) - (int)swp_type(*e2);
-}
-
 void swapcache_free_entries(swp_entry_t *entries, int n)
 {
-    struct swap_info_struct *si, *prev;
     int i;
+    struct swap_info_struct *si = NULL;
 
     if (n <= 0)
         return;
-    prev = NULL;
-    si = NULL;
-
-    /*
-     * Sort swap entries by swap device, so each lock is only taken once.
-     * nr_swapfiles isn't absolutely correct, but the overhead of sort() is
-     * so low that it isn't necessary to optimize further.
-     */
-    if (nr_swapfiles > 1)
-        sort(entries, n, sizeof(entries[0]), swp_entry_cmp, NULL);
 
     for (i = 0; i < n; ++i) {
         si = _swap_info_get(entries[i]);
-
-        if (si != prev) {
-            if (prev != NULL)
-                spin_unlock(&prev->lock);
-            if (si != NULL)
-                spin_lock(&si->lock);
-        }
         if (si)
             swap_entry_range_free(si, entries[i], 1);
-        prev = si;
     }
-    if (si)
-        spin_unlock(&si->lock);
 }
 
 int __swap_count(swp_entry_t entry)
@@ -1799,10 +1860,8 @@ swp_entry_t get_swap_page_of_type(int type)
 
     /* This is called for allocating swap entry, not cache */
     if (get_swap_device_info(si)) {
-        spin_lock(&si->lock);
         if ((si->flags & SWP_WRITEOK) && scan_swap_map_slots(si, 1, 1, &entry, 0))
             atomic_long_dec(&nr_swap_pages);
-        spin_unlock(&si->lock);
         put_swap_device(si);
     }
fail:
@@ -3142,6 +3201,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
         cluster = per_cpu_ptr(si->percpu_cluster, cpu);
         for (i = 0; i < SWAP_NR_ORDERS; i++)
             cluster->next[i] = SWAP_NEXT_INVALID;
+        local_lock_init(&cluster->lock);
     }
 
     /*
@@ -3165,7 +3225,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
     for (i = 0; i < SWAP_NR_ORDERS; i++) {
         INIT_LIST_HEAD(&si->nonfull_clusters[i]);
         INIT_LIST_HEAD(&si->frag_clusters[i]);
-        si->frag_cluster_nr[i] = 0;
+        atomic_long_set(&si->frag_cluster_nr[i], 0);
     }
 
     /*
@@ -3647,7 +3707,6 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
         */
        goto outer;
     }
-    spin_lock(&si->lock);
 
     offset = swp_offset(entry);
@@ -3712,7 +3771,6 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
     spin_unlock(&si->cont_lock);
out:
     unlock_cluster(ci);
-    spin_unlock(&si->lock);
     put_swap_device(si);
outer:
     if (page)
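A detail worth highlighting from the diff above: the per-CPU cluster is now guarded by a local_lock instead of si->lock, so the allocation fast path never touches the shared device lock. Below is a rough userspace analogue of that fast path, with thread-local storage standing in for per-CPU data (all names here are hypothetical, and the slow path is reduced to a stub):

#include <stdio.h>

#define INVALID 0

/* Stand-in for percpu_cluster: one cached next-offset hint per order. */
static _Thread_local unsigned int next_hint[2];

static unsigned int alloc_slot(int order)
{
    unsigned int offset = next_hint[order];

    /*
     * Fast path: reuse the cluster this thread allocated from last time.
     * In the kernel this runs under local_lock(), which keeps the task on
     * its own CPU data; no cross-CPU lock is taken.
     */
    if (offset != INVALID) {
        next_hint[order] = offset + (1u << order);
        return offset;
    }

    /* Slow path stub: isolate a fresh cluster from a shared list. */
    offset = 512;   /* pretend a fresh cluster starts at offset 512 */
    next_hint[order] = offset + (1u << order);
    return offset;
}

int main(void)
{
    printf("%u %u %u\n", alloc_slot(0), alloc_slot(0), alloc_slot(1));
    return 0;
}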
From patchwork Mon Jan 13 17:57:29 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
 Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
 Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 10/13] mm, swap: simplify percpu cluster updating
Date: Tue, 14 Jan 2025 01:57:29 +0800
Message-ID: <20250113175732.48099-11-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>
Reply-To: Kairui Song

From: Kairui Song

Instead of using a return argument, we can simply store the next cluster
offset in a fixed percpu location, which reduces stack usage and simplifies
the function:

Object size:
./scripts/bloat-o-meter mm/swapfile.o mm/swapfile.o.new
add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-271 (-271)
Function                   old     new   delta
get_swap_pages            2847    2733    -114
alloc_swap_scan_cluster    894     737    -157
Total: Before=30833, After=30562, chg -0.88%

Stack usage:
Before: swapfile.c:1190:5: get_swap_pages    240 static
After:  swapfile.c:1185:5: get_swap_pages    216 static

Signed-off-by: Kairui Song
---
 include/linux/swap.h |  4 +--
 mm/swapfile.c        | 66 +++++++++++++++++++-------------------------
 2 files changed, 31 insertions(+), 39 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index c4ff31cb6bde..4c1d2e69689f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -275,9 +275,9 @@ enum swap_cluster_flags {
  * The first page in the swap file is the swap header, which is always marked
This also prevents the * cluster to which it belongs being marked free. Therefore 0 is safe to use as - * a sentinel to indicate next is not valid in percpu_cluster. + * a sentinel to indicate an entry is not valid. */ -#define SWAP_NEXT_INVALID 0 +#define SWAP_ENTRY_INVALID 0 #ifdef CONFIG_THP_SWAP #define SWAP_NR_ORDERS (PMD_ORDER + 1) diff --git a/mm/swapfile.c b/mm/swapfile.c index 489ac6997a0c..6da2f3aa55fb 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -765,23 +765,23 @@ static bool cluster_alloc_range(struct swap_info_struct *si, struct swap_cluster return true; } -static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si, unsigned long offset, - unsigned int *foundp, unsigned int order, +/* Try use a new cluster for current CPU and allocate from it. */ +static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si, + struct swap_cluster_info *ci, + unsigned long offset, + unsigned int order, unsigned char usage) { - unsigned long start = offset & ~(SWAPFILE_CLUSTER - 1); + unsigned int next = SWAP_ENTRY_INVALID, found = SWAP_ENTRY_INVALID; + unsigned long start = ALIGN_DOWN(offset, SWAPFILE_CLUSTER); unsigned long end = min(start + SWAPFILE_CLUSTER, si->max); unsigned int nr_pages = 1 << order; bool need_reclaim, ret; - struct swap_cluster_info *ci; - ci = &si->cluster_info[offset / SWAPFILE_CLUSTER]; lockdep_assert_held(&ci->lock); - if (end < nr_pages || ci->count + nr_pages > SWAPFILE_CLUSTER) { - offset = SWAP_NEXT_INVALID; + if (end < nr_pages || ci->count + nr_pages > SWAPFILE_CLUSTER) goto out; - } for (end -= nr_pages; offset <= end; offset += nr_pages) { need_reclaim = false; @@ -795,34 +795,27 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si, unsigne * cluster has no flag set, and change of list * won't cause fragmentation. 
             */
-            if (!cluster_is_usable(ci, order)) {
-                offset = SWAP_NEXT_INVALID;
+            if (!cluster_is_usable(ci, order))
                 goto out;
-            }
             if (cluster_is_empty(ci))
                 offset = start;
             /* Reclaim failed but cluster is usable, try next */
             if (!ret)
                 continue;
         }
-        if (!cluster_alloc_range(si, ci, offset, usage, order)) {
-            offset = SWAP_NEXT_INVALID;
-            goto out;
-        }
-        *foundp = offset;
-        if (ci->count == SWAPFILE_CLUSTER) {
-            offset = SWAP_NEXT_INVALID;
-            goto out;
-        }
+        if (!cluster_alloc_range(si, ci, offset, usage, order))
+            break;
+        found = offset;
         offset += nr_pages;
+        if (ci->count < SWAPFILE_CLUSTER && offset <= end)
+            next = offset;
         break;
     }
-    if (offset > end)
-        offset = SWAP_NEXT_INVALID;
out:
     relocate_cluster(si, ci);
     unlock_cluster(ci);
-    return offset;
+    __this_cpu_write(si->percpu_cluster->next[order], next);
+    return found;
 }
 
 /* Return true if reclaimed a whole cluster */
@@ -891,8 +884,8 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
         if (cluster_is_usable(ci, order)) {
             if (cluster_is_empty(ci))
                 offset = cluster_offset(si, ci);
-            offset = alloc_swap_scan_cluster(si, offset, &found,
-                             order, usage);
+            found = alloc_swap_scan_cluster(si, ci, offset,
+                            order, usage);
         } else {
             unlock_cluster(ci);
         }
@@ -903,8 +896,8 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
new_cluster:
     ci = isolate_lock_cluster(si, &si->free_clusters);
     if (ci) {
-        offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
-                         &found, order, usage);
+        found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
+                        order, usage);
         if (found)
             goto done;
     }
@@ -917,8 +910,8 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
         unsigned int frags = 0, frags_existing;
 
         while ((ci = isolate_lock_cluster(si, &si->nonfull_clusters[order]))) {
-            offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
-                             &found, order, usage);
+            found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
+                            order, usage);
             if (found)
                 goto done;
             /* Clusters failed to allocate are moved to frag_clusters */
@@ -935,8 +928,8 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
             * per-CPU usage, but they could contain newly released
             * reclaimable (eg. lazy-freed swap cache) slots.
             */
-            offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
-                             &found, order, usage);
+            found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
+                            order, usage);
             if (found)
                 goto done;
             frags++;
@@ -962,21 +955,20 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
      */
     while ((ci = isolate_lock_cluster(si, &si->frag_clusters[o]))) {
         atomic_long_dec(&si->frag_cluster_nr[o]);
-        offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
-                         &found, order, usage);
+        found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
+                        0, usage);
         if (found)
             goto done;
     }
 
     while ((ci = isolate_lock_cluster(si, &si->nonfull_clusters[o]))) {
-        offset = alloc_swap_scan_cluster(si, cluster_offset(si, ci),
-                         &found, order, usage);
+        found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
+                        0, usage);
         if (found)
             goto done;
     }
 }
done:
-    __this_cpu_write(si->percpu_cluster->next[order], offset);
     local_unlock(&si->percpu_cluster->lock);
 
     return found;
@@ -3200,7 +3192,7 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
         cluster = per_cpu_ptr(si->percpu_cluster, cpu);
 
         for (i = 0; i < SWAP_NR_ORDERS; i++)
-            cluster->next[i] = SWAP_NEXT_INVALID;
+            cluster->next[i] = SWAP_ENTRY_INVALID;
         local_lock_init(&cluster->lock);
     }
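The simplification in this patch is essentially a calling-convention change: instead of passing the found entry back through a *foundp out-parameter while returning the next scan position, the scanner writes the next-position hint straight to its percpu slot and returns only the allocated entry. A minimal before/after sketch of that refactor (hypothetical names, unrelated to the kernel's real types):

#include <stdio.h>

static _Thread_local unsigned int next_hint;

/* Before: two results squeezed through one signature. */
static unsigned int scan_old(unsigned int offset, unsigned int *foundp)
{
    *foundp = offset;      /* the allocated slot, via out-parameter */
    return offset + 1;     /* where to resume next time, via return */
}

/* After: one result; the resume hint goes straight to the fixed slot. */
static unsigned int scan_new(unsigned int offset)
{
    next_hint = offset + 1;    /* like __this_cpu_write(...->next[order]) */
    return offset;             /* the allocated slot */
}

int main(void)
{
    unsigned int found;
    unsigned int next = scan_old(100, &found);

    printf("old: found=%u next=%u\n", found, next);
    printf("new: found=%u next=%u\n", scan_new(200), next_hint);
    return 0;
}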
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins, Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner, Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 11/13] mm, swap: introduce a helper for retrieving cluster from offset
Date: Tue, 14 Jan 2025 01:57:30 +0800
Message-ID: <20250113175732.48099-12-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>

From: Kairui Song

It's a common operation to retrieve the cluster info from an offset; introduce a helper for this.
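As a quick illustration of the mapping this helper encapsulates (and of why the reworked VM_BUG_ON in the diff can compare cluster pointers instead of dividing twice), here is a compilable userspace sketch. The simplified structs, the cluster count, and main() are assumptions for demonstration only; 256 mirrors the kernel's non-THP SWAPFILE_CLUSTER default:

#include <assert.h>
#include <stdio.h>

#define SWAPFILE_CLUSTER 256		/* slots per cluster (demo value) */

struct swap_cluster_info { unsigned int count; };

struct swap_info_sketch {
	struct swap_cluster_info *cluster_info;	/* one entry per cluster */
};

static inline struct swap_cluster_info *
offset_to_cluster(struct swap_info_sketch *si, unsigned long offset)
{
	/* A swap offset maps to its cluster by integer division */
	return &si->cluster_info[offset / SWAPFILE_CLUSTER];
}

int main(void)
{
	struct swap_cluster_info clusters[4] = {{ 0 }};
	struct swap_info_sketch si = { .cluster_info = clusters };

	/* Offsets 0..255 share cluster 0; offset 256 starts cluster 1 */
	assert(offset_to_cluster(&si, 255) == &clusters[0]);
	assert(offset_to_cluster(&si, 256) == &clusters[1]);

	/* A range stays within one cluster iff its first and last offsets
	 * map to the same pointer, which is what the reworked VM_BUG_ON
	 * below checks. */
	printf("offset 257 -> cluster %ld\n",
	       (long)(offset_to_cluster(&si, 257) - clusters));
	return 0;
}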
Suggested-by: Chris Li
Signed-off-by: Kairui Song
---
 mm/swapfile.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6da2f3aa55fb..37d540fa0310 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -424,6 +424,12 @@ static inline unsigned int cluster_index(struct swap_info_struct *si,
 	return ci - si->cluster_info;
 }

+static inline struct swap_cluster_info *offset_to_cluster(struct swap_info_struct *si,
+							  unsigned long offset)
+{
+	return &si->cluster_info[offset / SWAPFILE_CLUSTER];
+}
+
 static inline unsigned int cluster_offset(struct swap_info_struct *si,
 					  struct swap_cluster_info *ci)
 {
@@ -435,7 +441,7 @@ static inline struct swap_cluster_info *lock_cluster(struct swap_info_struct *si
 {
 	struct swap_cluster_info *ci;

-	ci = &si->cluster_info[offset / SWAPFILE_CLUSTER];
+	ci = offset_to_cluster(si, offset);
 	spin_lock(&ci->lock);

 	return ci;
@@ -1480,10 +1486,10 @@ static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry
 	unsigned char *map_end = map + nr_pages;
 	struct swap_cluster_info *ci;

-	/* It should never free entries across different clusters */
-	VM_BUG_ON((offset / SWAPFILE_CLUSTER) != ((offset + nr_pages - 1) / SWAPFILE_CLUSTER));
-
 	ci = lock_cluster(si, offset);
+
+	/* It should never free entries across different clusters */
+	VM_BUG_ON(ci != offset_to_cluster(si, offset + nr_pages - 1));
 	VM_BUG_ON(cluster_is_empty(ci));
 	VM_BUG_ON(ci->count < nr_pages);

From patchwork Mon Jan 13 17:57:31 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins, Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner, Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 12/13] mm, swap: use a global swap cluster for non-rotation devices
Date: Tue, 14 Jan 2025 01:57:31 +0800
Message-ID: <20250113175732.48099-13-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>

From: Kairui Song

Non-rotational devices (SSD / ZRAM) can tolerate fragmentation, so the goal of the SWAP allocator is to avoid contention for clusters. It uses a per-CPU cluster design, and each CPU will use a different cluster as much as possible. However, HDDs are very sensitive to fragmentation; contention is trivial by comparison. Therefore, use one global cluster instead. This ensures that each order will be written to the same cluster as much as possible, which helps make the I/O more continuous.

This ensures that the performance of the cluster allocator is as good as that of the old allocator.
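Before the diff, a compilable userspace analogy of the locking split described above may help: a per-thread cursor stands in for the per-CPU cluster (SSD path), and a single mutex-guarded cursor stands in for the new global cluster (HDD path). Every name below is an illustrative assumption; only the pattern corresponds to the patch:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_ORDERS 4

struct cluster_cursor { unsigned long next[NR_ORDERS]; };

static _Thread_local struct cluster_cursor percpu_cursor; /* per-CPU analogue */
static struct cluster_cursor global_cursor;               /* one per "device" */
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

static unsigned long alloc_next(bool solidstate, int order)
{
	unsigned long offset;

	if (solidstate) {
		/* Contention-free: each thread advances its own cursor */
		offset = percpu_cursor.next[order]++;
	} else {
		/* HDD: serialize so allocations stay mostly sequential */
		pthread_mutex_lock(&global_lock);
		offset = global_cursor.next[order]++;
		pthread_mutex_unlock(&global_lock);
	}
	return offset;
}

int main(void)
{
	printf("ssd path: %lu\n", alloc_next(true, 0));
	printf("hdd path: %lu\n", alloc_next(false, 0));
	return 0;
}

The trade-off is deliberate: per-CPU cursors scatter writes across clusters, which is fine for SSD/ZRAM, while the single serialized cursor keeps each order writing into one cluster, which is what keeps HDD I/O continuous.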
Test results after this commit, compared to results before this series (tested using 'make -j32' with tinyconfig, a 1G memcg limit, and HDD swap):

Before this series:
114.44user 29.11system 39:42.90elapsed 6%CPU (0avgtext+0avgdata 157284maxresident)k
2901232inputs+0outputs (238877major+4227640minor)pagefaults

After this commit:
113.90user 23.81system 38:11.77elapsed 6%CPU (0avgtext+0avgdata 157260maxresident)k
2548728inputs+0outputs (235471major+4238110minor)pagefaults

Suggested-by: Chris Li
Signed-off-by: Kairui Song
---
 include/linux/swap.h |  2 ++
 mm/swapfile.c        | 51 ++++++++++++++++++++++++++++++++------------
 2 files changed, 39 insertions(+), 14 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4c1d2e69689f..b13b72645db3 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -318,6 +318,8 @@ struct swap_info_struct {
 	unsigned int pages;		/* total of usable pages of swap */
 	atomic_long_t inuse_pages;	/* number of those currently in use */
 	struct percpu_cluster __percpu *percpu_cluster; /* per cpu's swap location */
+	struct percpu_cluster *global_cluster; /* Use one global cluster for rotating device */
+	spinlock_t global_cluster_lock;	/* Serialize usage of global cluster */
 	struct rb_root swap_extent_root;/* root of the swap extent rbtree */
 	struct block_device *bdev;	/* swap device or bdev of swap file */
 	struct file *swap_file;		/* seldom referenced */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 37d540fa0310..793b2fd1a2a8 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -820,7 +820,10 @@ static unsigned int alloc_swap_scan_cluster(struct swap_info_struct *si,
 out:
 	relocate_cluster(si, ci);
 	unlock_cluster(ci);
-	__this_cpu_write(si->percpu_cluster->next[order], next);
+	if (si->flags & SWP_SOLIDSTATE)
+		__this_cpu_write(si->percpu_cluster->next[order], next);
+	else
+		si->global_cluster->next[order] = next;
 	return found;
 }

@@ -881,9 +884,16 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
 	struct swap_cluster_info *ci;
 	unsigned int offset, found = 0;

-	/* Fast path using per CPU cluster */
-	local_lock(&si->percpu_cluster->lock);
-	offset = __this_cpu_read(si->percpu_cluster->next[order]);
+	if (si->flags & SWP_SOLIDSTATE) {
+		/* Fast path using per CPU cluster */
+		local_lock(&si->percpu_cluster->lock);
+		offset = __this_cpu_read(si->percpu_cluster->next[order]);
+	} else {
+		/* Serialize HDD SWAP allocation for each device. */
+		spin_lock(&si->global_cluster_lock);
+		offset = si->global_cluster->next[order];
+	}
+
 	if (offset) {
 		ci = lock_cluster(si, offset);
 		/* Cluster could have been used by another order */
@@ -975,8 +985,10 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
 		}
 	}
 done:
-	local_unlock(&si->percpu_cluster->lock);
-
+	if (si->flags & SWP_SOLIDSTATE)
+		local_unlock(&si->percpu_cluster->lock);
+	else
+		spin_unlock(&si->global_cluster_lock);
 	return found;
 }

@@ -2784,6 +2796,8 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 	mutex_unlock(&swapon_mutex);
 	free_percpu(p->percpu_cluster);
 	p->percpu_cluster = NULL;
+	kfree(p->global_cluster);
+	p->global_cluster = NULL;
 	vfree(swap_map);
 	kvfree(zeromap);
 	kvfree(cluster_info);
@@ -3189,17 +3203,24 @@ static struct swap_cluster_info *setup_clusters(struct swap_info_struct *si,
 	for (i = 0; i < nr_clusters; i++)
 		spin_lock_init(&cluster_info[i].lock);

-	si->percpu_cluster = alloc_percpu(struct percpu_cluster);
-	if (!si->percpu_cluster)
-		goto err_free;
+	if (si->flags & SWP_SOLIDSTATE) {
+		si->percpu_cluster = alloc_percpu(struct percpu_cluster);
+		if (!si->percpu_cluster)
+			goto err_free;

-	for_each_possible_cpu(cpu) {
-		struct percpu_cluster *cluster;
+		for_each_possible_cpu(cpu) {
+			struct percpu_cluster *cluster;

-		cluster = per_cpu_ptr(si->percpu_cluster, cpu);
+			cluster = per_cpu_ptr(si->percpu_cluster, cpu);
+			for (i = 0; i < SWAP_NR_ORDERS; i++)
+				cluster->next[i] = SWAP_ENTRY_INVALID;
+			local_lock_init(&cluster->lock);
+		}
+	} else {
+		si->global_cluster = kmalloc(sizeof(*si->global_cluster), GFP_KERNEL);
 		for (i = 0; i < SWAP_NR_ORDERS; i++)
-			cluster->next[i] = SWAP_ENTRY_INVALID;
-		local_lock_init(&cluster->lock);
+			si->global_cluster->next[i] = SWAP_ENTRY_INVALID;
+		spin_lock_init(&si->global_cluster_lock);
 	}

 	/*
@@ -3473,6 +3494,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 bad_swap:
 	free_percpu(si->percpu_cluster);
 	si->percpu_cluster = NULL;
+	kfree(si->global_cluster);
+	si->global_cluster = NULL;
 	inode = NULL;
 	destroy_swap_extents(si);
 	swap_cgroup_swapoff(si->type);

From patchwork Mon Jan 13 17:57:32 2025
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins, Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner, Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 13/13] mm, swap_slots: remove slot cache for freeing path
Date: Tue, 14 Jan 2025 01:57:32 +0800
Message-ID: <20250113175732.48099-14-ryncsn@gmail.com>
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>

From: Kairui Song

The slot cache for the freeing path is mostly for reducing the overhead of si->lock. As we have basically eliminated si->lock usage on the freeing path, it can be removed. This simplifies the code and avoids swap entries being held in a cache after freeing. The delayed freeing of entries has been causing trouble for further zswap optimizations [1], and in theory it will also cause more fragmentation and extra overhead.
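The behavioral change is easiest to see side by side. The sketch below is a compilable userspace caricature, not the kernel API: the old scheme parks freed slots in a per-CPU return cache and releases them in batches, so entries linger after the free returns; the new scheme hands them straight back. All names here are assumptions for illustration:

#include <stdio.h>

#define CACHE_SIZE 64

static unsigned long cached[CACHE_SIZE];
static int n_cached;
static unsigned long pool_free;	/* stand-in for the global slot pool */

static void release_to_pool(const unsigned long *slots, int n)
{
	pool_free += n;		/* stand-in for swapcache_free_entries() */
}

/* Old scheme: frees are delayed until the return cache fills up */
static void free_slot_cached(unsigned long slot)
{
	if (n_cached == CACHE_SIZE) {
		release_to_pool(cached, n_cached);
		n_cached = 0;
	}
	cached[n_cached++] = slot;
}

/* New scheme: free immediately, nothing lingers in a cache */
static void free_slot_direct(unsigned long slot)
{
	release_to_pool(&slot, 1);
}

int main(void)
{
	free_slot_cached(42);
	printf("cached scheme: pool sees %lu free slots (entry still parked)\n", pool_free);
	free_slot_direct(43);
	printf("direct scheme: pool sees %lu free slots\n", pool_free);
	return 0;
}

It is this lingering (entries kept as SWAP_HAS_CACHE until a later batch flush, per the removed comment in the diff) that got in the way of the zswap work cited above.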
Tests with Linux kernel builds showed both performance and fragmentation are better without the cache:

time make -j96 / 768M memcg, 4K pages, 10G ZRAM, avg of 4 test runs:
Before:
Sys time: 36047.78, Real time: 472.43
After: (-7.6% sys time, -7.3% real time)
Sys time: 33314.76, Real time: 437.67

time make -j96 / 1152M memcg, 64K mTHP, 10G ZRAM, avg of 4 test runs:
Before:
Sys time: 46859.04, Real time: 562.63
hugepages-64kB/stats/swpout: 1783392
hugepages-64kB/stats/swpout_fallback: 240875
After: (-23.3% sys time, -21.3% real time)
Sys time: 35958.87, Real time: 442.69
hugepages-64kB/stats/swpout: 1866267
hugepages-64kB/stats/swpout_fallback: 158330

Sequential SWAP should also be slightly faster; tests didn't show a measurable difference, but at least there is no regression:

Swapin 4G zero page on ZRAM (time in us):
Before (avg. 1923756):
1912391 1927023 1927957 1916527 1918263 1914284 1934753 1940813 1921791
After (avg. 1922290):
1919101 1925743 1916810 1917007 1923930 1935152 1917403 1923549 1921913

Link: https://lore.kernel.org/all/CAMgjq7ACohT_uerSz8E_994ZZCv709Zor+43hdmesW_59W1BWw@mail.gmail.com/ [1]
Suggested-by: Chris Li
Signed-off-by: Kairui Song
---
 include/linux/swap_slots.h |  3 --
 mm/swap_slots.c            | 78 +++++----------------------
 mm/swapfile.c              | 89 +++++++++++++++----------------
 3 files changed, 44 insertions(+), 126 deletions(-)

diff --git a/include/linux/swap_slots.h b/include/linux/swap_slots.h
index 15adfb8c813a..840aec3523b2 100644
--- a/include/linux/swap_slots.h
+++ b/include/linux/swap_slots.h
@@ -16,15 +16,12 @@ struct swap_slots_cache {
 	swp_entry_t	*slots;
 	int		nr;
 	int		cur;
-	spinlock_t	free_lock; /* protects slots_ret, n_ret */
-	swp_entry_t	*slots_ret;
 	int		n_ret;
 };

 void disable_swap_slots_cache_lock(void);
 void reenable_swap_slots_cache_unlock(void);
 void enable_swap_slots_cache(void);
-void free_swap_slot(swp_entry_t entry);

 extern bool swap_slot_cache_enabled;

diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 13ab3b771409..9c7c171df7ba 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -43,17 +43,15 @@ static DEFINE_MUTEX(swap_slots_cache_mutex);
 /* Serialize swap slots cache enable/disable operations */
 static DEFINE_MUTEX(swap_slots_cache_enable_mutex);

-static void __drain_swap_slots_cache(unsigned int type);
+static void __drain_swap_slots_cache(void);

 #define use_swap_slot_cache (swap_slot_cache_active && swap_slot_cache_enabled)
-#define SLOTS_CACHE 0x1
-#define SLOTS_CACHE_RET 0x2

 static void deactivate_swap_slots_cache(void)
 {
 	mutex_lock(&swap_slots_cache_mutex);
 	swap_slot_cache_active = false;
-	__drain_swap_slots_cache(SLOTS_CACHE|SLOTS_CACHE_RET);
+	__drain_swap_slots_cache();
 	mutex_unlock(&swap_slots_cache_mutex);
 }

@@ -72,7 +70,7 @@ void disable_swap_slots_cache_lock(void)
 	if (swap_slot_cache_initialized) {
 		/* serialize with cpu hotplug operations */
 		cpus_read_lock();
-		__drain_swap_slots_cache(SLOTS_CACHE|SLOTS_CACHE_RET);
+		__drain_swap_slots_cache();
 		cpus_read_unlock();
 	}
 }

@@ -113,7 +111,7 @@ static bool check_cache_active(void)
 static int alloc_swap_slot_cache(unsigned int cpu)
 {
 	struct swap_slots_cache *cache;
-	swp_entry_t *slots, *slots_ret;
+	swp_entry_t *slots;

 	/*
 	 * Do allocation outside swap_slots_cache_mutex
@@ -125,28 +123,19 @@ static int alloc_swap_slot_cache(unsigned int cpu)
 	if (!slots)
 		return -ENOMEM;

-	slots_ret = kvcalloc(SWAP_SLOTS_CACHE_SIZE, sizeof(swp_entry_t),
-			     GFP_KERNEL);
-	if (!slots_ret) {
-		kvfree(slots);
-		return -ENOMEM;
-	}
-
 	mutex_lock(&swap_slots_cache_mutex);
 	cache = &per_cpu(swp_slots, cpu);
-	if (cache->slots || cache->slots_ret) {
+	if (cache->slots) {
 		/* cache already allocated */
 		mutex_unlock(&swap_slots_cache_mutex);

 		kvfree(slots);
-		kvfree(slots_ret);

 		return 0;
 	}

 	if (!cache->lock_initialized) {
 		mutex_init(&cache->alloc_lock);
-		spin_lock_init(&cache->free_lock);
 		cache->lock_initialized = true;
 	}
 	cache->nr = 0;
@@ -160,19 +149,16 @@ static int alloc_swap_slot_cache(unsigned int cpu)
 	 */
 	mb();
 	cache->slots = slots;
-	cache->slots_ret = slots_ret;
 	mutex_unlock(&swap_slots_cache_mutex);
 	return 0;
 }

-static void drain_slots_cache_cpu(unsigned int cpu, unsigned int type,
-				  bool free_slots)
+static void drain_slots_cache_cpu(unsigned int cpu, bool free_slots)
 {
 	struct swap_slots_cache *cache;
-	swp_entry_t *slots = NULL;

 	cache = &per_cpu(swp_slots, cpu);
-	if ((type & SLOTS_CACHE) && cache->slots) {
+	if (cache->slots) {
 		mutex_lock(&cache->alloc_lock);
 		swapcache_free_entries(cache->slots + cache->cur, cache->nr);
 		cache->cur = 0;
@@ -183,20 +169,9 @@ static void drain_slots_cache_cpu(unsigned int cpu, unsigned int type,
 		}
 		mutex_unlock(&cache->alloc_lock);
 	}
-	if ((type & SLOTS_CACHE_RET) && cache->slots_ret) {
-		spin_lock_irq(&cache->free_lock);
-		swapcache_free_entries(cache->slots_ret, cache->n_ret);
-		cache->n_ret = 0;
-		if (free_slots && cache->slots_ret) {
-			slots = cache->slots_ret;
-			cache->slots_ret = NULL;
-		}
-		spin_unlock_irq(&cache->free_lock);
-		kvfree(slots);
-	}
 }

-static void __drain_swap_slots_cache(unsigned int type)
+static void __drain_swap_slots_cache(void)
 {
 	unsigned int cpu;

@@ -224,13 +199,13 @@ static void __drain_swap_slots_cache(unsigned int type)
 	 * There are no slots on such cpu that need to be drained.
 	 */
 	for_each_online_cpu(cpu)
-		drain_slots_cache_cpu(cpu, type, false);
+		drain_slots_cache_cpu(cpu, false);
 }

 static int free_slot_cache(unsigned int cpu)
 {
 	mutex_lock(&swap_slots_cache_mutex);
-	drain_slots_cache_cpu(cpu, SLOTS_CACHE | SLOTS_CACHE_RET, true);
+	drain_slots_cache_cpu(cpu, true);
 	mutex_unlock(&swap_slots_cache_mutex);
 	return 0;
 }

@@ -269,39 +244,6 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache)
 	return cache->nr;
 }

-void free_swap_slot(swp_entry_t entry)
-{
-	struct swap_slots_cache *cache;
-
-	/* Large folio swap slot is not covered. */
-	zswap_invalidate(entry);
-
-	cache = raw_cpu_ptr(&swp_slots);
-	if (likely(use_swap_slot_cache && cache->slots_ret)) {
-		spin_lock_irq(&cache->free_lock);
-		/* Swap slots cache may be deactivated before acquiring lock */
-		if (!use_swap_slot_cache || !cache->slots_ret) {
-			spin_unlock_irq(&cache->free_lock);
-			goto direct_free;
-		}
-		if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
-			/*
-			 * Return slots to global pool.
-			 * The current swap_map value is SWAP_HAS_CACHE.
-			 * Set it to 0 to indicate it is available for
-			 * allocation in global pool
-			 */
-			swapcache_free_entries(cache->slots_ret, cache->n_ret);
-			cache->n_ret = 0;
-		}
-		cache->slots_ret[cache->n_ret++] = entry;
-		spin_unlock_irq(&cache->free_lock);
-	} else {
-direct_free:
-		swapcache_free_entries(&entry, 1);
-	}
-}
-
 swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 793b2fd1a2a8..b3154e52cb45 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -53,14 +53,15 @@ static bool swap_count_continued(struct swap_info_struct *, pgoff_t,
 					 unsigned char);
 static void free_swap_count_continuations(struct swap_info_struct *);
-static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry,
-				  unsigned int nr_pages);
+static void swap_entry_range_free(struct swap_info_struct *si,
+				  struct swap_cluster_info *ci,
+				  swp_entry_t entry, unsigned int nr_pages);
 static void swap_range_alloc(struct swap_info_struct *si,
 			     unsigned int nr_entries);
 static bool folio_swapcache_freeable(struct folio *folio);
 static struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
 					      unsigned long offset);
-static void unlock_cluster(struct swap_cluster_info *ci);
+static inline void unlock_cluster(struct swap_cluster_info *ci);

 static DEFINE_SPINLOCK(swap_lock);
 static unsigned int nr_swapfiles;
@@ -261,10 +262,9 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
 	folio_ref_sub(folio, nr_pages);
 	folio_set_dirty(folio);

-	/* Only sinple page folio can be backed by zswap */
-	if (nr_pages == 1)
-		zswap_invalidate(entry);
-	swap_entry_range_free(si, entry, nr_pages);
+	ci = lock_cluster(si, offset);
+	swap_entry_range_free(si, ci, entry, nr_pages);
+	unlock_cluster(ci);
 	ret = nr_pages;
 out_unlock:
 	folio_unlock(folio);
@@ -1128,8 +1128,10 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	 * Use atomic clear_bit operations only on zeromap instead of non-atomic
 	 * bitmap_clear to prevent adjacent bits corruption due to simultaneous writes.
 	 */
-	for (i = 0; i < nr_entries; i++)
+	for (i = 0; i < nr_entries; i++) {
 		clear_bit(offset + i, si->zeromap);
+		zswap_invalidate(swp_entry(si->type, offset + i));
+	}

 	if (si->flags & SWP_BLKDEV)
 		swap_slot_free_notify =
@@ -1434,9 +1436,9 @@ static unsigned char __swap_entry_free(struct swap_info_struct *si,

 	ci = lock_cluster(si, offset);
 	usage = __swap_entry_free_locked(si, offset, 1);
-	unlock_cluster(ci);
 	if (!usage)
-		free_swap_slot(entry);
+		swap_entry_range_free(si, ci, swp_entry(si->type, offset), 1);
+	unlock_cluster(ci);

 	return usage;
 }

@@ -1464,13 +1466,10 @@ static bool __swap_entries_free(struct swap_info_struct *si,
 	}
 	for (i = 0; i < nr; i++)
 		WRITE_ONCE(si->swap_map[offset + i], SWAP_HAS_CACHE);
+	if (!has_cache)
+		swap_entry_range_free(si, ci, entry, nr);
 	unlock_cluster(ci);

-	if (!has_cache) {
-		for (i = 0; i < nr; i++)
-			zswap_invalidate(swp_entry(si->type, offset + i));
-		swap_entry_range_free(si, entry, nr);
-	}
 	return has_cache;

 fallback:
@@ -1490,15 +1489,13 @@ static bool __swap_entries_free(struct swap_info_struct *si,
 * Drop the last HAS_CACHE flag of swap entries, caller have to
 * ensure all entries belong to the same cgroup.
 */
-static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry,
-				  unsigned int nr_pages)
+static void swap_entry_range_free(struct swap_info_struct *si,
+				  struct swap_cluster_info *ci,
+				  swp_entry_t entry, unsigned int nr_pages)
 {
 	unsigned long offset = swp_offset(entry);
 	unsigned char *map = si->swap_map + offset;
 	unsigned char *map_end = map + nr_pages;
-	struct swap_cluster_info *ci;
-
-	ci = lock_cluster(si, offset);

 	/* It should never free entries across different clusters */
 	VM_BUG_ON(ci != offset_to_cluster(si, offset + nr_pages - 1));
@@ -1518,7 +1515,6 @@ static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry
 		free_cluster(si, ci);
 	else
 		partial_free_cluster(si, ci);
-	unlock_cluster(ci);
 }

 static void cluster_swap_free_nr(struct swap_info_struct *si,
@@ -1526,28 +1522,13 @@ static void cluster_swap_free_nr(struct swap_info_struct *si,
 				 unsigned char usage)
 {
 	struct swap_cluster_info *ci;
-	DECLARE_BITMAP(to_free, BITS_PER_LONG) = { 0 };
-	int i, nr;
+	unsigned long end = offset + nr_pages;

 	ci = lock_cluster(si, offset);
-	while (nr_pages) {
-		nr = min(BITS_PER_LONG, nr_pages);
-		for (i = 0; i < nr; i++) {
-			if (!__swap_entry_free_locked(si, offset + i, usage))
-				bitmap_set(to_free, i, 1);
-		}
-		if (!bitmap_empty(to_free, BITS_PER_LONG)) {
-			unlock_cluster(ci);
-			for_each_set_bit(i, to_free, BITS_PER_LONG)
-				free_swap_slot(swp_entry(si->type, offset + i));
-			if (nr == nr_pages)
-				return;
-			bitmap_clear(to_free, 0, BITS_PER_LONG);
-			ci = lock_cluster(si, offset);
-		}
-		offset += nr;
-		nr_pages -= nr;
-	}
+	do {
+		if (!__swap_entry_free_locked(si, offset, usage))
+			swap_entry_range_free(si, ci, swp_entry(si->type, offset), 1);
+	} while (++offset < end);
 	unlock_cluster(ci);
 }

@@ -1588,18 +1569,12 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 		return;

 	ci = lock_cluster(si, offset);
-	if (size > 1 && swap_is_has_cache(si, offset, size)) {
-		unlock_cluster(ci);
-		swap_entry_range_free(si, entry, size);
-		return;
-	}
-	for (int i = 0; i < size; i++, entry.val++) {
-		if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE)) {
-			unlock_cluster(ci);
-			free_swap_slot(entry);
-			if (i == size - 1)
-				return;
-			lock_cluster(si, offset);
+	if (swap_is_has_cache(si, offset, size))
+		swap_entry_range_free(si, ci, entry, size);
+	else {
+		for (int i = 0; i < size; i++, entry.val++) {
+			if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE))
+				swap_entry_range_free(si, ci, entry, 1);
 		}
 	}
 	unlock_cluster(ci);
@@ -1608,6 +1583,7 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 void swapcache_free_entries(swp_entry_t *entries, int n)
 {
 	int i;
+	struct swap_cluster_info *ci;
 	struct swap_info_struct *si = NULL;

 	if (n <= 0)
@@ -1615,8 +1591,11 @@ void swapcache_free_entries(swp_entry_t *entries, int n)

 	for (i = 0; i < n; ++i) {
 		si = _swap_info_get(entries[i]);
-		if (si)
-			swap_entry_range_free(si, entries[i], 1);
+		if (si) {
+			ci = lock_cluster(si, swp_offset(entries[i]));
+			swap_entry_range_free(si, ci, entries[i], 1);
+			unlock_cluster(ci);
+		}
 	}
 }
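As a closing note on the swapfile.c side of this patch: the rewritten cluster_swap_free_nr() above frees a whole range under a single cluster lock, instead of dropping and retaking the lock for every bitmap batch as the old code did. Below is a compilable userspace analogy of that loop shape, with a mutex in place of the cluster spinlock; the refcount array and all names are assumptions for illustration only:

#include <pthread.h>
#include <stdio.h>

#define SLOTS 1024

static unsigned char swap_map[SLOTS];	/* per-slot use counts */
static pthread_mutex_t cluster_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long nr_freed;

static void free_range(unsigned long offset, unsigned long nr_pages)
{
	unsigned long end = offset + nr_pages;

	/* One lock/unlock around the whole range; the old code dropped
	 * the lock for every batch of freed slots. */
	pthread_mutex_lock(&cluster_lock);
	do {
		if (--swap_map[offset] == 0)
			nr_freed++;	/* stand-in for swap_entry_range_free() */
	} while (++offset < end);
	pthread_mutex_unlock(&cluster_lock);
}

int main(void)
{
	for (int i = 0; i < 8; i++)
		swap_map[i] = 1;
	free_range(0, 8);
	printf("freed %lu slots\n", nr_freed);
	return 0;
}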