From patchwork Tue Dec 24 14:38:02 2024
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13920192
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins, Yosry Ahmed, "Huang, Ying", Nhat Pham, Johannes Weiner, Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v2 04/13] mm, swap: use cluster lock for HDD
Date: Tue, 24 Dec 2024 22:38:02 +0800
Message-ID: <20241224143811.33462-5-ryncsn@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20241224143811.33462-1-ryncsn@gmail.com>
References: <20241224143811.33462-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0
From: Kairui Song

Cluster lock (ci->lock) was introduced to reduce contention for certain
operations. Using the cluster lock for HDD is not helpful as HDDs have poor
performance, so locking isn't the bottleneck. But having a different set of
locks for HDD / non-HDD prevents further rework of the device lock (si->lock).

This commit changes all lock_cluster_or_swap_info to lock_cluster, which is a
safe and straightforward conversion since cluster info is always allocated
now, and also removes all cluster_info related checks.

Suggested-by: Chris Li
Signed-off-by: Kairui Song
---
 mm/swapfile.c | 107 ++++++++++++++++----------------------------
 1 file changed, 34 insertions(+), 73 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index fca58d43b836..d0e5b9fa0c48 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -58,10 +58,9 @@ static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry
 static void swap_range_alloc(struct swap_info_struct *si, unsigned long offset,
 			     unsigned int nr_entries);
 static bool folio_swapcache_freeable(struct folio *folio);
-static struct swap_cluster_info *lock_cluster_or_swap_info(
-		struct swap_info_struct *si, unsigned long offset);
-static void unlock_cluster_or_swap_info(struct swap_info_struct *si,
-					struct swap_cluster_info *ci);
+static struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
+					      unsigned long offset);
+static void unlock_cluster(struct swap_cluster_info *ci);
 
 static DEFINE_SPINLOCK(swap_lock);
 static unsigned int nr_swapfiles;
@@ -222,9 +221,9 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
 	 * swap_map is HAS_CACHE only, which means the slots have no page table
 	 * reference or pending writeback, and can't be allocated to others.
 	 */
-	ci = lock_cluster_or_swap_info(si, offset);
+	ci = lock_cluster(si, offset);
 	need_reclaim = swap_is_has_cache(si, offset, nr_pages);
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 	if (!need_reclaim)
 		goto out_unlock;
 
@@ -404,45 +403,15 @@ static inline struct swap_cluster_info *lock_cluster(struct swap_info_struct *si
 {
 	struct swap_cluster_info *ci;
 
-	ci = si->cluster_info;
-	if (ci) {
-		ci += offset / SWAPFILE_CLUSTER;
-		spin_lock(&ci->lock);
-	}
-	return ci;
-}
-
-static inline void unlock_cluster(struct swap_cluster_info *ci)
-{
-	if (ci)
-		spin_unlock(&ci->lock);
-}
-
-/*
- * Determine the locking method in use for this device.  Return
- * swap_cluster_info if SSD-style cluster-based locking is in place.
- */
-static inline struct swap_cluster_info *lock_cluster_or_swap_info(
-		struct swap_info_struct *si, unsigned long offset)
-{
-	struct swap_cluster_info *ci;
-
-	/* Try to use fine-grained SSD-style locking if available: */
-	ci = lock_cluster(si, offset);
-	/* Otherwise, fall back to traditional, coarse locking: */
-	if (!ci)
-		spin_lock(&si->lock);
+	ci = &si->cluster_info[offset / SWAPFILE_CLUSTER];
+	spin_lock(&ci->lock);
 	return ci;
 }
 
-static inline void unlock_cluster_or_swap_info(struct swap_info_struct *si,
-					       struct swap_cluster_info *ci)
+static inline void unlock_cluster(struct swap_cluster_info *ci)
 {
-	if (ci)
-		unlock_cluster(ci);
-	else
-		spin_unlock(&si->lock);
+	spin_unlock(&ci->lock);
 }
 
 /* Add a cluster to discard list and schedule it to do discard */
@@ -558,9 +527,6 @@ static void inc_cluster_info_page(struct swap_info_struct *si,
 	unsigned long idx = page_nr / SWAPFILE_CLUSTER;
 	struct swap_cluster_info *ci;
 
-	if (!cluster_info)
-		return;
-
 	ci = cluster_info + idx;
 	ci->count++;
 
@@ -576,9 +542,6 @@ static void inc_cluster_info_page(struct swap_info_struct *si,
 static void dec_cluster_info_page(struct swap_info_struct *si,
 				  struct swap_cluster_info *ci, int nr_pages)
 {
-	if (!si->cluster_info)
-		return;
-
 	VM_BUG_ON(ci->count < nr_pages);
 	VM_BUG_ON(cluster_is_free(ci));
 	lockdep_assert_held(&si->lock);
@@ -1007,8 +970,6 @@ static int cluster_alloc_swap(struct swap_info_struct *si,
 {
 	int n_ret = 0;
 
-	VM_BUG_ON(!si->cluster_info);
-
 	si->flags += SWP_SCANNING;
 
 	while (n_ret < nr) {
@@ -1052,10 +1013,10 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 		}
 
 		/*
-		 * Swapfile is not block device or not using clusters so unable
+		 * Swapfile is not block device so unable
 		 * to allocate large entries.
 		 */
-		if (!(si->flags & SWP_BLKDEV) || !si->cluster_info)
+		if (!(si->flags & SWP_BLKDEV))
 			return 0;
 	}
 
@@ -1295,9 +1256,9 @@ static unsigned char __swap_entry_free(struct swap_info_struct *si,
 	unsigned long offset = swp_offset(entry);
 	unsigned char usage;
 
-	ci = lock_cluster_or_swap_info(si, offset);
+	ci = lock_cluster(si, offset);
 	usage = __swap_entry_free_locked(si, offset, 1);
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 	if (!usage)
 		free_swap_slot(entry);
 
@@ -1320,14 +1281,14 @@ static bool __swap_entries_free(struct swap_info_struct *si,
 	if (nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER)
 		goto fallback;
 
-	ci = lock_cluster_or_swap_info(si, offset);
+	ci = lock_cluster(si, offset);
 	if (!swap_is_last_map(si, offset, nr, &has_cache)) {
-		unlock_cluster_or_swap_info(si, ci);
+		unlock_cluster(ci);
 		goto fallback;
 	}
 	for (i = 0; i < nr; i++)
 		WRITE_ONCE(si->swap_map[offset + i], SWAP_HAS_CACHE);
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 
 	if (!has_cache) {
 		for (i = 0; i < nr; i++)
@@ -1383,7 +1344,7 @@ static void cluster_swap_free_nr(struct swap_info_struct *si,
 	DECLARE_BITMAP(to_free, BITS_PER_LONG) = { 0 };
 	int i, nr;
 
-	ci = lock_cluster_or_swap_info(si, offset);
+	ci = lock_cluster(si, offset);
 	while (nr_pages) {
 		nr = min(BITS_PER_LONG, nr_pages);
 		for (i = 0; i < nr; i++) {
@@ -1391,18 +1352,18 @@ static void cluster_swap_free_nr(struct swap_info_struct *si,
 			bitmap_set(to_free, i, 1);
 		}
 		if (!bitmap_empty(to_free, BITS_PER_LONG)) {
-			unlock_cluster_or_swap_info(si, ci);
+			unlock_cluster(ci);
 			for_each_set_bit(i, to_free, BITS_PER_LONG)
 				free_swap_slot(swp_entry(si->type, offset + i));
 			if (nr == nr_pages)
 				return;
 			bitmap_clear(to_free, 0, BITS_PER_LONG);
-			ci = lock_cluster_or_swap_info(si, offset);
+			ci = lock_cluster(si, offset);
 		}
 		offset += nr;
 		nr_pages -= nr;
 	}
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 }
 
 /*
@@ -1441,9 +1402,9 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 	if (!si)
 		return;
 
-	ci = lock_cluster_or_swap_info(si, offset);
+	ci = lock_cluster(si, offset);
 	if (size > 1 && swap_is_has_cache(si, offset, size)) {
-		unlock_cluster_or_swap_info(si, ci);
+		unlock_cluster(ci);
 		spin_lock(&si->lock);
 		swap_entry_range_free(si, entry, size);
 		spin_unlock(&si->lock);
@@ -1451,14 +1412,14 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 	}
 	for (int i = 0; i < size; i++, entry.val++) {
 		if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE)) {
-			unlock_cluster_or_swap_info(si, ci);
+			unlock_cluster(ci);
 			free_swap_slot(entry);
 			if (i == size - 1)
 				return;
-			lock_cluster_or_swap_info(si, offset);
+			lock_cluster(si, offset);
 		}
 	}
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 }
 
 static int swp_entry_cmp(const void *ent1, const void *ent2)
@@ -1522,9 +1483,9 @@ int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
 	struct swap_cluster_info *ci;
 	int count;
 
-	ci = lock_cluster_or_swap_info(si, offset);
+	ci = lock_cluster(si, offset);
 	count = swap_count(si->swap_map[offset]);
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 	return count;
 }
 
@@ -1547,7 +1508,7 @@ int swp_swapcount(swp_entry_t entry)
 
 	offset = swp_offset(entry);
 
-	ci = lock_cluster_or_swap_info(si, offset);
+	ci = lock_cluster(si, offset);
 
 	count = swap_count(si->swap_map[offset]);
 	if (!(count & COUNT_CONTINUED))
@@ -1570,7 +1531,7 @@ int swp_swapcount(swp_entry_t entry)
 		n *= (SWAP_CONT_MAX + 1);
 	} while (tmp_count & COUNT_CONTINUED);
 out:
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 	return count;
 }
 
@@ -1585,8 +1546,8 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 	int i;
 	bool ret = false;
 
-	ci = lock_cluster_or_swap_info(si, offset);
-	if (!ci || nr_pages == 1) {
+	ci = lock_cluster(si, offset);
+	if (nr_pages == 1) {
 		if (swap_count(map[roffset]))
 			ret = true;
 		goto unlock_out;
@@ -1598,7 +1559,7 @@ static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
 		}
 	}
 unlock_out:
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 	return ret;
 }
 
@@ -3428,7 +3389,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
 	offset = swp_offset(entry);
 	VM_WARN_ON(nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
 	VM_WARN_ON(usage == 1 && nr > 1);
-	ci = lock_cluster_or_swap_info(si, offset);
+	ci = lock_cluster(si, offset);
 
 	err = 0;
 	for (i = 0; i < nr; i++) {
@@ -3483,7 +3444,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage, int nr)
 
 unlock_out:
-	unlock_cluster_or_swap_info(si, ci);
+	unlock_cluster(ci);
 	return err;
 }