From patchwork Wed Mar 6 14:03:56 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13584104
From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, "Huang, Ying"
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
Subject: [PATCH v2] mm: swap: Fix race between free_swap_and_cache() and swapoff()
Date: Wed, 6 Mar 2024 14:03:56 +0000
Message-Id: <20240306140356.3974886-1-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
There was previously a theoretical window where swapoff() could run and
tear down a swap_info_struct while a call to free_swap_and_cache() was
running in another thread. This could cause, amongst other bad
possibilities, swap_page_trans_huge_swapped() (called by
free_swap_and_cache()) to access the freed memory for swap_map.

This is a theoretical problem and I haven't been able to provoke it from a
test case. But there has been agreement based on code review that this is
possible (see link below).

Fix it by using get_swap_device()/put_swap_device(), which will stall
swapoff(). There was an extra check in _swap_info_get() to confirm that
the swap entry was not free. This isn't present in get_swap_device()
because it doesn't make sense in general due to the race between getting
the reference and swapoff. So I've added an equivalent check directly in
free_swap_and_cache().

Details of how to provoke one possible issue (thanks to David Hildenbrand
for deriving this):

--8<-----

__swap_entry_free() might be the last user and result in
"count == SWAP_HAS_CACHE".

swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.

So the question is: could someone reclaim the folio and turn
si->inuse_pages==0 before we complete swap_page_trans_huge_swapped()?

Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages are
still referenced by swap entries.

Process 1 still references subpage 0 via swap entry.
Process 2 still references subpage 1 via swap entry.

Process 1 quits. Calls free_swap_and_cache().
-> count == SWAP_HAS_CACHE
[then, preempted in the hypervisor etc.]

Process 2 quits. Calls free_swap_and_cache().
-> count == SWAP_HAS_CACHE

Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls
__try_to_reclaim_swap().

__try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->
put_swap_folio()->free_swap_slot()->swapcache_free_entries()->
swap_entry_free()->swap_range_free()->
...
WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);

What stops swapoff from succeeding after process 2 reclaimed the swap
cache but before process 1 finished its call to
swap_page_trans_huge_swapped()?

--8<-----

Fixes: 7c00bafee87c ("mm/swap: free swap slots in batch")
Closes: https://lore.kernel.org/linux-mm/65a66eb9-41f8-4790-8db2-0c70ea15979f@redhat.com/
Cc: stable@vger.kernel.org
Signed-off-by: Ryan Roberts
---

Hi Andrew,

Please replace v1 of this patch in mm-unstable with this version.

Changes since v1:
 - Added comments for get_swap_device() as suggested by David
 - Moved the check that the swap entry is not free from get_swap_device()
   to free_swap_and_cache(), since some paths legitimately call with a
   free offset.

I haven't addressed the recommendation by Huang Ying [1] to also revert
commit 23b230ba8ac3 ("mm/swap: print bad swap offset entry in
get_swap_device"). That should be done separately from this patch, and we
need to conclude the discussion first.

[1] https://lore.kernel.org/all/875xy0842q.fsf@yhuang6-desk2.ccr.corp.intel.com/

Thanks,
Ryan

 mm/swapfile.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

-- 
2.25.1

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 2b3a2d85e350..1155a6304119 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1232,6 +1232,11 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
  * with get_swap_device() and put_swap_device(), unless the swap
  * functions call get/put_swap_device() by themselves.
  *
+ * Note that when only holding the PTL, swapoff might succeed immediately
+ * after freeing a swap entry. Therefore, immediately after
+ * __swap_entry_free(), the swap info might become stale and should not
+ * be touched without a prior get_swap_device().
+ *
  * Check whether swap entry is valid in the swap device. If so,
  * return pointer to swap_info_struct, and keep the swap entry valid
  * via preventing the swap device from being swapoff, until
@@ -1609,13 +1614,19 @@ int free_swap_and_cache(swp_entry_t entry)
 	if (non_swap_entry(entry))
 		return 1;
 
-	p = _swap_info_get(entry);
+	p = get_swap_device(entry);
 	if (p) {
+		if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) {
+			put_swap_device(p);
+			return 0;
+		}
+
 		count = __swap_entry_free(p, entry);
 		if (count == SWAP_HAS_CACHE &&
 		    !swap_page_trans_huge_swapped(p, entry))
 			__try_to_reclaim_swap(p, swp_offset(entry),
 					      TTRS_UNMAPPED | TTRS_FULL);
+		put_swap_device(p);
 	}
 	return p != NULL;
 }
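
(Not part of the patch; illustration only.) For readers unfamiliar with the
get_swap_device()/put_swap_device() pattern the fix relies on, the sketch
below shows the general shape: take a reference that stalls swapoff() before
touching swap_map, and drop it afterwards. The helper name is hypothetical
and does not exist in the tree; it assumes <linux/swap.h> and
<linux/swapops.h> are included.

/*
 * Hypothetical helper, shown only to illustrate the pattern: pin the swap
 * device before a lockless read of swap_map so swapoff() cannot free the
 * swap_info_struct underneath us.
 */
static bool swap_entry_still_in_use(swp_entry_t entry)
{
	struct swap_info_struct *si;
	bool in_use;

	/* Takes a reference that stalls swapoff(); returns NULL if the
	 * entry's device is invalid or already being torn down. */
	si = get_swap_device(entry);
	if (!si)
		return false;

	/* swap_map can change concurrently; annotate the racy read. */
	in_use = data_race(si->swap_map[swp_offset(entry)]) != 0;

	/* Drop the reference so swapoff() can make progress again. */
	put_swap_device(si);
	return in_use;
}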