From patchwork Wed Jan 26 09:55:54 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 12724882
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
    Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz,
    Mike Rapoport, Yang Shi, "Kirill A. Shutemov",
    Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko,
    Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
    Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
    Jan Kara, Liang Zhang, linux-mm@kvack.org, David Hildenbrand
Subject: [PATCH RFC v2 6/9] mm/khugepaged: remove reuse_swap_page() usage
Date: Wed, 26 Jan 2022 10:55:54 +0100
Message-Id: <20220126095557.32392-7-david@redhat.com>
In-Reply-To: <20220126095557.32392-1-david@redhat.com>
References: <20220126095557.32392-1-david@redhat.com>

reuse_swap_page() currently indicates if we can write to an anon page
without COW. A COW is required if the page is shared by multiple
processes (either already mapped or via swap entries) or if there is
concurrent writeback that cannot tolerate concurrent page
modifications.

reuse_swap_page() doesn't check for pending references from other
processes that already unmapped the page; however,
is_refcount_suitable() essentially does the same thing in the context
of khugepaged. khugepaged is the last remaining user of
reuse_swap_page(), and we want to remove that function.

In the context of khugepaged, we are not actually going to write to
the page and we don't really care about other processes mapping the
page: for example, without swap, we don't care about shared pages at
all.

The current logic seems to be:

* Writable: not shared, but might be in the swapcache. Nobody can
  fault it in from the swapcache as there are no other swap entries.
* Readable and not in the swapcache: might be shared (but nobody can
  fault it in from the swapcache).
* Readable and in the swapcache: might be shared and someone might be
  able to fault it in from the swapcache. Make sure we're the
  exclusive owner via reuse_swap_page().

Having to guess due to the lack of comments and documentation, the
current logic really only wants to make sure that a page that might
be shared cannot be faulted in from the swapcache while khugepaged is
active. It's hard to guess why that is the case and whether it's
really still required, but let's try keeping that logic unmodified.

Instead of relying on reuse_swap_page(), let's unconditionally call
try_to_free_swap(), special-casing PageKsm(). try_to_free_swap() will
fail if there are still swap entries targeting the page or if the
page is under writeback. After a successful try_to_free_swap(), the
page cannot be re-added to the swapcache because we're keeping the
page locked and removed from the LRU until we actually perform the
copy. So once we have succeeded in removing a page from the
swapcache, it cannot be re-added until we're done copying. Add a
comment stating that.
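
For illustration, here is a condensed sketch of try_to_free_swap()
(approximated from mm/swapfile.c, not a verbatim copy; some details
such as the hibernation check are omitted). It fails exactly in the
cases named above: remaining swap entries targeting the page, or
writeback in progress.

/* Condensed sketch of try_to_free_swap(); not the verbatim kernel code. */
int try_to_free_swap(struct page *page)
{
	VM_BUG_ON_PAGE(!PageLocked(page), page);

	if (!PageSwapCache(page))
		return 0;
	/* Concurrent writeback: the swapcache entry must stay. */
	if (PageWriteback(page))
		return 0;
	/* Other swap entries still target the page. */
	if (page_swapped(page))
		return 0;

	/* We hold the only swapcache reference: drop the entry. */
	delete_from_swap_cache(page);
	SetPageDirty(page);
	return 1;
}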
Signed-off-by: David Hildenbrand
---
 mm/khugepaged.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 35f14d0a00a6..bc0ff598e98f 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -683,10 +683,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			goto out;
 		}
 		if (!pte_write(pteval) && PageSwapCache(page) &&
-				!reuse_swap_page(page)) {
+		    (PageKsm(page) || !try_to_free_swap(page))) {
 			/*
-			 * Page is in the swap cache and cannot be re-used.
-			 * It cannot be collapsed into a THP.
+			 * Possibly shared page cannot be removed from the
+			 * swapcache. It cannot be collapsed into a THP.
 			 */
 			unlock_page(page);
 			result = SCAN_SWAP_CACHE_PAGE;
@@ -702,6 +702,16 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 			result = SCAN_DEL_PAGE_LRU;
 			goto out;
 		}
+
+		/*
+		 * We're holding the page lock and removed the page from the
+		 * LRU. Once done copying, we'll unlock and readd to the
+		 * LRU via release_pte_page(). If the page is still in the
+		 * swapcache, we're the exclusive owner. Due to the page lock
+		 * the page cannot be added to the swapcache until we're done
+		 * and consequently it cannot be faulted in from the swapcache
+		 * into another process.
+		 */
 		mod_node_page_state(page_pgdat(page),
 				    NR_ISOLATED_ANON + page_is_file_lru(page),
 				    compound_nr(page));
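
To make the ordering in the new comment easier to follow, here is a
hypothetical, heavily condensed outline of the relevant collapse
steps. It is not the real code (that is __collapse_huge_page_isolate()
plus collapse_huge_page(), with full error handling and the
pte_write() check that is omitted here).

/*
 * Hypothetical outline only; the real logic lives in
 * __collapse_huge_page_isolate() and collapse_huge_page().
 */
static void collapse_ordering_sketch(struct page *page)
{
	if (!trylock_page(page))
		return;			/* the page stays locked from here on */

	/* Possibly shared page: drop it from the swapcache or give up. */
	if (PageSwapCache(page) &&
	    (PageKsm(page) || !try_to_free_swap(page))) {
		unlock_page(page);
		return;
	}

	if (isolate_lru_page(page)) {
		unlock_page(page);
		return;
	}

	/*
	 * Locked and off the LRU: reclaim cannot pick up the page, so it
	 * cannot be (re-)added to the swapcache and consequently cannot
	 * be faulted into another process from there while we copy.
	 */

	/* ... copy the contents into the huge page ... */

	unlock_page(page);		/* only now can it enter the swapcache again */
	putback_lru_page(page);
}

In the real code the final unlock and LRU putback happen via
release_pte_page(), as the new comment notes.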