From patchwork Wed Feb 16 09:48:10 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12748347
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, Kirill A. Shutemov, Matthew Wilcox, Yang Shi,
    Andrea Arcangeli, peterx@redhat.com, John Hubbard, Alistair Popple,
    David Hildenbrand, Vlastimil Babka, Hugh Dickins
Subject: [PATCH v4 4/4] mm: Rework swap handling of zap_pte_range
Date: Wed, 16 Feb 2022 17:48:10 +0800
Message-Id: <20220216094810.60572-5-peterx@redhat.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220216094810.60572-1-peterx@redhat.com>
References: <20220216094810.60572-1-peterx@redhat.com>

Clean the code up by merging the device private/exclusive swap entry
handling with the rest, then merge the pte clear operation too.

"struct page *page" is declared in multiple places in the function;
move it upward.

free_swap_and_cache() is only useful for the !non_swap_entry() case,
so move it into that condition.

No functional change intended.

Signed-off-by: Peter Xu
Reviewed-by: John Hubbard
---
 mm/memory.c | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ffa8c7dfe9ad..cade96024349 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1361,6 +1361,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
         arch_enter_lazy_mmu_mode();
         do {
                 pte_t ptent = *pte;
+                struct page *page;
+
                 if (pte_none(ptent))
                         continue;
 
@@ -1368,8 +1370,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
                         break;
 
                 if (pte_present(ptent)) {
-                        struct page *page;
-
                         page = vm_normal_page(vma, addr, ptent);
                         if (unlikely(!should_zap_page(details, page)))
                                 continue;
@@ -1403,21 +1403,14 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
                 entry = pte_to_swp_entry(ptent);
                 if (is_device_private_entry(entry) ||
                     is_device_exclusive_entry(entry)) {
-                        struct page *page = pfn_swap_entry_to_page(entry);
-
+                        page = pfn_swap_entry_to_page(entry);
                         if (unlikely(!should_zap_page(details, page)))
                                 continue;
-                        pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
                         rss[mm_counter(page)]--;
-
                         if (is_device_private_entry(entry))
                                 page_remove_rmap(page, false);
-
                         put_page(page);
-                        continue;
-                }
-
-                if (!non_swap_entry(entry)) {
+                } else if (!non_swap_entry(entry)) {
                         /*
                          * If this is a genuine swap entry, then it must be an
                          * private anon page.  If the caller wants to skip
@@ -1426,9 +1419,9 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
                         if (!should_zap_cows(details))
                                 continue;
                         rss[MM_SWAPENTS]--;
+                        if (unlikely(!free_swap_and_cache(entry)))
+                                print_bad_pte(vma, addr, ptent, NULL);
                 } else if (is_migration_entry(entry)) {
-                        struct page *page;
-
                         page = pfn_swap_entry_to_page(entry);
                         if (!should_zap_page(details, page))
                                 continue;
@@ -1441,8 +1434,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
                         /* We should have covered all the swap entry types */
                         WARN_ON_ONCE(1);
                 }
-                if (unlikely(!free_swap_and_cache(entry)))
-                        print_bad_pte(vma, addr, ptent, NULL);
                 pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
         } while (pte++, addr += PAGE_SIZE, addr != end);
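
[Editor's note, not part of the patch: the toy user-space program below is an
illustrative sketch of the control flow that the non-present branch of
zap_pte_range() ends up with after this change — a single if/else-if chain
over the swap entry type, free_swap_and_cache() only in the genuine-swap
branch, and one shared pte_clear_not_present_full() at the bottom. Every
identifier in the sketch is a stand-in invented for illustration, not the
kernel helper itself.]

#include <stdio.h>

/* Stand-ins for the swap entry types handled in the hunks above. */
enum entry_type {
        DEVICE_PRIVATE,
        DEVICE_EXCLUSIVE,
        GENUINE_SWAP,
        MIGRATION,
        UNKNOWN,
};

/* Mirrors the single if/else-if chain the patch creates. */
static void zap_nonpresent_entry(enum entry_type e)
{
        if (e == DEVICE_PRIVATE || e == DEVICE_EXCLUSIVE) {
                /* rss--, page_remove_rmap() for private entries, put_page() */
                puts("device private/exclusive entry");
        } else if (e == GENUINE_SWAP) {
                /* rss[MM_SWAPENTS]--, then free_swap_and_cache() */
                puts("genuine swap entry");
        } else if (e == MIGRATION) {
                /* rss accounting for the target page */
                puts("migration entry");
        } else {
                /* all entry types should be covered: WARN_ON_ONCE(1) */
                puts("unexpected entry type");
        }
        /* Shared tail: the pte is cleared exactly once, after the chain. */
        puts("  -> pte_clear_not_present_full()");
}

int main(void)
{
        for (int i = DEVICE_PRIVATE; i <= UNKNOWN; i++)
                zap_nonpresent_entry((enum entry_type)i);
        return 0;
}

Compiling and running the sketch just prints which branch each entry type
takes; the point is the shape of the dispatch, which matches the post-patch
code reconstructed from the context and "+" lines in the hunks above.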