From patchwork Fri Jan 28 04:54:12 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12727936
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Alistair Popple, Andrew Morton, Andrea Arcangeli,
    David Hildenbrand, Matthew Wilcox, John Hubbard, Hugh Dickins,
    Vlastimil Babka, Yang Shi, "Kirill A . Shutemov"
Subject: [PATCH v3 4/4] mm: Rework swap handling of zap_pte_range
Date: Fri, 28 Jan 2022 12:54:12 +0800
Message-Id: <20220128045412.18695-5-peterx@redhat.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220128045412.18695-1-peterx@redhat.com>
References: <20220128045412.18695-1-peterx@redhat.com>
MIME-Version: 1.0

Clean the code up by merging the device private/exclusive swap entry
handling with the rest, then merge the pte clear operation too.

struct page * is declared in multiple places in the function; move the
declaration upward.

free_swap_and_cache() is only useful for the !non_swap_entry() case, so
move it into that branch.

No functional change intended.

Signed-off-by: Peter Xu
---
 mm/memory.c | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ffa8c7dfe9ad..cade96024349 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1361,6 +1361,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_enter_lazy_mmu_mode();
 	do {
 		pte_t ptent = *pte;
+		struct page *page;
+
 		if (pte_none(ptent))
 			continue;
 
@@ -1368,8 +1370,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			break;
 
 		if (pte_present(ptent)) {
-			struct page *page;
-
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(!should_zap_page(details, page)))
 				continue;
@@ -1403,21 +1403,14 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		entry = pte_to_swp_entry(ptent);
 		if (is_device_private_entry(entry) ||
 		    is_device_exclusive_entry(entry)) {
-			struct page *page = pfn_swap_entry_to_page(entry);
-
+			page = pfn_swap_entry_to_page(entry);
 			if (unlikely(!should_zap_page(details, page)))
 				continue;
-			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
 			rss[mm_counter(page)]--;
-
 			if (is_device_private_entry(entry))
 				page_remove_rmap(page, false);
-
 			put_page(page);
-			continue;
-		}
-
-		if (!non_swap_entry(entry)) {
+		} else if (!non_swap_entry(entry)) {
 			/*
 			 * If this is a genuine swap entry, then it must be an
 			 * private anon page.  If the caller wants to skip
@@ -1426,9 +1419,9 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (!should_zap_cows(details))
 				continue;
 			rss[MM_SWAPENTS]--;
+			if (unlikely(!free_swap_and_cache(entry)))
+				print_bad_pte(vma, addr, ptent, NULL);
 		} else if (is_migration_entry(entry)) {
-			struct page *page;
-
 			page = pfn_swap_entry_to_page(entry);
 			if (!should_zap_page(details, page))
 				continue;
@@ -1441,8 +1434,6 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			/* We should have covered all the swap entry types */
 			WARN_ON_ONCE(1);
 		}
-		if (unlikely(!free_swap_and_cache(entry)))
-			print_bad_pte(vma, addr, ptent, NULL);
 		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
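
For readers following the control flow, here is a condensed sketch of the
swap-entry handling in zap_pte_range() after the hunks above are applied.
It is a reading aid derived from the diff, not verbatim kernel code, and
code between the hunks that this patch does not touch is elided.

	entry = pte_to_swp_entry(ptent);
	if (is_device_private_entry(entry) ||
	    is_device_exclusive_entry(entry)) {
		page = pfn_swap_entry_to_page(entry);
		if (unlikely(!should_zap_page(details, page)))
			continue;
		rss[mm_counter(page)]--;
		if (is_device_private_entry(entry))
			page_remove_rmap(page, false);
		put_page(page);
	} else if (!non_swap_entry(entry)) {
		/* Genuine swap entry: the only case free_swap_and_cache() applies to */
		if (!should_zap_cows(details))
			continue;
		rss[MM_SWAPENTS]--;
		if (unlikely(!free_swap_and_cache(entry)))
			print_bad_pte(vma, addr, ptent, NULL);
	} else if (is_migration_entry(entry)) {
		page = pfn_swap_entry_to_page(entry);
		if (!should_zap_page(details, page))
			continue;
		rss[mm_counter(page)]--;
	}
	/* ... other entry-type arms, unchanged by this patch, elided ... */
	else {
		/* We should have covered all the swap entry types */
		WARN_ON_ONCE(1);
	}
	/* Single clear at the end now covers the device private/exclusive case too */
	pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);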