From patchwork Wed Feb 16 09:48:09 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12748346
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, "Kirill A . Shutemov", Matthew Wilcox, Yang Shi,
	Andrea Arcangeli, peterx@redhat.com, John Hubbard, Alistair Popple,
	David Hildenbrand, Vlastimil Babka, Hugh Dickins
Subject: [PATCH v4 3/4] mm: Change zap_details.zap_mapping into even_cows
Date: Wed, 16 Feb 2022 17:48:09 +0800
Message-Id: <20220216094810.60572-4-peterx@redhat.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220216094810.60572-1-peterx@redhat.com>
References: <20220216094810.60572-1-peterx@redhat.com>
MIME-Version: 1.0

Currently we maintain a zap_mapping pointer in zap_details: when it is
set, we only zap pages whose mapping matches the one the caller
specified. But what we actually want is simpler: in some cases we want
to skip zapping private (COWed) pages. See the unmap_mapping_pages()
callers, which could pass in different even_cows values; the other user
is unmap_mapping_folio(), where we always want to skip private pages.
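The intent can be illustrated with a small userspace sketch (mock types
and function names are hypothetical stand-ins, not kernel code): the old
NULL-pointer encoding of "zap COWed pages too" and the new explicit
boolean make the same decision for every caller.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical mock standing in for the kernel's struct address_space. */
struct address_space { int dummy; };

/* Old encoding: a NULL zap_mapping meant "zap COWed pages too". */
static bool should_zap_cows_old(struct address_space *zap_mapping)
{
	return zap_mapping == NULL;
}

/* New encoding: an explicit boolean carries the same intent. */
static bool should_zap_cows_new(bool even_cows)
{
	return even_cows;
}

/*
 * The two encodings agree for any even_cows value a caller could pass:
 * the old callers built the pointer as "even_cows ? NULL : mapping".
 */
static bool encodings_agree(struct address_space *mapping, bool even_cows)
{
	struct address_space *zap_mapping = even_cows ? NULL : mapping;

	return should_zap_cows_old(zap_mapping) == should_zap_cows_new(even_cows);
}
```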
According to Hugh, we used a mapping pointer for historical reasons, as
explained here:

https://lore.kernel.org/lkml/391aa58d-ce84-9d4-d68d-d98a9c533255@google.com/

Quoting partly from Hugh:

  Which raises the question again of why I did not just use a boolean flag
  there originally: aah, I think I've found why.  In those days there was a
  horrible "optimization", for better performance on some benchmark I guess,
  which when you read from /dev/zero into a private mapping, would map the
  zero page there (look up read_zero_pagealigned() and zeromap_page_range()
  if you dare).  So there was another category of page to be skipped along
  with the anon COWs, and I didn't want multiple tests in the zap loop, so
  checking check_mapping against page->mapping did both.  I think nowadays
  you could do it by checking for PageAnon page (or genuine swap entry)
  instead.

This patch replaces the zap_details.zap_mapping pointer with an
even_cows boolean, and checks the page against PageAnon instead.

Suggested-by: Hugh Dickins
Signed-off-by: Peter Xu
Reviewed-by: John Hubbard
---
 mm/memory.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 14d8428ff4db..ffa8c7dfe9ad 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1309,8 +1309,8 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
  * Parameter block passed down to zap_pte_range in exceptional cases.
  */
 struct zap_details {
-	struct address_space *zap_mapping;	/* Check page->mapping if set */
 	struct folio *single_folio;	/* Locked folio to be unmapped */
+	bool even_cows;			/* Zap COWed private pages too? */
 };
 
 /* Whether we should zap all COWed (private) pages too */
@@ -1321,13 +1321,10 @@ static inline bool should_zap_cows(struct zap_details *details)
 		return true;
 
 	/* Or, we zap COWed pages only if the caller wants to */
-	return !details->zap_mapping;
+	return details->even_cows;
 }
 
-/*
- * We set details->zap_mapping when we want to unmap shared but keep private
- * pages. Return true if we should zap this page, false otherwise.
- */
+/* Decides whether we should zap this page with the page pointer specified */
 static inline bool should_zap_page(struct zap_details *details, struct page *page)
 {
 	/* If we can make a decision without *page.. */
@@ -1338,7 +1335,8 @@ static inline bool should_zap_page(struct zap_details *details, struct page *pag
 	if (!page)
 		return true;
 
-	return details->zap_mapping == page_rmapping(page);
+	/* Otherwise we should only zap non-anon pages */
+	return !PageAnon(page);
 }
 
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
@@ -3403,7 +3401,7 @@ void unmap_mapping_folio(struct folio *folio)
 	first_index = folio->index;
 	last_index = folio->index + folio_nr_pages(folio) - 1;
 
-	details.zap_mapping = mapping;
+	details.even_cows = false;
 	details.single_folio = folio;
 
 	i_mmap_lock_write(mapping);
@@ -3432,7 +3430,7 @@ void unmap_mapping_pages(struct address_space *mapping, pgoff_t start, pgoff_t
 	first_index = start;
 	last_index = start + nr - 1;
 
-	details.zap_mapping = even_cows ? NULL : mapping;
+	details.even_cows = even_cows;
 
 	if (last_index < first_index)
 		last_index = ULONG_MAX;
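The zap decision after this change can be sketched as a userspace truth
table (all names below are illustrative stand-ins for the kernel's
should_zap_cows()/should_zap_page() and PageAnon(); this is not the
kernel code itself):

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of zap_details reduced to the flag this patch introduces. */
struct zap_details_sketch {
	bool even_cows;
};

static bool should_zap_cows_sketch(struct zap_details_sketch *details)
{
	/* No details means "zap everything", including COWed pages. */
	if (!details)
		return true;
	/* Otherwise zap COWed pages only if the caller asked for it. */
	return details->even_cows;
}

static bool should_zap_page_sketch(struct zap_details_sketch *details,
				   bool have_page, bool page_is_anon)
{
	/* If we can decide without looking at the page... */
	if (should_zap_cows_sketch(details))
		return true;
	/* ...or the entry carries no page at all, always zap. */
	if (!have_page)
		return true;
	/* Otherwise we should only zap non-anon (file-backed) pages. */
	return !page_is_anon;
}
```

So with even_cows == false, anonymous (COWed) pages are the only ones
kept; every other case zaps, matching the old zap_mapping behaviour for
the callers shown in the diff.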