From patchwork Wed Sep  1 20:57:22 2021
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12470321
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, Matthew Wilcox, Yang Shi, Miaohe Lin, Hugh Dickins,
    David Hildenbrand, peterx@redhat.com, Mike Rapoport, Andrea Arcangeli,
    "Kirill A . Shutemov", Jerome Glisse, Alistair Popple
Subject: [PATCH 4/5] mm: Introduce zap_details.zap_flags
Date: Wed, 1 Sep 2021 16:57:22 -0400
Message-Id: <20210901205722.7328-1-peterx@redhat.com>
In-Reply-To: <20210901205622.6935-1-peterx@redhat.com>
References: <20210901205622.6935-1-peterx@redhat.com>

Instead of trying to introduce one variable for every new zap_details
field, let's introduce a flag so that it can start to encode true/false
information.  Let's start to use this flag first to clean up the only
check_mapping variable.
Firstly, the name "check_mapping" implies this is a "boolean", but
actually it stores the mapping inside, just in a way that it won't be
set if we don't want to check the mapping.

To make things clearer, introduce the 1st zap flag
ZAP_FLAG_CHECK_MAPPING, so that we only check against the mapping if
this bit is set.  At the same time, we can rename check_mapping into
zap_mapping and set it always.  While at it, introduce another helper
zap_skip_check_mapping() and use it in zap_pte_range() properly.

Some old comments have been removed in zap_pte_range() because they're
duplicated, and since we now have the ZAP_FLAG_CHECK_MAPPING flag it'll
be very easy to find this information by simply grepping for the flag.

It'll also make life easier when we want to e.g. pass in zap_flags into
callers like unmap_mapping_pages() (instead of adding new booleans
besides the even_cows parameter).

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm.h | 19 ++++++++++++++++++-
 mm/memory.c        | 34 ++++++++++------------------------
 2 files changed, 28 insertions(+), 25 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 69259229f090..fcbc1c4f8e8e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1716,14 +1716,31 @@ static inline bool can_do_mlock(void) { return false; }
 extern int user_shm_lock(size_t, struct ucounts *);
 extern void user_shm_unlock(size_t, struct ucounts *);
 
+/* Whether to check page->mapping when zapping */
+#define  ZAP_FLAG_CHECK_MAPPING		BIT(0)
+
 /*
  * Parameter block passed down to zap_pte_range in exceptional cases.
  */
 struct zap_details {
-	struct address_space *check_mapping;	/* Check page->mapping if set */
+	struct address_space *zap_mapping;
 	struct page *single_page;	/* Locked page to be unmapped */
+	unsigned long zap_flags;
 };
 
+/* Return true if we should skip zapping this page, false otherwise */
+static inline bool
+zap_skip_check_mapping(struct zap_details *details, struct page *page)
+{
+	if (!details || !page)
+		return false;
+
+	if (!(details->zap_flags & ZAP_FLAG_CHECK_MAPPING))
+		return false;
+
+	return details->zap_mapping != page_rmapping(page);
+}
+
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			     pte_t pte);
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,

diff --git a/mm/memory.c b/mm/memory.c
index 3b860f6a51ac..05ccacda4fe9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1333,16 +1333,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			struct page *page;
 
 			page = vm_normal_page(vma, addr, ptent);
-			if (unlikely(details) && page) {
-				/*
-				 * unmap_shared_mapping_pages() wants to
-				 * invalidate cache without truncating:
-				 * unmap shared but keep private pages.
-				 */
-				if (details->check_mapping &&
-				    details->check_mapping != page_rmapping(page))
-					continue;
-			}
+			if (unlikely(zap_skip_check_mapping(details, page)))
+				continue;
 			ptent = ptep_get_and_clear_full(mm, addr, pte,
 							tlb->fullmm);
 			tlb_remove_tlb_entry(tlb, pte, addr);
@@ -1375,17 +1367,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		    is_device_exclusive_entry(entry)) {
 			struct page *page = pfn_swap_entry_to_page(entry);
 
-			if (unlikely(details && details->check_mapping)) {
-				/*
-				 * unmap_shared_mapping_pages() wants to
-				 * invalidate cache without truncating:
-				 * unmap shared but keep private pages.
-				 */
-				if (details->check_mapping !=
-				    page_rmapping(page))
-					continue;
-			}
-
+			if (unlikely(zap_skip_check_mapping(details, page)))
+				continue;
 			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
 			rss[mm_counter(page)]--;
@@ -3369,8 +3352,9 @@ void unmap_mapping_page(struct page *page)
 	first_index = page->index;
 	last_index = page->index + thp_nr_pages(page) - 1;
 
-	details.check_mapping = mapping;
+	details.zap_mapping = mapping;
 	details.single_page = page;
+	details.zap_flags = ZAP_FLAG_CHECK_MAPPING;
 
 	i_mmap_lock_write(mapping);
 	if (unlikely(!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)))
@@ -3395,9 +3379,11 @@ void unmap_mapping_pages(struct address_space *mapping, pgoff_t start,
 		pgoff_t nr, bool even_cows)
 {
 	pgoff_t first_index = start, last_index = start + nr - 1;
-	struct zap_details details = { };
+	struct zap_details details = { .zap_mapping = mapping };
+
+	if (!even_cows)
+		details.zap_flags |= ZAP_FLAG_CHECK_MAPPING;
 
-	details.check_mapping = even_cows ? NULL : mapping;
 	if (last_index < first_index)
 		last_index = ULONG_MAX;