From patchwork Thu Sep 2 21:56:40 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12473077
Date: Thu, 02 Sep 2021 14:56:40 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: aarcange@redhat.com, akpm@linux-foundation.org, borntraeger@de.ibm.com,
 dan.carpenter@oracle.com, gerald.schaefer@linux.ibm.com, gor@linux.ibm.com,
 hca@linux.ibm.com, hughd@google.com, kirill.shutemov@linux.intel.com,
 linux-mm@kvack.org, mgorman@suse.de, mhocko@suse.com,
 mm-commits@vger.kernel.org, pbonzini@redhat.com, shy828301@gmail.com,
 torvalds@linux-foundation.org, ying.huang@intel.com, ziy@nvidia.com
Subject: [patch 125/212] mm,do_huge_pmd_numa_page: remove unnecessary TLB
 flushing code
Message-ID: <20210902215640.JIDonco3t%akpm@linux-foundation.org>
In-Reply-To: <20210902144820.78957dff93d7bea620d55a89@linux-foundation.org>
User-Agent: s-nail v14.8.16
From: Huang Ying <ying.huang@intel.com>
Subject: mm,do_huge_pmd_numa_page: remove unnecessary TLB flushing code

Before commit c5b5a3dd2c1f ("mm: thp: refactor NUMA fault handling"), TLB
flushing was done in do_huge_pmd_numa_page() itself via
flush_tlb_range().  After that commit, the flushing is done in
migrate_pages() instead, via the following call path:

  do_huge_pmd_numa_page
    migrate_misplaced_page
      migrate_pages

The TLB flushing code in do_huge_pmd_numa_page() has therefore become
unnecessary, so delete it to simplify the code.  This is pure code
cleanup; there is no visible performance difference.

The mmu_notifier_invalidate_range() call in do_huge_pmd_numa_page() is
deleted too, because migrate_pages() also takes care of that when the
CPU TLB is flushed.

Link: https://lkml.kernel.org/r/20210720065529.716031-1-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/huge_memory.c | 26 --------------------------
 1 file changed, 26 deletions(-)

--- a/mm/huge_memory.c~mmdo_huge_pmd_numa_page-remove-unnecessary-tlb-flushing-code
+++ a/mm/huge_memory.c
@@ -1440,32 +1440,6 @@ vm_fault_t do_huge_pmd_numa_page(struct
 		goto out;
 	}
 
-	/*
-	 * Since we took the NUMA fault, we must have observed the !accessible
-	 * bit. Make sure all other CPUs agree with that, to avoid them
-	 * modifying the page we're about to migrate.
-	 *
-	 * Must be done under PTL such that we'll observe the relevant
-	 * inc_tlb_flush_pending().
-	 *
-	 * We are not sure a pending tlb flush here is for a huge page
-	 * mapping or not. Hence use the tlb range variant
-	 */
-	if (mm_tlb_flush_pending(vma->vm_mm)) {
-		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
-		/*
-		 * change_huge_pmd() released the pmd lock before
-		 * invalidating the secondary MMUs sharing the primary
-		 * MMU pagetables (with ->invalidate_range()). The
-		 * mmu_notifier_invalidate_range_end() (which
-		 * internally calls ->invalidate_range()) in
-		 * change_pmd_range() will run after us, so we can't
-		 * rely on it here and we need an explicit invalidate.
-		 */
-		mmu_notifier_invalidate_range(vma->vm_mm, haddr,
-					      haddr + HPAGE_PMD_SIZE);
-	}
-
 	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
 	page = vm_normal_page_pmd(vma, haddr, pmd);
 	if (!page)
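
[ Illustration, not part of the patch: a simplified sketch of where the
  flush now happens on the migration path, based on the 5.14-era
  sources.  The two innermost frames are this note's assumption about
  where the work lands; the patch itself only names migrate_pages():

	do_huge_pmd_numa_page()
	  migrate_misplaced_page()
	    migrate_pages()
	      try_to_migrate()            /* rmap walk that unmaps the THP */
	        set_pmd_migration_entry() /* clears the huge PMD and
	                                   * flushes the TLB; the walk runs
	                                   * under the mmu-notifier range
	                                   * invalidation, which covers the
	                                   * secondary MMUs */

  Once the PMD has been cleared and flushed there, no CPU can hold a
  stale translation for the page being migrated, which is the guarantee
  the deleted flush_tlb_range()/mmu_notifier_invalidate_range() pair
  used to provide. ]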