From patchwork Mon Mar 29 18:33:07 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12170759
From: Yang Shi
To: mgorman@suse.de, kirill.shutemov@linux.intel.com, ziy@nvidia.com,
    mhocko@suse.com, ying.huang@intel.com, hughd@google.com,
    hca@linux.ibm.com, gor@linux.ibm.com, borntraeger@de.ibm.com,
    akpm@linux-foundation.org
Cc: shy828301@gmail.com, linux-mm@kvack.org, linux-s390@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 1/6] mm: memory: add orig_pmd to struct vm_fault
Date: Mon, 29 Mar 2021 11:33:07 -0700
Message-Id: <20210329183312.178266-2-shy828301@gmail.com>
In-Reply-To: <20210329183312.178266-1-shy828301@gmail.com>
References: <20210329183312.178266-1-shy828301@gmail.com>

Add orig_pmd to struct vm_fault so the "orig_pmd" parameter currently
passed around the huge page fault paths can be dropped, just as the PTE
fault paths already carry orig_pte in struct vm_fault.
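For orientation, a condensed sketch of the resulting pattern (simplified
and not buildable on its own; the helper name below is made up, and the
authoritative changes are the hunks that follow): the fault dispatcher
snapshots the PMD into vmf->orig_pmd once, and each huge-PMD handler
revalidates that snapshot under the page table lock instead of taking it
as an extra parameter.

    static vm_fault_t huge_pmd_fault_sketch(struct vm_fault *vmf)
    {
        /* snapshot taken once at fault time by the dispatcher */
        vmf->orig_pmd = *vmf->pmd;
        barrier();

        /* each handler revalidates the snapshot under the ptl */
        vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
        if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd))) {
            /* PMD changed under us; let the fault be retried */
            spin_unlock(vmf->ptl);
            return 0;
        }
        /* ... operate on vmf->orig_pmd ... */
        spin_unlock(vmf->ptl);
        return 0;
    }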
Signed-off-by: Yang Shi
---
 include/linux/huge_mm.h |  9 ++++-----
 include/linux/mm.h      |  1 +
 mm/huge_memory.c        |  9 ++++++---
 mm/memory.c             | 26 +++++++++++++-------------
 4 files changed, 24 insertions(+), 21 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index ba973efcd369..5650db25a49d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -11,7 +11,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
                   pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
                   struct vm_area_struct *vma);
-void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd);
+void huge_pmd_set_accessed(struct vm_fault *vmf);
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
                   pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
                   struct vm_area_struct *vma);
@@ -24,7 +24,7 @@ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 }
 #endif
 
-vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);
+vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf);
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
                                    unsigned long addr, pmd_t *pmd,
                                    unsigned int flags);
@@ -286,7 +286,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
         pud_t *pud, int flags, struct dev_pagemap **pgmap);
 
-vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd);
+vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
 extern struct page *huge_zero_page;
 
@@ -432,8 +432,7 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
     return NULL;
 }
 
-static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf,
-                                               pmd_t orig_pmd)
+static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
     return 0;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8ba434287387..899f55d46fba 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -528,6 +528,7 @@ struct vm_fault {
                      * the 'address'
                      */
     pte_t orig_pte;         /* Value of PTE at the time of fault */
+    pmd_t orig_pmd;         /* Value of PMD at the time of fault */
 
     struct page *cow_page;  /* Page handler may use for COW fault */
     struct page *page;      /* ->fault handlers should return a
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ae907a9c2050..53f3843ce72a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1252,11 +1252,12 @@ void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 }
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd)
+void huge_pmd_set_accessed(struct vm_fault *vmf)
 {
     pmd_t entry;
     unsigned long haddr;
     bool write = vmf->flags & FAULT_FLAG_WRITE;
+    pmd_t orig_pmd = vmf->orig_pmd;
 
     vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
     if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
@@ -1273,11 +1274,12 @@ void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd)
     spin_unlock(vmf->ptl);
 }
 
-vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
+vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 {
     struct vm_area_struct *vma = vmf->vma;
     struct page *page;
     unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+    pmd_t orig_pmd = vmf->orig_pmd;
 
     vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
     VM_BUG_ON_VMA(!vma->anon_vma, vma);
@@ -1413,9 +1415,10 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 }
 
 /* NUMA hinting page fault entry point for trans huge pmds */
-vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
+vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
     struct vm_area_struct *vma = vmf->vma;
+    pmd_t pmd = vmf->orig_pmd;
     struct anon_vma *anon_vma = NULL;
     struct page *page;
     unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
diff --git a/mm/memory.c b/mm/memory.c
index 5efa07fb6cdc..33be5811ac65 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4193,12 +4193,12 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
 }
 
 /* `inline' is required to avoid gcc 4.1.2 build error */
-static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
+static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
 {
     if (vma_is_anonymous(vmf->vma)) {
-        if (userfaultfd_huge_pmd_wp(vmf->vma, orig_pmd))
+        if (userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd))
             return handle_userfault(vmf, VM_UFFD_WP);
-        return do_huge_pmd_wp_page(vmf, orig_pmd);
+        return do_huge_pmd_wp_page(vmf);
     }
     if (vmf->vma->vm_ops->huge_fault) {
         vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
@@ -4425,26 +4425,26 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
         if (!(ret & VM_FAULT_FALLBACK))
             return ret;
     } else {
-        pmd_t orig_pmd = *vmf.pmd;
+        vmf.orig_pmd = *vmf.pmd;
 
         barrier();
-        if (unlikely(is_swap_pmd(orig_pmd))) {
+        if (unlikely(is_swap_pmd(vmf.orig_pmd))) {
             VM_BUG_ON(thp_migration_supported() &&
-                      !is_pmd_migration_entry(orig_pmd));
-            if (is_pmd_migration_entry(orig_pmd))
+                      !is_pmd_migration_entry(vmf.orig_pmd));
+            if (is_pmd_migration_entry(vmf.orig_pmd))
                 pmd_migration_entry_wait(mm, vmf.pmd);
             return 0;
         }
-        if (pmd_trans_huge(orig_pmd) || pmd_devmap(orig_pmd)) {
-            if (pmd_protnone(orig_pmd) && vma_is_accessible(vma))
-                return do_huge_pmd_numa_page(&vmf, orig_pmd);
+        if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) {
+            if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
+                return do_huge_pmd_numa_page(&vmf);
 
-            if (dirty && !pmd_write(orig_pmd)) {
-                ret = wp_huge_pmd(&vmf, orig_pmd);
+            if (dirty && !pmd_write(vmf.orig_pmd)) {
+                ret = wp_huge_pmd(&vmf);
                 if (!(ret & VM_FAULT_FALLBACK))
                     return ret;
             } else {
-                huge_pmd_set_accessed(&vmf, orig_pmd);
+                huge_pmd_set_accessed(&vmf);
                 return 0;
             }
         }
From patchwork Mon Mar 29 18:33:08 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12170761
From: Yang Shi
Subject: [PATCH 2/6] mm: memory: make numa_migrate_prep() non-static
Date: Mon, 29 Mar 2021 11:33:08 -0700
Message-Id: <20210329183312.178266-3-shy828301@gmail.com>
In-Reply-To: <20210329183312.178266-1-shy828301@gmail.com>

numa_migrate_prep() will also be used by the huge PMD NUMA fault path in
a following patch, so make it non-static.

Signed-off-by: Yang Shi
---
 mm/internal.h | 3 +++
 mm/memory.c   | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 1432feec62df..5ac525f364e6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -618,4 +618,7 @@ struct migration_target_control {
     gfp_t gfp_mask;
 };
 
+int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
+                      unsigned long addr, int page_nid, int *flags);
+
 #endif  /* __MM_INTERNAL_H */
diff --git a/mm/memory.c b/mm/memory.c
index 33be5811ac65..003bbf3187d4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4078,9 +4078,8 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
     return ret;
 }
 
-static int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
-                             unsigned long addr, int page_nid,
-                             int *flags)
+int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
+                      unsigned long addr, int page_nid, int *flags)
 {
     get_page(page);
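The expected caller-side pattern, once numa_migrate_prep() is visible
outside mm/memory.c, looks roughly like this (illustrative only; it
mirrors the huge-PMD usage added later in this series, and relies on the
fact that numa_migrate_prep() takes its own reference on the page via
get_page()):

    int flags = 0;
    int page_nid = page_to_nid(page);
    int target_nid;

    target_nid = numa_migrate_prep(page, vma, haddr, page_nid, &flags);
    if (target_nid == NUMA_NO_NODE) {
        /* drop the reference numa_migrate_prep() took */
        put_page(page);
        return 0;
    }
    /* otherwise go on to migrate the page to target_nid */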
From patchwork Mon Mar 29 18:33:09 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12170765
From: Yang Shi
Subject: [PATCH 3/6] mm: migrate: teach migrate_misplaced_page() about THP
Date: Mon, 29 Mar 2021 11:33:09 -0700
Message-Id: <20210329183312.178266-4-shy828301@gmail.com>
In-Reply-To: <20210329183312.178266-1-shy828301@gmail.com>

In a following patch migrate_misplaced_page() will be used to migrate
THP for NUMA faults too, so prepare it to deal with THP by adding a
"compound" parameter.

Signed-off-by: Yang Shi
---
 include/linux/migrate.h | 6 ++++--
 mm/memory.c             | 2 +-
 mm/migrate.c            | 2 +-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 3a389633b68f..6abd34986cc5 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -102,14 +102,16 @@ static inline void __ClearPageMovable(struct page *page)
 #ifdef CONFIG_NUMA_BALANCING
 extern bool pmd_trans_migrating(pmd_t pmd);
 extern int migrate_misplaced_page(struct page *page,
-                                  struct vm_area_struct *vma, int node);
+                                  struct vm_area_struct *vma, int node,
+                                  bool compound);
 #else
 static inline bool pmd_trans_migrating(pmd_t pmd)
 {
     return false;
 }
 static inline int migrate_misplaced_page(struct page *page,
-                                         struct vm_area_struct *vma, int node)
+                                         struct vm_area_struct *vma, int node,
+                                         bool compound)
 {
     return -EAGAIN; /* can't migrate now */
 }
diff --git a/mm/memory.c b/mm/memory.c
index 003bbf3187d4..7fed578bdc31 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4169,7 +4169,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
     }
 
     /* Migrate to the requested node */
-    migrated = migrate_misplaced_page(page, vma, target_nid);
+    migrated = migrate_misplaced_page(page, vma, target_nid, false);
     if (migrated) {
         page_nid = target_nid;
         flags |= TNF_MIGRATED;
diff --git a/mm/migrate.c b/mm/migrate.c
index 62b81d5257aa..9c4ae5132919 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2127,7 +2127,7 @@ static inline bool is_shared_exec_page(struct vm_area_struct *vma,
  * the page that will be dropped by this function before returning.
  */
 int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
-                           int node)
+                           int node, bool compound)
 {
     pg_data_t *pgdat = NODE_DATA(node);
     int isolated;
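The two call sites end up looking like this (illustrative; the first is
changed by this patch, the second is added by the next patch in the
series):

    /* do_numa_page(): base (PTE-mapped) page */
    migrated = migrate_misplaced_page(page, vma, target_nid, false);

    /* do_huge_pmd_numa_page(): PMD-mapped THP */
    migrated = migrate_misplaced_page(page, vma, target_nid, true);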
From patchwork Mon Mar 29 18:33:10 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12170767
From: Yang Shi
Subject: [PATCH 4/6] mm: thp: refactor NUMA fault handling
Date: Mon, 29 Mar 2021 11:33:10 -0700
Message-Id: <20210329183312.178266-5-shy828301@gmail.com>
In-Reply-To: <20210329183312.178266-1-shy828301@gmail.com>

When THP NUMA fault support was added, THP migration was not supported
yet, so an ad hoc THP migration path was implemented inside the NUMA
fault handler. THP migration has been supported since v4.14, so it no
longer makes much sense to keep a second THP migration implementation
rather than using the generic migration code.

This patch reworks the NUMA fault handling to use the generic migration
implementation to migrate the misplaced page. There is no functional
change.

After the refactor, the flow of NUMA fault handling looks just like its
PTE counterpart:
  Acquire ptl
  Restore the PMD
  Prepare for migration (elevate the page refcount)
  Release ptl
  Isolate the page from the LRU and elevate the page refcount
  Migrate the misplaced THP

In the old code the anon_vma lock was needed to serialize THP migration
against THP split. Since then the THP code has been reworked a lot, and
the anon_vma lock no longer appears to be required to avoid the race;
the page refcount elevation taken while holding ptl should prevent the
THP from being split.
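In code, the rewritten handler boils down to the following condensed
view (simplified; the authoritative version is the mm/huge_memory.c hunk
below):

    vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
    if (unlikely(!pmd_same(pmd, *vmf->pmd))) {
        spin_unlock(vmf->ptl);
        goto out;
    }

    /* Restore the PMD */
    pmd = pmd_modify(pmd, vma->vm_page_prot);
    pmd = pmd_mkyoung(pmd);
    if (was_writable)
        pmd = pmd_mkwrite(pmd);
    set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd);

    /* Prepare for migration: numa_migrate_prep() elevates the refcount */
    page = vm_normal_page_pmd(vma, haddr, pmd);
    target_nid = numa_migrate_prep(page, vma, haddr, page_to_nid(page), &flags);
    spin_unlock(vmf->ptl);

    /* Isolate from the LRU and migrate through the generic path */
    migrated = migrate_misplaced_page(page, vma, target_nid, true);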
Signed-off-by: Yang Shi
---
 include/linux/migrate.h |  23 ------
 mm/huge_memory.c        | 132 ++++++++----------------------
 mm/migrate.c            | 173 ++++++----------------------------------
 3 files changed, 57 insertions(+), 271 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 6abd34986cc5..6c8640e9af4f 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -100,15 +100,10 @@ static inline void __ClearPageMovable(struct page *page)
 #endif
 
 #ifdef CONFIG_NUMA_BALANCING
-extern bool pmd_trans_migrating(pmd_t pmd);
 extern int migrate_misplaced_page(struct page *page,
                                   struct vm_area_struct *vma, int node,
                                   bool compound);
 #else
-static inline bool pmd_trans_migrating(pmd_t pmd)
-{
-    return false;
-}
 static inline int migrate_misplaced_page(struct page *page,
                                          struct vm_area_struct *vma, int node,
                                          bool compound)
@@ -117,24 +112,6 @@ static inline int migrate_misplaced_page(struct page *page,
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
-#if defined(CONFIG_NUMA_BALANCING) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-extern int migrate_misplaced_transhuge_page(struct mm_struct *mm,
-            struct vm_area_struct *vma,
-            pmd_t *pmd, pmd_t entry,
-            unsigned long address,
-            struct page *page, int node);
-#else
-static inline int migrate_misplaced_transhuge_page(struct mm_struct *mm,
-            struct vm_area_struct *vma,
-            pmd_t *pmd, pmd_t entry,
-            unsigned long address,
-            struct page *page, int node)
-{
-    return -EAGAIN;
-}
-#endif /* CONFIG_NUMA_BALANCING && CONFIG_TRANSPARENT_HUGEPAGE*/
-
-
 #ifdef CONFIG_MIGRATION
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 53f3843ce72a..157c63b0fd95 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1419,94 +1419,20 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
     struct vm_area_struct *vma = vmf->vma;
     pmd_t pmd = vmf->orig_pmd;
-    struct anon_vma *anon_vma = NULL;
     struct page *page;
     unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
-    int page_nid = NUMA_NO_NODE, this_nid = numa_node_id();
+    int page_nid = NUMA_NO_NODE;
     int target_nid, last_cpupid = -1;
-    bool page_locked;
     bool migrated = false;
-    bool was_writable;
+    bool was_writable = pmd_savedwrite(pmd);
     int flags = 0;
 
     vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
-    if (unlikely(!pmd_same(pmd, *vmf->pmd)))
-        goto out_unlock;
-
-    /*
-     * If there are potential migrations, wait for completion and retry
-     * without disrupting NUMA hinting information. Do not relock and
-     * check_same as the page may no longer be mapped.
-     */
-    if (unlikely(pmd_trans_migrating(*vmf->pmd))) {
-        page = pmd_page(*vmf->pmd);
-        if (!get_page_unless_zero(page))
-            goto out_unlock;
-        spin_unlock(vmf->ptl);
-        put_and_wait_on_page_locked(page, TASK_UNINTERRUPTIBLE);
-        goto out;
-    }
-
-    page = pmd_page(pmd);
-    BUG_ON(is_huge_zero_page(page));
-    page_nid = page_to_nid(page);
-    last_cpupid = page_cpupid_last(page);
-    count_vm_numa_event(NUMA_HINT_FAULTS);
-    if (page_nid == this_nid) {
-        count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
-        flags |= TNF_FAULT_LOCAL;
-    }
-
-    /* See similar comment in do_numa_page for explanation */
-    if (!pmd_savedwrite(pmd))
-        flags |= TNF_NO_GROUP;
-
-    /*
-     * Acquire the page lock to serialise THP migrations but avoid dropping
-     * page_table_lock if at all possible
-     */
-    page_locked = trylock_page(page);
-    target_nid = mpol_misplaced(page, vma, haddr);
-    if (target_nid == NUMA_NO_NODE) {
-        /* If the page was locked, there are no parallel migrations */
-        if (page_locked)
-            goto clear_pmdnuma;
-    }
-
-    /* Migration could have started since the pmd_trans_migrating check */
-    if (!page_locked) {
-        page_nid = NUMA_NO_NODE;
-        if (!get_page_unless_zero(page))
-            goto out_unlock;
+    if (unlikely(!pmd_same(pmd, *vmf->pmd))) {
         spin_unlock(vmf->ptl);
-        put_and_wait_on_page_locked(page, TASK_UNINTERRUPTIBLE);
         goto out;
     }
 
-    /*
-     * Page is misplaced. Page lock serialises migrations. Acquire anon_vma
-     * to serialises splits
-     */
-    get_page(page);
-    spin_unlock(vmf->ptl);
-    anon_vma = page_lock_anon_vma_read(page);
-
-    /* Confirm the PMD did not change while page_table_lock was released */
-    spin_lock(vmf->ptl);
-    if (unlikely(!pmd_same(pmd, *vmf->pmd))) {
-        unlock_page(page);
-        put_page(page);
-        page_nid = NUMA_NO_NODE;
-        goto out_unlock;
-    }
-
-    /* Bail if we fail to protect against THP splits for any reason */
-    if (unlikely(!anon_vma)) {
-        put_page(page);
-        page_nid = NUMA_NO_NODE;
-        goto clear_pmdnuma;
-    }
-
     /*
      * Since we took the NUMA fault, we must have observed the !accessible
      * bit. Make sure all other CPUs agree with that, to avoid them
@@ -1533,38 +1459,44 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
                           haddr + HPAGE_PMD_SIZE);
     }
 
-    /*
-     * Migrate the THP to the requested node, returns with page unlocked
-     * and access rights restored.
-     */
-    spin_unlock(vmf->ptl);
-
-    migrated = migrate_misplaced_transhuge_page(vma->vm_mm, vma,
-                vmf->pmd, pmd, vmf->address, page, target_nid);
-    if (migrated) {
-        flags |= TNF_MIGRATED;
-        page_nid = target_nid;
-    } else
-        flags |= TNF_MIGRATE_FAIL;
-
-    goto out;
-clear_pmdnuma:
-    BUG_ON(!PageLocked(page));
-    was_writable = pmd_savedwrite(pmd);
+    /* Restore the PMD */
     pmd = pmd_modify(pmd, vma->vm_page_prot);
     pmd = pmd_mkyoung(pmd);
     if (was_writable)
         pmd = pmd_mkwrite(pmd);
     set_pmd_at(vma->vm_mm, haddr, vmf->pmd, pmd);
     update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
-    unlock_page(page);
-out_unlock:
+
+    page = vm_normal_page_pmd(vma, haddr, pmd);
+    if (!page) {
+        spin_unlock(vmf->ptl);
+        goto out;
+    }
+
+    /* See similar comment in do_numa_page for explanation */
+    if (!was_writable)
+        flags |= TNF_NO_GROUP;
+
+    page_nid = page_to_nid(page);
+    last_cpupid = page_cpupid_last(page);
+    target_nid = numa_migrate_prep(page, vma, haddr, page_nid,
+                                   &flags);
+
     spin_unlock(vmf->ptl);
-out:
-    if (anon_vma)
-        page_unlock_anon_vma_read(anon_vma);
+    if (target_nid == NUMA_NO_NODE) {
+        put_page(page);
+        goto out;
+    }
+
+    migrated = migrate_misplaced_page(page, vma, target_nid, true);
+    if (migrated) {
+        flags |= TNF_MIGRATED;
+        page_nid = target_nid;
+    } else
+        flags |= TNF_MIGRATE_FAIL;
 
+out:
     if (page_nid != NUMA_NO_NODE)
         task_numa_fault(last_cpupid, page_nid, HPAGE_PMD_NR,
                         flags);
diff --git a/mm/migrate.c b/mm/migrate.c
index 9c4ae5132919..86325c750c14 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2066,6 +2066,23 @@ static struct page *alloc_misplaced_dst_page(struct page *page,
     return newpage;
 }
 
+static struct page *alloc_misplaced_dst_page_thp(struct page *page,
+                                                 unsigned long data)
+{
+    int nid = (int) data;
+    struct page *newpage;
+
+    newpage = alloc_pages_node(nid, (GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
+                               HPAGE_PMD_ORDER);
+    if (!newpage)
+        goto out;
+
+    prep_transhuge_page(newpage);
+
+out:
+    return newpage;
+}
+
 static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
     int page_lru;
@@ -2104,12 +2121,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
     return 1;
 }
 
-bool pmd_trans_migrating(pmd_t pmd)
-{
-    struct page *page = pmd_page(pmd);
-    return PageLocked(page);
-}
-
 static inline bool is_shared_exec_page(struct vm_area_struct *vma,
                                        struct page *page)
 {
@@ -2133,6 +2144,12 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
     int isolated;
     int nr_remaining;
     LIST_HEAD(migratepages);
+    new_page_t *new;
+
+    if (compound)
+        new = alloc_misplaced_dst_page_thp;
+    else
+        new = alloc_misplaced_dst_page;
 
     /*
      * Don't migrate file pages that are mapped in multiple processes
@@ -2153,9 +2170,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
         goto out;
 
     list_add(&page->lru, &migratepages);
-    nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
-                                 NULL, node, MIGRATE_ASYNC,
-                                 MR_NUMA_MISPLACED);
+    nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
+                                 MIGRATE_ASYNC, MR_NUMA_MISPLACED);
     if (nr_remaining) {
         if (!list_empty(&migratepages)) {
             list_del(&page->lru);
@@ -2174,145 +2190,6 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
     return 0;
 }
 #endif /* CONFIG_NUMA_BALANCING */
-
-#if defined(CONFIG_NUMA_BALANCING) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-/*
- * Migrates a THP to a given target node. page must be locked and is unlocked
- * before returning.
- */
-int migrate_misplaced_transhuge_page(struct mm_struct *mm,
-                struct vm_area_struct *vma,
-                pmd_t *pmd, pmd_t entry,
-                unsigned long address,
-                struct page *page, int node)
-{
-    spinlock_t *ptl;
-    pg_data_t *pgdat = NODE_DATA(node);
-    int isolated = 0;
-    struct page *new_page = NULL;
-    int page_lru = page_is_file_lru(page);
-    unsigned long start = address & HPAGE_PMD_MASK;
-
-    if (is_shared_exec_page(vma, page))
-        goto out;
-
-    new_page = alloc_pages_node(node,
-        (GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
-        HPAGE_PMD_ORDER);
-    if (!new_page)
-        goto out_fail;
-    prep_transhuge_page(new_page);
-
-    isolated = numamigrate_isolate_page(pgdat, page);
-    if (!isolated) {
-        put_page(new_page);
-        goto out_fail;
-    }
-
-    /* Prepare a page as a migration target */
-    __SetPageLocked(new_page);
-    if (PageSwapBacked(page))
-        __SetPageSwapBacked(new_page);
-
-    /* anon mapping, we can simply copy page->mapping to the new page: */
-    new_page->mapping = page->mapping;
-    new_page->index = page->index;
-    /* flush the cache before copying using the kernel virtual address */
-    flush_cache_range(vma, start, start + HPAGE_PMD_SIZE);
-    migrate_page_copy(new_page, page);
-    WARN_ON(PageLRU(new_page));
-
-    /* Recheck the target PMD */
-    ptl = pmd_lock(mm, pmd);
-    if (unlikely(!pmd_same(*pmd, entry) || !page_ref_freeze(page, 2))) {
-        spin_unlock(ptl);
-
-        /* Reverse changes made by migrate_page_copy() */
-        if (TestClearPageActive(new_page))
-            SetPageActive(page);
-        if (TestClearPageUnevictable(new_page))
-            SetPageUnevictable(page);
-
-        unlock_page(new_page);
-        put_page(new_page);     /* Free it */
-
-        /* Retake the callers reference and putback on LRU */
-        get_page(page);
-        putback_lru_page(page);
-        mod_node_page_state(page_pgdat(page),
-             NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR);
-
-        goto out_unlock;
-    }
-
-    entry = mk_huge_pmd(new_page, vma->vm_page_prot);
-    entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
-
-    /*
-     * Overwrite the old entry under pagetable lock and establish
-     * the new PTE. Any parallel GUP will either observe the old
-     * page blocking on the page lock, block on the page table
-     * lock or observe the new page. The SetPageUptodate on the
-     * new page and page_add_new_anon_rmap guarantee the copy is
-     * visible before the pagetable update.
-     */
-    page_add_anon_rmap(new_page, vma, start, true);
-    /*
-     * At this point the pmd is numa/protnone (i.e. non present) and the TLB
-     * has already been flushed globally. So no TLB can be currently
-     * caching this non present pmd mapping. There's no need to clear the
-     * pmd before doing set_pmd_at(), nor to flush the TLB after
-     * set_pmd_at(). Clearing the pmd here would introduce a race
-     * condition against MADV_DONTNEED, because MADV_DONTNEED only holds the
-     * mmap_lock for reading. If the pmd is set to NULL at any given time,
-     * MADV_DONTNEED won't wait on the pmd lock and it'll skip clearing this
-     * pmd.
-     */
-    set_pmd_at(mm, start, pmd, entry);
-    update_mmu_cache_pmd(vma, address, &entry);
-
-    page_ref_unfreeze(page, 2);
-    mlock_migrate_page(new_page, page);
-    page_remove_rmap(page, true);
-    set_page_owner_migrate_reason(new_page, MR_NUMA_MISPLACED);
-
-    spin_unlock(ptl);
-
-    /* Take an "isolate" reference and put new page on the LRU. */
-    get_page(new_page);
-    putback_lru_page(new_page);
-
-    unlock_page(new_page);
-    unlock_page(page);
-    put_page(page);         /* Drop the rmap reference */
-    put_page(page);         /* Drop the LRU isolation reference */
-
-    count_vm_events(PGMIGRATE_SUCCESS, HPAGE_PMD_NR);
-    count_vm_numa_events(NUMA_PAGE_MIGRATE, HPAGE_PMD_NR);
-
-    mod_node_page_state(page_pgdat(page),
-            NR_ISOLATED_ANON + page_lru,
-            -HPAGE_PMD_NR);
-    return isolated;
-
-out_fail:
-    count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR);
-    ptl = pmd_lock(mm, pmd);
-    if (pmd_same(*pmd, entry)) {
-        entry = pmd_modify(entry, vma->vm_page_prot);
-        set_pmd_at(mm, start, pmd, entry);
-        update_mmu_cache_pmd(vma, address, &entry);
-    }
-    spin_unlock(ptl);
-
-out_unlock:
-    unlock_page(page);
-out:
-    put_page(page);
-    return 0;
-}
-#endif /* CONFIG_NUMA_BALANCING */
-
 #endif /* CONFIG_NUMA */
 
 #ifdef CONFIG_DEVICE_PRIVATE
From patchwork Mon Mar 29 18:33:11 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12170763
From: Yang Shi
Subject: [PATCH 5/6] mm: migrate: don't split THP for misplaced NUMA page
Date: Mon, 29 Mar 2021 11:33:11 -0700
Message-Id: <20210329183312.178266-6-shy828301@gmail.com>
In-Reply-To: <20210329183312.178266-1-shy828301@gmail.com>

The old NUMA fault path didn't split the THP when migration failed, for
example due to lack of memory on the target node. The generic THP
migration code, however, does fall back to splitting the THP, so keep
the old behavior for misplaced NUMA page migration by skipping the
split.
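Concretely, migrate_pages() is taught to skip the THP-split fallback
when the migration reason is MR_NUMA_MISPLACED. A minimal sketch of the
intent (simplified; the real change is the small hunk below):

    bool nosplit = (reason == MR_NUMA_MISPLACED);
    ...
    case -ENOSYS:
        /* THP migration is unsupported */
        if (is_thp && !nosplit) {
            if (!try_split_thp(page, &page2, from))
                goto retry;
        }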
Signed-off-by: Yang Shi
---
 mm/migrate.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 86325c750c14..1c0c873375ab 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1444,6 +1444,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
     int swapwrite = current->flags & PF_SWAPWRITE;
     int rc, nr_subpages;
     LIST_HEAD(ret_pages);
+    bool nosplit = (reason == MR_NUMA_MISPLACED);
 
     if (!swapwrite)
         current->flags |= PF_SWAPWRITE;
@@ -1495,7 +1496,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
              */
            case -ENOSYS:
                 /* THP migration is unsupported */
-                if (is_thp) {
+                if (is_thp && !nosplit) {
                     if (!try_split_thp(page, &page2, from)) {
                         nr_thp_split++;
                         goto retry;
From patchwork Mon Mar 29 18:33:12 2021
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 12170757
From: Yang Shi
Subject: [PATCH 6/6] mm: migrate: remove redundant page count check for THP
Date: Mon, 29 Mar 2021 11:33:12 -0700
Message-Id: <20210329183312.178266-7-shy828301@gmail.com>
In-Reply-To: <20210329183312.178266-1-shy828301@gmail.com>

There is no need to keep the redundant page count check for THP in
numamigrate_isolate_page() after switching to the generic migration
code: the check was only there because migrate_misplaced_transhuge_page()
skipped page migration's usual page_count() check, and that function is
now gone.

Signed-off-by: Yang Shi
---
 mm/migrate.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 1c0c873375ab..328f76848d6c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2097,18 +2097,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
     if (isolate_lru_page(page))
         return 0;
 
-    /*
-     * migrate_misplaced_transhuge_page() skips page migration's usual
-     * check on page_count(), so we must do it here, now that the page
-     * has been isolated: a GUP pin, or any other pin, prevents migration.
-     * The expected page count is 3: 1 for page's mapcount and 1 for the
-     * caller's pin and 1 for the reference taken by isolate_lru_page().
-     */
-    if (PageTransHuge(page) && page_count(page) != 3) {
-        putback_lru_page(page);
-        return 0;
-    }
-
     page_lru = page_is_file_lru(page);
     mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_lru,
                         thp_nr_pages(page));