From patchwork Fri Dec 7 05:41:19 2018
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 10717477
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner,
    Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen,
    Naoya Horiguchi, Zi Yan, Daniel Jordan
Subject: [PATCH -V8 19/21] swap: Support PMD swap mapping in common path
Date: Fri, 7 Dec 2018 13:41:19 +0800
Message-Id: <20181207054122.27822-20-ying.huang@intel.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20181207054122.27822-1-ying.huang@intel.com>
References: <20181207054122.27822-1-ying.huang@intel.com>

The original code on these paths is only for PMD migration entries;
revise it to support PMD swap mappings as well.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
Cc: Daniel Jordan
---
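Background for the conversions below: a non-present PMD encodes a
swp_entry_t, which before this series could only be a PMD migration
entry, but may now also be a true swap entry. is_pmd_migration_entry()
checks only the former, while is_swap_pmd() covers both, hence the
switch to the wider predicate at the call sites in this patch. The
sketch below is a standalone model of that relationship; the mock_*
enum and helpers are illustrative stand-ins, not the kernel's real
implementations.

/*
 * Standalone model (not kernel code) of the PMD states this patch
 * distinguishes.  The helpers are simplified stand-ins for the
 * kernel's is_swap_pmd()/is_pmd_migration_entry().
 */
#include <stdbool.h>
#include <stdio.h>

enum mock_pmd_state {
	MOCK_PMD_NONE,		/* empty PMD */
	MOCK_PMD_PRESENT,	/* mapped huge page */
	MOCK_PMD_MIGRATION,	/* non-present: migration entry */
	MOCK_PMD_SWAP,		/* non-present: PMD swap mapping */
};

/* Narrow pre-patch check: only migration entries qualify. */
static bool mock_is_pmd_migration_entry(enum mock_pmd_state s)
{
	return s == MOCK_PMD_MIGRATION;
}

/* Wider check used after this patch: any swap-format PMD qualifies. */
static bool mock_is_swap_pmd(enum mock_pmd_state s)
{
	return s == MOCK_PMD_MIGRATION || s == MOCK_PMD_SWAP;
}

int main(void)
{
	enum mock_pmd_state s = MOCK_PMD_SWAP;	/* PMD swap mapping */

	/* The pre-patch predicate misses a true PMD swap mapping ... */
	printf("is_pmd_migration_entry: %d\n", mock_is_pmd_migration_entry(s));
	/* ... so call sites are switched to the wider is_swap_pmd(). */
	printf("is_swap_pmd:            %d\n", mock_is_swap_pmd(s));
	return 0;
}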
Shutemov" Cc: Andrea Arcangeli Cc: Michal Hocko Cc: Johannes Weiner Cc: Shaohua Li Cc: Hugh Dickins Cc: Minchan Kim Cc: Rik van Riel Cc: Dave Hansen Cc: Naoya Horiguchi Cc: Zi Yan Cc: Daniel Jordan --- fs/proc/task_mmu.c | 12 +++++------- mm/gup.c | 36 ++++++++++++++++++++++++------------ mm/huge_memory.c | 7 ++++--- mm/mempolicy.c | 2 +- 4 files changed, 34 insertions(+), 23 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 39e96a21366e..0e65233f2cc2 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -986,7 +986,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma, pmd = pmd_clear_soft_dirty(pmd); set_pmd_at(vma->vm_mm, addr, pmdp, pmd); - } else if (is_migration_entry(pmd_to_swp_entry(pmd))) { + } else if (is_swap_pmd(pmd)) { pmd = pmd_swp_clear_soft_dirty(pmd); set_pmd_at(vma->vm_mm, addr, pmdp, pmd); } @@ -1316,9 +1316,8 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end, if (pm->show_pfn) frame = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT); - } -#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION - else if (is_swap_pmd(pmd)) { + } else if (IS_ENABLED(CONFIG_HAVE_PMD_SWAP_ENTRY) && + is_swap_pmd(pmd)) { swp_entry_t entry = pmd_to_swp_entry(pmd); unsigned long offset; @@ -1331,10 +1330,9 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end, flags |= PM_SWAP; if (pmd_swp_soft_dirty(pmd)) flags |= PM_SOFT_DIRTY; - VM_BUG_ON(!is_pmd_migration_entry(pmd)); - page = migration_entry_to_page(entry); + if (is_pmd_migration_entry(pmd)) + page = migration_entry_to_page(entry); } -#endif if (page && page_mapcount(page) == 1) flags |= PM_MMAP_EXCLUSIVE; diff --git a/mm/gup.c b/mm/gup.c index 6dd33e16a806..460565825ef0 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -215,6 +215,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, spinlock_t *ptl; struct page *page; struct mm_struct *mm = vma->vm_mm; + swp_entry_t entry; pmd = pmd_offset(pudp, address); /* @@ -242,18 +243,22 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, if (!pmd_present(pmdval)) { if (likely(!(flags & FOLL_MIGRATION))) return no_page_table(vma, flags); - VM_BUG_ON(thp_migration_supported() && - !is_pmd_migration_entry(pmdval)); - if (is_pmd_migration_entry(pmdval)) + entry = pmd_to_swp_entry(pmdval); + if (thp_migration_supported() && is_migration_entry(entry)) { pmd_migration_entry_wait(mm, pmd); - pmdval = READ_ONCE(*pmd); - /* - * MADV_DONTNEED may convert the pmd to null because - * mmap_sem is held in read mode - */ - if (pmd_none(pmdval)) + pmdval = READ_ONCE(*pmd); + /* + * MADV_DONTNEED may convert the pmd to null because + * mmap_sem is held in read mode + */ + if (pmd_none(pmdval)) + return no_page_table(vma, flags); + goto retry; + } + if (IS_ENABLED(CONFIG_THP_SWAP) && !non_swap_entry(entry)) return no_page_table(vma, flags); - goto retry; + WARN_ON(1); + return no_page_table(vma, flags); } if (pmd_devmap(pmdval)) { ptl = pmd_lock(mm, pmd); @@ -275,11 +280,18 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, return no_page_table(vma, flags); } if (unlikely(!pmd_present(*pmd))) { + entry = pmd_to_swp_entry(*pmd); spin_unlock(ptl); if (likely(!(flags & FOLL_MIGRATION))) return no_page_table(vma, flags); - pmd_migration_entry_wait(mm, pmd); - goto retry_locked; + if (thp_migration_supported() && is_migration_entry(entry)) { + pmd_migration_entry_wait(mm, pmd); + goto retry_locked; + } + if (IS_ENABLED(CONFIG_THP_SWAP) && !non_swap_entry(entry)) + return no_page_table(vma, 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5b2eb7871cd7..b75af88c505a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2138,7 +2138,7 @@ static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
 static pmd_t move_soft_dirty_pmd(pmd_t pmd)
 {
 #ifdef CONFIG_MEM_SOFT_DIRTY
-	if (unlikely(is_pmd_migration_entry(pmd)))
+	if (unlikely(is_swap_pmd(pmd)))
 		pmd = pmd_swp_mksoft_dirty(pmd);
 	else if (pmd_present(pmd))
 		pmd = pmd_mksoft_dirty(pmd);
@@ -2222,11 +2222,12 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	preserve_write = prot_numa && pmd_write(*pmd);
 	ret = 1;
 
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+#if defined(CONFIG_ARCH_ENABLE_THP_MIGRATION) || defined(CONFIG_THP_SWAP)
 	if (is_swap_pmd(*pmd)) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
 
-		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
+		VM_BUG_ON(!IS_ENABLED(CONFIG_THP_SWAP) &&
+			  !is_migration_entry(entry));
 		if (is_write_migration_entry(entry)) {
 			pmd_t newpmd;
 			/*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e4f8248822c1..39335bf99169 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -436,7 +436,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 	struct queue_pages *qp = walk->private;
 	unsigned long flags;
 
-	if (unlikely(is_pmd_migration_entry(*pmd))) {
+	if (unlikely(is_swap_pmd(*pmd))) {
 		ret = 1;
 		goto unlock;
 	}
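As a userspace-facing illustration of the pagemap_pmd_range() change
above: once swapped-out PMDs are reported there, the documented
/proc/pid/pagemap ABI (bit 63 present, bit 62 swapped, bit 55
soft-dirty; bits 0-4 swap type and bits 5-54 swap offset when swapped)
is all a reader needs. The minimal reader below uses only that ABI and
nothing specific to this series:

/*
 * Minimal /proc/self/pagemap reader (documented pagemap ABI only).
 * Prints present/swapped/soft-dirty bits for a virtual address, plus
 * swap type and offset when the page is swapped out.
 */
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	unsigned long vaddr = argc > 1 ? strtoul(argv[1], NULL, 0)
				       : (unsigned long)&argc;
	long page_size = sysconf(_SC_PAGESIZE);
	uint64_t ent;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* One 64-bit entry per page, indexed by virtual page number. */
	if (pread(fd, &ent, sizeof(ent),
		  (off_t)(vaddr / page_size) * sizeof(ent)) != sizeof(ent)) {
		perror("pread");
		return 1;
	}
	printf("present=%d swapped=%d soft-dirty=%d\n",
	       (int)(ent >> 63 & 1), (int)(ent >> 62 & 1),
	       (int)(ent >> 55 & 1));
	if (ent >> 62 & 1)	/* bits 0-4: swap type, bits 5-54: offset */
		printf("swap type=%" PRIu64 " offset=%" PRIu64 "\n",
		       ent & 0x1f, (ent >> 5) & ((UINT64_C(1) << 50) - 1));
	close(fd);
	return 0;
}

With this patch, each 4KB slot inside a swapped-out PMD reports the
swap type plus a per-subpage offset, mirroring the frame computation
in the pagemap_pmd_range() hunk above.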