From patchwork Wed May 23 08:26:22 2018
X-Patchwork-Submitter: "Huang, Ying" <ying.huang@intel.com>
X-Patchwork-Id: 10420625
From: "Huang, Ying" <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner,
    Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen,
    Naoya Horiguchi, Zi Yan
Subject: [PATCH -mm -V3 18/21] mm, THP, swap: Support PMD swap mapping in mincore()
Date: Wed, 23 May 2018 16:26:22 +0800
Message-Id: <20180523082625.6897-19-ying.huang@intel.com>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20180523082625.6897-1-ying.huang@intel.com>
References: <20180523082625.6897-1-ying.huang@intel.com>

From: Huang Ying

During mincore(), for a PMD swap mapping, the swap cache will be looked
up.  If the resulting page isn't a compound page, the PMD swap mapping
will be split, and processing falls back to the PTE swap mapping path.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
---
 mm/mincore.c | 37 +++++++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/mm/mincore.c b/mm/mincore.c
index a66f2052c7b1..897dd2c187e8 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -48,7 +48,8 @@ static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
  * and is up to date; i.e. that no page-in operation would be required
  * at this time if an application were to map and access this page.
  */
-static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
+static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff,
+				  bool *compound)
 {
 	unsigned char present = 0;
 	struct page *page;
@@ -86,6 +87,8 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff)
 #endif
 	if (page) {
 		present = PageUptodate(page);
+		if (compound)
+			*compound = PageCompound(page);
 		put_page(page);
 	}
 
@@ -103,7 +106,8 @@ static int __mincore_unmapped_range(unsigned long addr, unsigned long end,
 
 		pgoff = linear_page_index(vma, addr);
 		for (i = 0; i < nr; i++, pgoff++)
-			vec[i] = mincore_page(vma->vm_file->f_mapping, pgoff);
+			vec[i] = mincore_page(vma->vm_file->f_mapping,
+					      pgoff, NULL);
 	} else {
 		for (i = 0; i < nr; i++)
 			vec[i] = 0;
@@ -127,14 +131,36 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pte_t *ptep;
 	unsigned char *vec = walk->private;
 	int nr = (end - addr) >> PAGE_SHIFT;
+	swp_entry_t entry;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
-		memset(vec, 1, nr);
+		unsigned char val = 1;
+		bool compound;
+
+		if (thp_swap_supported() && is_swap_pmd(*pmd)) {
+			entry = pmd_to_swp_entry(*pmd);
+			if (!non_swap_entry(entry)) {
+				val = mincore_page(swap_address_space(entry),
+						   swp_offset(entry),
+						   &compound);
+				/*
+				 * The huge swap cluster has been
+				 * split under us
+				 */
+				if (!compound) {
+					__split_huge_swap_pmd(vma, addr, pmd);
+					spin_unlock(ptl);
+					goto fallback;
+				}
+			}
+		}
+		memset(vec, val, nr);
 		spin_unlock(ptl);
 		goto out;
 	}
 
+fallback:
 	if (pmd_trans_unstable(pmd)) {
 		__mincore_unmapped_range(addr, end, vma, vec);
 		goto out;
@@ -150,8 +176,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		else if (pte_present(pte))
 			*vec = 1;
 		else { /* pte is a swap entry */
-			swp_entry_t entry = pte_to_swp_entry(pte);
-
+			entry = pte_to_swp_entry(pte);
 			if (non_swap_entry(entry)) {
 				/*
 				 * migration or hwpoison entries are always
@@ -161,7 +186,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			} else {
 #ifdef CONFIG_SWAP
 				*vec = mincore_page(swap_address_space(entry),
-						    swp_offset(entry));
+						    swp_offset(entry), NULL);
 #else
 				WARN_ON(1);
 				*vec = 1;
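
For reference, the user-visible interface this patch extends is mincore(2):
one status byte per 4KB page, with bit 0 set when the page is considered
in core (for swap entries, when it sits in the swap cache).  The following
userspace sketch is not part of the patch; it is only an illustration of
how the per-page vector that mincore_pte_range() fills is consumed.  The
2MB length and the MADV_HUGEPAGE hint are illustrative assumptions, not
taken from the patch.

/*
 * Illustrative only -- not part of this patch.  Maps a PMD-sized
 * anonymous region, faults it in, and reports how many of its 4KB
 * pages mincore() considers resident.
 */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2UL << 20;			/* one PMD-sized (2MB) range */
	long page = sysconf(_SC_PAGESIZE);
	size_t npages = len / page;
	unsigned char *vec = malloc(npages);
	void *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED || !vec)
		return 1;

	madvise(buf, len, MADV_HUGEPAGE);	/* ask for THP backing (advisory) */
	memset(buf, 0x5a, len);			/* fault the whole range in */

	if (mincore(buf, len, vec))
		return 1;

	size_t resident = 0;
	for (size_t i = 0; i < npages; i++)
		resident += vec[i] & 1;
	printf("%zu of %zu pages resident\n", resident, npages);

	munmap(buf, len);
	free(vec);
	return 0;
}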