From patchwork Fri Jun 22 03:51:48 2018
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 10481175
From: "Huang, Ying"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner,
    Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen,
    Naoya Horiguchi, Zi Yan, Daniel Jordan
Subject: [PATCH -mm -v4 18/21] mm, THP, swap: Support PMD swap mapping in mincore()
Date: Fri, 22 Jun 2018 11:51:48 +0800
Message-Id: <20180622035151.6676-19-ying.huang@intel.com>
In-Reply-To: <20180622035151.6676-1-ying.huang@intel.com>
References: <20180622035151.6676-1-ying.huang@intel.com>

From: Huang Ying

During mincore(), for a PMD swap mapping, the swap cache is looked up.
If the resulting page is not a compound page, the huge swap cluster has
been split under us, so the PMD swap mapping is split and processing
falls back to the PTE swap mapping path.

Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Shutemov" Cc: Andrea Arcangeli Cc: Michal Hocko Cc: Johannes Weiner Cc: Shaohua Li Cc: Hugh Dickins Cc: Minchan Kim Cc: Rik van Riel Cc: Dave Hansen Cc: Naoya Horiguchi Cc: Zi Yan Cc: Daniel Jordan --- mm/mincore.c | 37 +++++++++++++++++++++++++++++++------ 1 file changed, 31 insertions(+), 6 deletions(-) diff --git a/mm/mincore.c b/mm/mincore.c index a66f2052c7b1..897dd2c187e8 100644 --- a/mm/mincore.c +++ b/mm/mincore.c @@ -48,7 +48,8 @@ static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr, * and is up to date; i.e. that no page-in operation would be required * at this time if an application were to map and access this page. */ -static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff) +static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff, + bool *compound) { unsigned char present = 0; struct page *page; @@ -86,6 +87,8 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t pgoff) #endif if (page) { present = PageUptodate(page); + if (compound) + *compound = PageCompound(page); put_page(page); } @@ -103,7 +106,8 @@ static int __mincore_unmapped_range(unsigned long addr, unsigned long end, pgoff = linear_page_index(vma, addr); for (i = 0; i < nr; i++, pgoff++) - vec[i] = mincore_page(vma->vm_file->f_mapping, pgoff); + vec[i] = mincore_page(vma->vm_file->f_mapping, + pgoff, NULL); } else { for (i = 0; i < nr; i++) vec[i] = 0; @@ -127,14 +131,36 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, pte_t *ptep; unsigned char *vec = walk->private; int nr = (end - addr) >> PAGE_SHIFT; + swp_entry_t entry; ptl = pmd_trans_huge_lock(pmd, vma); if (ptl) { - memset(vec, 1, nr); + unsigned char val = 1; + bool compound; + + if (thp_swap_supported() && is_swap_pmd(*pmd)) { + entry = pmd_to_swp_entry(*pmd); + if (!non_swap_entry(entry)) { + val = mincore_page(swap_address_space(entry), + swp_offset(entry), + &compound); + /* + * The huge swap cluster has been + * split under us + */ + if (!compound) { + __split_huge_swap_pmd(vma, addr, pmd); + spin_unlock(ptl); + goto fallback; + } + } + } + memset(vec, val, nr); spin_unlock(ptl); goto out; } +fallback: if (pmd_trans_unstable(pmd)) { __mincore_unmapped_range(addr, end, vma, vec); goto out; @@ -150,8 +176,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, else if (pte_present(pte)) *vec = 1; else { /* pte is a swap entry */ - swp_entry_t entry = pte_to_swp_entry(pte); - + entry = pte_to_swp_entry(pte); if (non_swap_entry(entry)) { /* * migration or hwpoison entries are always @@ -161,7 +186,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, } else { #ifdef CONFIG_SWAP *vec = mincore_page(swap_address_space(entry), - swp_offset(entry)); + swp_offset(entry), NULL); #else WARN_ON(1); *vec = 1;