From patchwork Wed May 9 08:38:29 2018
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 10388643
From: "Huang, Ying" <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner,
    Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen,
    Naoya Horiguchi, Zi Yan
Subject: [PATCH -mm -V2 04/21] mm, THP, swap: Support PMD swap mapping in swapcache_free_cluster()
Date: Wed, 9 May 2018 16:38:29 +0800
Message-Id: <20180509083846.14823-5-ying.huang@intel.com>
X-Mailer: git-send-email 2.16.1
In-Reply-To: <20180509083846.14823-1-ying.huang@intel.com>
References: <20180509083846.14823-1-ying.huang@intel.com>

From: Huang Ying <ying.huang@intel.com>

Previously, during swapout, every PMD page mapping was split and
replaced with PTE swap mappings, so when the SWAP_HAS_CACHE flag was
cleared for a huge swap cluster in swapcache_free_cluster(), the huge
swap cluster was always split.

Now, during swapout, the PMD page mapping is changed to a PMD swap
mapping instead.  So when clearing the SWAP_HAS_CACHE flag, the huge
swap cluster is split only if the PMD swap mapping count is 0;
otherwise we keep it as a huge swap cluster, so that we can later swap
in the THP as a whole.
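For reviewers, a minimal user-space sketch of the accounting this
relies on (illustrative only, not kernel code; "toy_cluster", its
fields, and the value chosen for SWAPFILE_CLUSTER are assumptions made
for the example):

	#include <assert.h>
	#include <stdio.h>

	#define SWAPFILE_CLUSTER 512	/* assumed: HPAGE_PMD_NR on x86-64, 4K pages */

	struct toy_cluster {
		unsigned int count;	/* models cluster_count(ci) */
		int huge;		/* models cluster_is_huge(ci) */
	};

	/* Mirrors the cluster_swapcount() helper added below: the count
	 * above the SWAPFILE_CLUSTER baseline is read as the number of
	 * PMD swap mappings to the huge cluster. */
	static int toy_cluster_swapcount(struct toy_cluster *ci)
	{
		if (!ci || !ci->huge)
			return 0;
		return ci->count - SWAPFILE_CLUSTER;
	}

	int main(void)
	{
		/* A huge cluster still referenced by two PMD swap mappings:
		 * swapcache_free_cluster() must not split it yet. */
		struct toy_cluster ci = { .count = SWAPFILE_CLUSTER + 2, .huge = 1 };

		assert(toy_cluster_swapcount(&ci) == 2);
		printf("PMD swap mappings: %d\n", toy_cluster_swapcount(&ci));

		/* Once the PMD swap mappings are gone, the cluster may be
		 * split or freed as before. */
		ci.count = SWAPFILE_CLUSTER;
		assert(toy_cluster_swapcount(&ci) == 0);
		return 0;
	}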
Shutemov" Cc: Andrea Arcangeli Cc: Michal Hocko Cc: Johannes Weiner Cc: Shaohua Li Cc: Hugh Dickins Cc: Minchan Kim Cc: Rik van Riel Cc: Dave Hansen Cc: Naoya Horiguchi Cc: Zi Yan --- mm/swapfile.c | 41 ++++++++++++++++++++++++++++++----------- 1 file changed, 30 insertions(+), 11 deletions(-) diff --git a/mm/swapfile.c b/mm/swapfile.c index 7e1c5082d326..ed1fec275d2f 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -514,6 +514,18 @@ static void dec_cluster_info_page(struct swap_info_struct *p, free_cluster(p, idx); } +#ifdef CONFIG_THP_SWAP +static inline int cluster_swapcount(struct swap_cluster_info *ci) +{ + if (!ci || !cluster_is_huge(ci)) + return 0; + + return cluster_count(ci) - SWAPFILE_CLUSTER; +} +#else +#define cluster_swapcount(ci) 0 +#endif + /* * It's possible scan_swap_map() uses a free cluster in the middle of free * cluster list. Avoiding such abuse to avoid list corruption. @@ -905,6 +917,7 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx) struct swap_cluster_info *ci; ci = lock_cluster(si, offset); + memset(si->swap_map + offset, 0, SWAPFILE_CLUSTER); cluster_set_count_flag(ci, 0, 0); free_cluster(si, idx); unlock_cluster(ci); @@ -1288,24 +1301,30 @@ static void swapcache_free_cluster(swp_entry_t entry) ci = lock_cluster(si, offset); VM_BUG_ON(!cluster_is_huge(ci)); + VM_BUG_ON(!is_cluster_offset(offset)); + VM_BUG_ON(cluster_count(ci) < SWAPFILE_CLUSTER); map = si->swap_map + offset; - for (i = 0; i < SWAPFILE_CLUSTER; i++) { - val = map[i]; - VM_BUG_ON(!(val & SWAP_HAS_CACHE)); - if (val == SWAP_HAS_CACHE) - free_entries++; + if (!cluster_swapcount(ci)) { + for (i = 0; i < SWAPFILE_CLUSTER; i++) { + val = map[i]; + VM_BUG_ON(!(val & SWAP_HAS_CACHE)); + if (val == SWAP_HAS_CACHE) + free_entries++; + } + if (free_entries != SWAPFILE_CLUSTER) + cluster_clear_huge(ci); } if (!free_entries) { - for (i = 0; i < SWAPFILE_CLUSTER; i++) - map[i] &= ~SWAP_HAS_CACHE; + for (i = 0; i < SWAPFILE_CLUSTER; i++) { + val = map[i]; + VM_BUG_ON(!(val & SWAP_HAS_CACHE) || + val == SWAP_HAS_CACHE); + map[i] = val & ~SWAP_HAS_CACHE; + } } - cluster_clear_huge(ci); unlock_cluster(ci); if (free_entries == SWAPFILE_CLUSTER) { spin_lock(&si->lock); - ci = lock_cluster(si, offset); - memset(map, 0, SWAPFILE_CLUSTER); - unlock_cluster(ci); mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER); swap_free_cluster(si, idx); spin_unlock(&si->lock);