From patchwork Fri Dec 7 05:41:04 2018
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 10717447
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko,
    Johannes Weiner, Shaohua Li, Hugh Dickins, Minchan Kim,
    Rik van Riel, Dave Hansen, Naoya Horiguchi, Zi Yan, Daniel Jordan
Subject: [PATCH -V8 04/21] swap: Support PMD swap mapping in put_swap_page()
Date: Fri, 7 Dec 2018 13:41:04 +0800
Message-Id: <20181207054122.27822-5-ying.huang@intel.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20181207054122.27822-1-ying.huang@intel.com>
References: <20181207054122.27822-1-ying.huang@intel.com>

Previously, during swapout, all PMD page mappings were split and
replaced with PTE swap mappings, and the huge swap cluster was split
when the SWAP_HAS_CACHE flag was cleared for it in put_swap_page().

Now, during swapout, the PMD page mappings to the THP are changed to
PMD swap mappings to the corresponding swap cluster.  So when the
SWAP_HAS_CACHE flag is cleared, the huge swap cluster is split only if
its PMD swap mapping count is 0; otherwise it is kept as a huge swap
cluster, so that the THP can be swapped in as one piece later.

Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
Cc: Daniel Jordan
---
 mm/swapfile.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 37e20ce4983c..f30eed59c355 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1314,6 +1314,15 @@ void swap_free(swp_entry_t entry)
 
 /*
  * Called after dropping swapcache to decrease refcnt to swap entries.
+ *
+ * When a THP is added into swap cache, the SWAP_HAS_CACHE flag will
+ * be set in the swap_map[] of all swap entries in the huge swap
+ * cluster backing the THP. This huge swap cluster will not be split
+ * unless the THP is split even if its PMD swap mapping count dropped
+ * to 0. Later, when the THP is removed from swap cache, the
+ * SWAP_HAS_CACHE flag will be cleared in the swap_map[] of all swap
+ * entries in the huge swap cluster. And this huge swap cluster will
+ * be split if its PMD swap mapping count is 0.
  */
 void put_swap_page(struct page *page, swp_entry_t entry)
 {
@@ -1332,15 +1341,23 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 
 	ci = lock_cluster_or_swap_info(si, offset);
 	if (size == SWAPFILE_CLUSTER) {
-		VM_BUG_ON(!cluster_is_huge(ci));
+		VM_BUG_ON(!IS_ALIGNED(offset, size));
 		map = si->swap_map + offset;
-		for (i = 0; i < SWAPFILE_CLUSTER; i++) {
-			val = map[i];
-			VM_BUG_ON(!(val & SWAP_HAS_CACHE));
-			if (val == SWAP_HAS_CACHE)
-				free_entries++;
+		/*
+		 * No PMD swap mapping, the swap cluster will be freed
+		 * if all swap entries becoming free, otherwise the
+		 * huge swap cluster will be split.
+		 */
+		if (!cluster_swapcount(ci)) {
+			for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+				val = map[i];
+				VM_BUG_ON(!(val & SWAP_HAS_CACHE));
+				if (val == SWAP_HAS_CACHE)
+					free_entries++;
+			}
+			if (free_entries != SWAPFILE_CLUSTER)
+				cluster_clear_huge(ci);
 		}
-		cluster_clear_huge(ci);
 		if (free_entries == SWAPFILE_CLUSTER) {
 			unlock_cluster_or_swap_info(si, ci);
 			spin_lock(&si->lock);