From patchwork Wed Oct 10 07:19:07 2018
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 10634107
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    "Kirill A. Shutemov", Andrea Arcangeli, Michal Hocko, Johannes Weiner,
    Shaohua Li, Hugh Dickins, Minchan Kim, Rik van Riel, Dave Hansen,
    Naoya Horiguchi, Zi Yan, Daniel Jordan
Subject: [PATCH -V6 04/21] swap: Support PMD swap mapping in put_swap_page()
Date: Wed, 10 Oct 2018 15:19:07 +0800
Message-Id: <20181010071924.18767-5-ying.huang@intel.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20181010071924.18767-1-ying.huang@intel.com>
References: <20181010071924.18767-1-ying.huang@intel.com>

Previously, during swapout, all PMD page mappings were split and replaced
with PTE swap mappings, and when the SWAP_HAS_CACHE flag was cleared for
the huge swap cluster in put_swap_page(), the huge swap cluster was split
as well.

Now, during swapout, the PMD page mappings to the THP are changed into PMD
swap mappings to the corresponding swap cluster.  So when clearing the
SWAP_HAS_CACHE flag, the huge swap cluster is split only if its PMD swap
mapping count is 0; otherwise it is kept as a huge swap cluster, so that
the THP can be swapped in as one piece later.
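To make the intended behavior easier to follow before reading the diff,
here is a minimal user-space C sketch of the decision logic described
above.  It is an illustration only, not the kernel code: "struct cluster",
CLUSTER_SIZE, pmd_map_count, and put_huge_swap_cluster() are hypothetical
stand-ins for the real swap_cluster_info, SWAPFILE_CLUSTER,
cluster_swapcount(), and put_swap_page() machinery changed below.

/*
 * Simplified model of the cluster-split decision, for illustration
 * only; types, constants, and helpers are stand-ins, not kernel API.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define CLUSTER_SIZE   512	/* stand-in for SWAPFILE_CLUSTER */
#define SWAP_HAS_CACHE 0x40	/* "entry is in swap cache" flag */

struct cluster {
	unsigned char swap_map[CLUSTER_SIZE];	/* per-entry count + flags */
	unsigned int pmd_map_count;	/* # of PMD swap mappings left */
	bool huge;			/* still backs a whole THP */
};

/* Called when a THP is removed from the swap cache. */
static void put_huge_swap_cluster(struct cluster *ci)
{
	unsigned int i, free_entries = 0;

	/* An entry whose only reference was the swap cache becomes free. */
	for (i = 0; i < CLUSTER_SIZE; i++) {
		if (ci->swap_map[i] == SWAP_HAS_CACHE)
			free_entries++;
		ci->swap_map[i] &= ~SWAP_HAS_CACHE;
	}

	/*
	 * A PMD swap mapping still points at the cluster: keep it huge
	 * so the THP can be swapped in as one piece later.
	 */
	if (ci->pmd_map_count)
		return;

	/*
	 * No PMD swap mapping: if every entry is now free the whole
	 * cluster could be freed as a unit (not modeled here); otherwise
	 * some entries are still referenced by PTE swap mappings and the
	 * huge cluster is split into normal swap entries.
	 */
	if (free_entries != CLUSTER_SIZE)
		ci->huge = false;
}

int main(void)
{
	struct cluster ci;

	memset(&ci, 0, sizeof(ci));
	memset(ci.swap_map, SWAP_HAS_CACHE, sizeof(ci.swap_map));
	ci.huge = true;
	ci.pmd_map_count = 1;	/* one PMD swap mapping still exists */

	put_huge_swap_cluster(&ci);
	printf("cluster stays huge: %s\n", ci.huge ? "yes" : "no");
	return 0;
}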
Signed-off-by: "Huang, Ying"
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Shaohua Li
Cc: Hugh Dickins
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Dave Hansen
Cc: Naoya Horiguchi
Cc: Zi Yan
Cc: Daniel Jordan
---
 mm/swapfile.c | 31 ++++++++++++++++++++++++-------
 1 file changed, 24 insertions(+), 7 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index a5a1ab46dab7..45c12abcb467 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1314,6 +1314,15 @@ void swap_free(swp_entry_t entry)
 
 /*
  * Called after dropping swapcache to decrease refcnt to swap entries.
+ *
+ * When a THP is added into swap cache, the SWAP_HAS_CACHE flag will
+ * be set in the swap_map[] of all swap entries in the huge swap
+ * cluster backing the THP. This huge swap cluster will not be split
+ * unless the THP is split even if its PMD swap mapping count dropped
+ * to 0. Later, when the THP is removed from swap cache, the
+ * SWAP_HAS_CACHE flag will be cleared in the swap_map[] of all swap
+ * entries in the huge swap cluster. And this huge swap cluster will
+ * be split if its PMD swap mapping count is 0.
  */
 void put_swap_page(struct page *page, swp_entry_t entry)
 {
@@ -1332,15 +1341,23 @@ void put_swap_page(struct page *page, swp_entry_t entry)
 
 	ci = lock_cluster_or_swap_info(si, offset);
 	if (size == SWAPFILE_CLUSTER) {
-		VM_BUG_ON(!cluster_is_huge(ci));
+		VM_BUG_ON(!IS_ALIGNED(offset, size));
 		map = si->swap_map + offset;
-		for (i = 0; i < SWAPFILE_CLUSTER; i++) {
-			val = map[i];
-			VM_BUG_ON(!(val & SWAP_HAS_CACHE));
-			if (val == SWAP_HAS_CACHE)
-				free_entries++;
+		/*
+		 * No PMD swap mapping, the swap cluster will be freed
+		 * if all swap entries becoming free, otherwise the
+		 * huge swap cluster will be split.
+		 */
+		if (!cluster_swapcount(ci)) {
+			for (i = 0; i < SWAPFILE_CLUSTER; i++) {
+				val = map[i];
+				VM_BUG_ON(!(val & SWAP_HAS_CACHE));
+				if (val == SWAP_HAS_CACHE)
+					free_entries++;
+			}
+			if (free_entries != SWAPFILE_CLUSTER)
+				cluster_clear_huge(ci);
 		}
-		cluster_clear_huge(ci);
 		if (free_entries == SWAPFILE_CLUSTER) {
 			unlock_cluster_or_swap_info(si, ci);
 			spin_lock(&si->lock);