From patchwork Wed Dec 26 05:15:22 2018
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 10742827
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Rik van Riel,
    Johannes Weiner, Minchan Kim, Shaohua Li, Daniel Jordan, Hugh Dickins
Subject: [PATCH] mm, swap: Fix swapoff with KSM pages
Date: Wed, 26 Dec 2018 13:15:22 +0800
Message-Id: <20181226051522.28442-1-ying.huang@intel.com>
X-Mailer: git-send-email 2.19.2

KSM pages may be mapped to multiple VMAs that cannot be reached from one
anon_vma.  So during swapin, a new copy of the page needs to be generated
if a different anon_vma is needed; please refer to the comments of
ksm_might_need_to_copy() for details.

During swapoff, unuse_vma() uses the anon_vma (if available) to locate the
VMA and the virtual address mapped to the page, so not all mappings of a
swapped-out KSM page can be found.  Therefore in try_to_unuse(), even if
the swap count of a swap entry isn't zero, the page needs to be deleted
from the swap cache, so that in the next round a new page can be allocated
and swapped in for the other mappings of the swapped-out KSM page.
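To make the swapin side concrete, here is a rough sketch (simplified from
do_swap_page(); not quoted verbatim from the kernel): when the page read
back from swap is a KSM page, ksm_might_need_to_copy() may hand back a
freshly allocated copy so that it can be mapped under the faulting VMA's
anon_vma.

	/*
	 * Simplified sketch of the swapin path (not the literal kernel
	 * code): ksm_might_need_to_copy() returns either the original page
	 * or a new copy when the page cannot be reused under this VMA's
	 * anon_vma.
	 */
	page = ksm_might_need_to_copy(page, vma, vmf->address);
	if (unlikely(!page)) {
		ret = VM_FAULT_OOM;
		goto out_page;		/* error path, elided here */
	}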
This conflicts with THP swap support, where a THP can be deleted from the
swap cache only after the swap count of every swap entry in the huge swap
cluster backing the THP has reached 0.  Commit e07098294adf ("mm, THP,
swap: support to reclaim swap space for THP swapped out") therefore changed
try_to_unuse() to check that before deleting a page from the swap cache,
which broke swapoff for KSM pages.  Fortunately, KSM works only on normal
(non-compound) pages, so the original behavior for KSM pages can be
restored easily by checking PageTransCompound().  That is what this patch
does.

Fixes: e07098294adf ("mm, THP, swap: support to reclaim swap space for THP swapped out")
Signed-off-by: "Huang, Ying"
Reported-and-Tested-and-Acked-by: Hugh Dickins
Cc: Rik van Riel
Cc: Johannes Weiner
Cc: Minchan Kim
Cc: Shaohua Li
Cc: Daniel Jordan
---
 mm/swapfile.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 8688ae65ef58..20d3c0f47a5f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2197,7 +2197,8 @@ int try_to_unuse(unsigned int type, bool frontswap,
 		 */
 		if (PageSwapCache(page) &&
 		    likely(page_private(page) == entry.val) &&
-		    !page_swapped(page))
+		    (!PageTransCompound(page) ||
+		     !swap_page_trans_huge_swapped(si, entry)))
 			delete_from_swap_cache(compound_head(page));

 		/*
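For readers skimming the change, the same decision can be written out as a
standalone helper.  This is only an illustrative sketch; the helper name
below is hypothetical and not part of the patch, but every predicate it
calls appears in the diff above.

	/*
	 * Illustrative sketch only -- can_delete_from_swap_cache() is a
	 * hypothetical name, not code added by this patch.  It spells out
	 * the condition try_to_unuse() uses after the fix.
	 */
	static bool can_delete_from_swap_cache(struct swap_info_struct *si,
					       struct page *page,
					       swp_entry_t entry)
	{
		if (!PageSwapCache(page) || page_private(page) != entry.val)
			return false;
		/*
		 * Normal pages (including KSM pages): always drop the swap
		 * cache entry so that the next round can swap in a fresh
		 * copy for mappings reachable only via another anon_vma.
		 */
		if (!PageTransCompound(page))
			return true;
		/*
		 * THP: only safe once no swap entry in the huge swap
		 * cluster backing the THP is still in use.
		 */
		return !swap_page_trans_huge_swapped(si, entry);
	}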