From patchwork Thu Nov 22 19:52:38 2018
From: Sasha Levin <sashal@kernel.org>
To: stable@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Michal Hocko, Andrew Morton, Linus Torvalds, Sasha Levin, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 4.19 34/36] mm, memory_hotplug: check zone_movable in
 has_unmovable_pages
Date: Thu, 22 Nov 2018 14:52:38 -0500
Message-Id: <20181122195240.13123-34-sashal@kernel.org>
In-Reply-To: <20181122195240.13123-1-sashal@kernel.org>
References: <20181122195240.13123-1-sashal@kernel.org>

From: Michal Hocko

[ Upstream commit 9d7899999c62c1a81129b76d2a6ecbc4655e1597 ]

Page state checks are racy. Under a heavy memory workload (e.g. stress -m
200 -t 2h) it is quite easy to hit a race window when the page is
allocated but its state is not fully populated yet. A debugging patch to
dump the struct page state shows

  has_unmovable_pages: pfn:0x10dfec00, found:0x1, count:0x0
  page:ffffea0437fb0000 count:1 mapcount:1 mapping:ffff880e05239841 index:0x7f26e5000 compound_mapcount: 1
  flags: 0x5fffffc0090034(uptodate|lru|active|head|swapbacked)

Note that the state has been checked for both PageLRU and PageSwapBacked
already. Closing this race completely would require some sort of retry
logic, which can be tricky and error prone (think of potential endless or
long-running loops).

Work around this problem for movable zones at least. Such a zone should
only contain movable pages. Commit 15c30bc09085 ("mm, memory_hotplug:
make has_unmovable_pages more robust") has told us that this is not
strictly true, though. Bootmem pages should be marked reserved, so we can
move the original check after the PageReserved check. Pages from other
zones are still prone to races, but we do not even pretend that memory
hotremove works for those, so premature failure doesn't hurt much.
Link: http://lkml.kernel.org/r/20181106095524.14629-1-mhocko@kernel.org
Fixes: 15c30bc09085 ("mm, memory_hotplug: make has_unmovable_pages more robust")
Signed-off-by: Michal Hocko
Reported-by: Baoquan He
Tested-by: Baoquan He
Acked-by: Baoquan He
Reviewed-by: Oscar Salvador
Acked-by: Balbir Singh
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/page_alloc.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e2ef1c17942f..3a4065312938 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7690,6 +7690,14 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		if (PageReserved(page))
 			goto unmovable;
 
+		/*
+		 * If the zone is movable and we have ruled out all reserved
+		 * pages then it should be reasonably safe to assume the rest
+		 * is movable.
+		 */
+		if (zone_idx(zone) == ZONE_MOVABLE)
+			continue;
+
 		/*
 		 * Hugepages are not in LRU lists, but they're movable.
 		 * We need not scan over tail pages bacause we don't