From patchwork Tue Nov 13 05:50:52 2018
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 10679605
From: Sasha Levin <sashal@kernel.org>
To: stable@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Roman Gushchin, Johannes Weiner, Michal Hocko, Tejun Heo,
    Rik van Riel, Konstantin Khlebnikov, Matthew Wilcox, Andrew Morton,
    Linus Torvalds, Sasha Levin, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 4.18 38/39] mm: don't miss the last page because of round-off error
Date: Tue, 13 Nov 2018 00:50:52 -0500
Message-Id: <20181113055053.78352-38-sashal@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181113055053.78352-1-sashal@kernel.org>
References: <20181113055053.78352-1-sashal@kernel.org>

From: Roman Gushchin

[ Upstream commit 68600f623d69da428c6163275f97ca126e1a8ec5 ]

I've noticed that dying memory cgroups are often pinned in memory by a
single pagecache page.  Even under moderate memory pressure they
sometimes stayed in that state for a long time.  That looked strange.

My investigation showed that the problem is caused by applying the LRU
pressure balancing math:

  scan = div64_u64(scan * fraction[lru], denominator),

where

  denominator = fraction[anon] + fraction[file] + 1.

Because fraction[lru] is always less than denominator, if the initial
scan size is 1, the result is always 0.

This means the last page is not scanned and has no chance of being
reclaimed.

Fix this by rounding up the result of the division.

In practice this change significantly improves the speed of dying
cgroup reclaim.
[guro@fb.com: prevent double calculation of DIV64_U64_ROUND_UP() arguments]
Link: http://lkml.kernel.org/r/20180829213311.GA13501@castle
Link: http://lkml.kernel.org/r/20180827162621.30187-3-guro@fb.com
Signed-off-by: Roman Gushchin
Reviewed-by: Andrew Morton
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Tejun Heo
Cc: Rik van Riel
Cc: Konstantin Khlebnikov
Cc: Matthew Wilcox
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/linux/math64.h | 3 +++
 mm/vmscan.c            | 6 ++++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/math64.h b/include/linux/math64.h
index 837f2f2d1d34..bb2c84afb80c 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -281,4 +281,7 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
 }
 #endif /* mul_u64_u32_div */
 
+#define DIV64_U64_ROUND_UP(ll, d)	\
+	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
+
 #endif /* _LINUX_MATH64_H */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 03822f86f288..7b94e33823b5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2287,9 +2287,11 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 			/*
 			 * Scan types proportional to swappiness and
 			 * their relative recent reclaim efficiency.
+			 * Make sure we don't miss the last page
+			 * because of a round-off error.
 			 */
-			scan = div64_u64(scan * fraction[file],
-					 denominator);
+			scan = DIV64_U64_ROUND_UP(scan * fraction[file],
+						  denominator);
 			break;
 		case SCAN_FILE:
 		case SCAN_ANON: