From patchwork Tue Nov 13 05:51:49 2018
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 10679617
From: Sasha Levin <sashal@kernel.org>
To: stable@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Roman Gushchin, Johannes Weiner, Michal Hocko, Tejun Heo,
    Rik van Riel, Konstantin Khlebnikov, Matthew Wilcox, Andrew Morton,
    Linus Torvalds, Sasha Levin, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 4.14 25/26] mm: don't miss the last page because of
 round-off error
Date: Tue, 13 Nov 2018 00:51:49 -0500
Message-Id: <20181113055150.78773-25-sashal@kernel.org>
In-Reply-To: <20181113055150.78773-1-sashal@kernel.org>
References: <20181113055150.78773-1-sashal@kernel.org>

From: Roman Gushchin <guro@fb.com>

[ Upstream commit 68600f623d69da428c6163275f97ca126e1a8ec5 ]

I've noticed that dying memory cgroups are often pinned in memory by a
single pagecache page. Even under moderate memory pressure they sometimes
stayed in that state for a long time. That looked strange.

My investigation showed that the problem is caused by applying the LRU
pressure balancing math:

  scan = div64_u64(scan * fraction[lru], denominator),

where

  denominator = fraction[anon] + fraction[file] + 1.

Because fraction[lru] is always less than the denominator, if the initial
scan size is 1, the result is always 0. This means the last page is never
scanned and has no chance of being reclaimed.

Fix this by rounding up the result of the division. In practice this
change significantly improves the speed of dying cgroup reclaim.
[guro@fb.com: prevent double calculation of DIV64_U64_ROUND_UP() arguments]
Link: http://lkml.kernel.org/r/20180829213311.GA13501@castle
Link: http://lkml.kernel.org/r/20180827162621.30187-3-guro@fb.com
Signed-off-by: Roman Gushchin
Reviewed-by: Andrew Morton
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Tejun Heo
Cc: Rik van Riel
Cc: Konstantin Khlebnikov
Cc: Matthew Wilcox
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/linux/math64.h | 3 +++
 mm/vmscan.c            | 6 ++++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/math64.h b/include/linux/math64.h
index 082de345b73c..3a7a14062668 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -254,4 +254,7 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
 }
 #endif /* mul_u64_u32_div */
 
+#define DIV64_U64_ROUND_UP(ll, d)	\
+	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
+
 #endif /* _LINUX_MATH64_H */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index be56e2e1931e..9734e62654fa 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2367,9 +2367,11 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 			/*
 			 * Scan types proportional to swappiness and
 			 * their relative recent reclaim efficiency.
+			 * Make sure we don't miss the last page
+			 * because of a round-off error.
 			 */
-			scan = div64_u64(scan * fraction[file],
-					 denominator);
+			scan = DIV64_U64_ROUND_UP(scan * fraction[file],
+						  denominator);
 			break;
 		case SCAN_FILE:
 		case SCAN_ANON: