From patchwork Tue Nov 13 05:49:49 2018
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 10679581
From: Sasha Levin
To: stable@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Roman Gushchin, Johannes Weiner, Michal Hocko, Tejun Heo, Rik van Riel,
 Konstantin Khlebnikov, Matthew Wilcox, Andrew Morton, Linus Torvalds,
 Sasha Levin, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 4.19 43/44] mm: don't miss the last page because of round-off error
Date: Tue, 13 Nov 2018 00:49:49 -0500
Message-Id: <20181113054950.77898-43-sashal@kernel.org>
In-Reply-To: <20181113054950.77898-1-sashal@kernel.org>
References: <20181113054950.77898-1-sashal@kernel.org>

From: Roman Gushchin

[ Upstream commit 68600f623d69da428c6163275f97ca126e1a8ec5 ]

I've noticed that dying memory cgroups are often pinned in memory by a
single pagecache page. Even under moderate memory pressure they sometimes
stayed in that state for a long time. That looked strange.

My investigation showed that the problem is caused by applying the LRU
pressure balancing math:

  scan = div64_u64(scan * fraction[lru], denominator),

where

  denominator = fraction[anon] + fraction[file] + 1.

Because fraction[lru] is always less than denominator, if the initial scan
size is 1, the result is always 0.

This means the last page is not scanned and has no chance to be reclaimed.

Fix this by rounding up the result of the division.

In practice this change significantly improves the speed of dying cgroups
reclaim.
[guro@fb.com: prevent double calculation of DIV64_U64_ROUND_UP() arguments]
Link: http://lkml.kernel.org/r/20180829213311.GA13501@castle
Link: http://lkml.kernel.org/r/20180827162621.30187-3-guro@fb.com
Signed-off-by: Roman Gushchin
Reviewed-by: Andrew Morton
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Tejun Heo
Cc: Rik van Riel
Cc: Konstantin Khlebnikov
Cc: Matthew Wilcox
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/linux/math64.h | 3 +++
 mm/vmscan.c            | 6 ++++--
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/math64.h b/include/linux/math64.h
index 837f2f2d1d34..bb2c84afb80c 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -281,4 +281,7 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
 }
 #endif /* mul_u64_u32_div */
 
+#define DIV64_U64_ROUND_UP(ll, d)	\
+	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
+
 #endif /* _LINUX_MATH64_H */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c5ef7240cbcb..961401c46334 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2456,9 +2456,11 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 			/*
 			 * Scan types proportional to swappiness and
 			 * their relative recent reclaim efficiency.
+			 * Make sure we don't miss the last page
+			 * because of a round-off error.
 			 */
-			scan = div64_u64(scan * fraction[file],
-					 denominator);
+			scan = DIV64_U64_ROUND_UP(scan * fraction[file],
+						  denominator);
 			break;
 		case SCAN_FILE:
 		case SCAN_ANON: