From patchwork Fri Feb 22 17:43:33 2019
X-Patchwork-Submitter: Andrey Ryabinin
X-Patchwork-Id: 10826747
From: Andrey Ryabinin
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Ryabinin,
    Johannes Weiner, Michal Hocko, Vlastimil Babka, Rik van Riel, Mel Gorman
Subject: [PATCH 1/5] mm/workingset: remove unused @mapping argument in
 workingset_eviction()
Date: Fri, 22 Feb 2019 20:43:33 +0300
Message-Id: <20190222174337.26390-1-aryabinin@virtuozzo.com>

workingset_eviction() doesn't use and never did use the @mapping
argument. Remove it.
Signed-off-by: Andrey Ryabinin
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Rik van Riel
Cc: Mel Gorman
Acked-by: Johannes Weiner
Acked-by: Rik van Riel
---
 include/linux/swap.h | 2 +-
 mm/vmscan.c          | 2 +-
 mm/workingset.c      | 3 +--
 3 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 649529be91f2..fc50e21b3b88 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -307,7 +307,7 @@ struct vma_swap_readahead {
 };

 /* linux/mm/workingset.c */
-void *workingset_eviction(struct address_space *mapping, struct page *page);
+void *workingset_eviction(struct page *page);
 void workingset_refault(struct page *page, void *shadow);
 void workingset_activation(struct page *page);

diff --git a/mm/vmscan.c b/mm/vmscan.c
index ac4806f0f332..a9852ed7b97f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -952,7 +952,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 		 */
 		if (reclaimed && page_is_file_cache(page) &&
 		    !mapping_exiting(mapping) && !dax_mapping(mapping))
-			shadow = workingset_eviction(mapping, page);
+			shadow = workingset_eviction(page);
 		__delete_from_page_cache(page, shadow);
 		xa_unlock_irqrestore(&mapping->i_pages, flags);

diff --git a/mm/workingset.c b/mm/workingset.c
index dcb994f2acc2..0906137760c5 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -215,13 +215,12 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,

 /**
  * workingset_eviction - note the eviction of a page from memory
- * @mapping: address space the page was backing
  * @page: the page being evicted
  *
  * Returns a shadow entry to be stored in @mapping->i_pages in place
  * of the evicted @page so that a later refault can be detected.
  */
-void *workingset_eviction(struct address_space *mapping, struct page *page)
+void *workingset_eviction(struct page *page)
 {
 	struct pglist_data *pgdat = page_pgdat(page);
 	struct mem_cgroup *memcg = page_memcg(page);
From patchwork Fri Feb 22 17:43:34 2019
X-Patchwork-Submitter: Andrey Ryabinin
X-Patchwork-Id: 10826753
From: Andrey Ryabinin
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Ryabinin,
    Johannes Weiner, Michal Hocko, Vlastimil Babka, Rik van Riel, Mel Gorman
Subject: [PATCH 2/5] mm: remove zone_lru_lock() function, access ->lru_lock
 directly
Date: Fri, 22 Feb 2019 20:43:34 +0300
Message-Id: <20190222174337.26390-2-aryabinin@virtuozzo.com>
In-Reply-To: <20190222174337.26390-1-aryabinin@virtuozzo.com>
References: <20190222174337.26390-1-aryabinin@virtuozzo.com>

We have a common pattern for accessing the lru_lock from a page pointer:

	zone_lru_lock(page_zone(page))

which is silly, because it unfolds to this:

	&NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)]->zone_pgdat->lru_lock

while we can simply do:

	&NODE_DATA(page_to_nid(page))->lru_lock

Remove the zone_lru_lock() function, since it only complicates things.
Use the 'page_pgdat(page)->lru_lock' pattern instead. (A standalone
sketch after the diff illustrates why both paths name the same lock.)

Signed-off-by: Andrey Ryabinin
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Rik van Riel
Cc: Mel Gorman
Acked-by: Vlastimil Babka
---
 Documentation/cgroup-v1/memcg_test.txt |  4 ++--
 Documentation/cgroup-v1/memory.txt     |  4 ++--
 include/linux/mm_types.h               |  2 +-
 include/linux/mmzone.h                 |  4 ----
 mm/compaction.c                        | 15 ++++++++-------
 mm/filemap.c                           |  4 ++--
 mm/huge_memory.c                       |  6 +++---
 mm/memcontrol.c                        | 14 +++++++-------
 mm/mlock.c                             | 14 +++++++-------
 mm/page_idle.c                         |  8 ++++----
 mm/rmap.c                              |  2 +-
 mm/swap.c                              | 16 ++++++++--------
 mm/vmscan.c                            | 16 ++++++++--------
 13 files changed, 53 insertions(+), 56 deletions(-)

diff --git a/Documentation/cgroup-v1/memcg_test.txt b/Documentation/cgroup-v1/memcg_test.txt
index 5c7f310f32bb..621e29ffb358 100644
--- a/Documentation/cgroup-v1/memcg_test.txt
+++ b/Documentation/cgroup-v1/memcg_test.txt
@@ -107,9 +107,9 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.

 8. LRU
 	Each memcg has its own private LRU. Now, its handling is under global
-	VM's control (means that it's handled under global zone_lru_lock).
+	VM's control (means that it's handled under global pgdat->lru_lock).
 	Almost all routines around memcg's LRU is called by global LRU's
-	list management functions under zone_lru_lock().
+	list management functions under pgdat->lru_lock.

 	A special function is mem_cgroup_isolate_pages(). This scans
 	memcg's private LRU and call __isolate_lru_page() to extract a page
diff --git a/Documentation/cgroup-v1/memory.txt b/Documentation/cgroup-v1/memory.txt
index 8e2cb1dabeb0..a33cedf85427 100644
--- a/Documentation/cgroup-v1/memory.txt
+++ b/Documentation/cgroup-v1/memory.txt
@@ -267,11 +267,11 @@ When oom event notifier is registered, event will be delivered.
 	Other lock order is following:
 	PG_locked.
 	mm->page_table_lock
-	    zone_lru_lock
+	    pgdat->lru_lock
 	  lock_page_cgroup.
   In many cases, just lock_page_cgroup() is called.
   per-zone-per-cgroup LRU (cgroup's private LRU) is just guarded by
-  zone_lru_lock, it has no lock of its own.
+  pgdat->lru_lock, it has no lock of its own.

 2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index fe0f672f53ce..9b9dd8350e26 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -79,7 +79,7 @@ struct page {
 		struct {	/* Page cache and anonymous pages */
 			/**
 			 * @lru: Pageout list, eg. active_list protected by
-			 * zone_lru_lock. Sometimes used as a generic list
+			 * pgdat->lru_lock. Sometimes used as a generic list
 			 * by the page owner.
 			 */
 			struct list_head lru;

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 2fd4247262e9..22423763c0bd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -788,10 +788,6 @@ typedef struct pglist_data {
 #define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
 #define node_end_pfn(nid) pgdat_end_pfn(NODE_DATA(nid))

-static inline spinlock_t *zone_lru_lock(struct zone *zone)
-{
-	return &zone->zone_pgdat->lru_lock;
-}

 static inline struct lruvec *node_lruvec(struct pglist_data *pgdat)
 {

diff --git a/mm/compaction.c b/mm/compaction.c
index 98f99f41dfdc..a3305f13a138 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -775,6 +775,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			unsigned long end_pfn, isolate_mode_t isolate_mode)
 {
 	struct zone *zone = cc->zone;
+	pg_data_t *pgdat = zone->zone_pgdat;
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct lruvec *lruvec;
 	unsigned long flags = 0;
@@ -839,8 +840,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 * if contended.
 		 */
 		if (!(low_pfn % SWAP_CLUSTER_MAX)
-		    && compact_unlock_should_abort(zone_lru_lock(zone), flags,
-								&locked, cc))
+		    && compact_unlock_should_abort(&pgdat->lru_lock,
+					    flags, &locked, cc))
 			break;

 		if (!pfn_valid_within(low_pfn))
@@ -910,7 +911,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
 				if (locked) {
-					spin_unlock_irqrestore(zone_lru_lock(zone),
+					spin_unlock_irqrestore(&pgdat->lru_lock,
 									flags);
 					locked = false;
 				}
@@ -940,7 +941,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,

 		/* If we already hold the lock, we can skip some rechecking */
 		if (!locked) {
-			locked = compact_lock_irqsave(zone_lru_lock(zone),
+			locked = compact_lock_irqsave(&pgdat->lru_lock,
 								&flags, cc);

 			/* Try get exclusive access under lock */
@@ -965,7 +966,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			}
 		}

-		lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);

 		/* Try isolate the page */
 		if (__isolate_lru_page(page, isolate_mode) != 0)
@@ -1007,7 +1008,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	 */
 	if (nr_isolated) {
 		if (locked) {
-			spin_unlock_irqrestore(zone_lru_lock(zone), flags);
+			spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 			locked = false;
 		}
 		putback_movable_pages(&cc->migratepages);
@@ -1034,7 +1035,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,

 isolate_abort:
 	if (locked)
-		spin_unlock_irqrestore(zone_lru_lock(zone), flags);
+		spin_unlock_irqrestore(&pgdat->lru_lock, flags);

 	/*
 	 * Updated the cached scanner pfn once the pageblock has been scanned

diff --git a/mm/filemap.c b/mm/filemap.c
index 663f3b84990d..cace3eb8069f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -98,8 +98,8 @@
  *    ->swap_lock		(try_to_unmap_one)
  *    ->private_lock		(try_to_unmap_one)
  *    ->i_pages lock		(try_to_unmap_one)
- *    ->zone_lru_lock(zone)	(follow_page->mark_page_accessed)
- *    ->zone_lru_lock(zone)	(check_pte_range->isolate_lru_page)
+ *    ->pgdat->lru_lock		(follow_page->mark_page_accessed)
+ *    ->pgdat->lru_lock		(check_pte_range->isolate_lru_page)
 *    ->private_lock		(page_remove_rmap->set_page_dirty)
 *    ->i_pages lock		(page_remove_rmap->set_page_dirty)
 *    bdi.wb->list_lock		(page_remove_rmap->set_page_dirty)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d4847026d4b1..4ccac6b32d49 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2475,7 +2475,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_unlock(&head->mapping->i_pages);
 	}

-	spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
+	spin_unlock_irqrestore(&page_pgdat(head)->lru_lock, flags);

 	remap_page(head);

@@ -2686,7 +2686,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	lru_add_drain();

 	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irqsave(zone_lru_lock(page_zone(head)), flags);
+	spin_lock_irqsave(&pgdata->lru_lock, flags);

 	if (mapping) {
 		XA_STATE(xas, &mapping->i_pages, page_index(head));
@@ -2731,7 +2731,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	spin_unlock(&pgdata->split_queue_lock);
 fail:	if (mapping)
 		xa_unlock(&mapping->i_pages);
-	spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags);
+	spin_unlock_irqrestore(&pgdata->lru_lock, flags);
 	remap_page(head);
 	ret = -EBUSY;
 }

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5fc2e1a7d4d2..17859721a263 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2377,13 +2377,13 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)

 static void lock_page_lru(struct page *page, int *isolated)
 {
-	struct zone *zone = page_zone(page);
+	pg_data_t *pgdat = page_pgdat(page);

-	spin_lock_irq(zone_lru_lock(zone));
+	spin_lock_irq(&pgdat->lru_lock);
 	if (PageLRU(page)) {
 		struct lruvec *lruvec;

-		lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		*isolated = 1;
@@ -2393,17 +2393,17 @@ static void lock_page_lru(struct page *page, int *isolated)

 static void unlock_page_lru(struct page *page, int isolated)
 {
-	struct zone *zone = page_zone(page);
+	pg_data_t *pgdat = page_pgdat(page);

 	if (isolated) {
 		struct lruvec *lruvec;

-		lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 		SetPageLRU(page);
 		add_page_to_lru_list(page, lruvec, page_lru(page));
 	}
-	spin_unlock_irq(zone_lru_lock(zone));
+	spin_unlock_irq(&pgdat->lru_lock);
 }

 static void commit_charge(struct page *page, struct mem_cgroup *memcg,
@@ -2689,7 +2689,7 @@ void __memcg_kmem_uncharge(struct page *page, int order)

 /*
  * Because tail pages are not marked as "used", set it. We're under
- * zone_lru_lock and migration entries setup in all page mappings.
+ * pgdat->lru_lock and migration entries setup in all page mappings.
  */
 void mem_cgroup_split_huge_fixup(struct page *head)
 {

diff --git a/mm/mlock.c b/mm/mlock.c
index 41cc47e28ad6..080f3b36415b 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -182,7 +182,7 @@ static void __munlock_isolation_failed(struct page *page)
 unsigned int munlock_vma_page(struct page *page)
 {
 	int nr_pages;
-	struct zone *zone = page_zone(page);
+	pg_data_t *pgdat = page_pgdat(page);

 	/* For try_to_munlock() and to serialize with page migration */
 	BUG_ON(!PageLocked(page));
@@ -194,7 +194,7 @@ unsigned int munlock_vma_page(struct page *page)
 	 * might otherwise copy PageMlocked to part of the tail pages before
 	 * we clear it in the head page. It also stabilizes hpage_nr_pages().
 	 */
-	spin_lock_irq(zone_lru_lock(zone));
+	spin_lock_irq(&pgdat->lru_lock);

 	if (!TestClearPageMlocked(page)) {
 		/* Potentially, PTE-mapped THP: do not skip the rest PTEs */
@@ -203,17 +203,17 @@ unsigned int munlock_vma_page(struct page *page)
 	}

 	nr_pages = hpage_nr_pages(page);
-	__mod_zone_page_state(zone, NR_MLOCK, -nr_pages);
+	__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);

 	if (__munlock_isolate_lru_page(page, true)) {
-		spin_unlock_irq(zone_lru_lock(zone));
+		spin_unlock_irq(&pgdat->lru_lock);
 		__munlock_isolated_page(page);
 		goto out;
 	}
 	__munlock_isolation_failed(page);

 unlock_out:
-	spin_unlock_irq(zone_lru_lock(zone));
+	spin_unlock_irq(&pgdat->lru_lock);

 out:
 	return nr_pages - 1;
@@ -298,7 +298,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 	pagevec_init(&pvec_putback);

 	/* Phase 1: page isolation */
-	spin_lock_irq(zone_lru_lock(zone));
+	spin_lock_irq(&zone->zone_pgdat->lru_lock);
 	for (i = 0; i < nr; i++) {
 		struct page *page = pvec->pages[i];
@@ -325,7 +325,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 		pvec->pages[i] = NULL;
 	}
 	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
-	spin_unlock_irq(zone_lru_lock(zone));
+	spin_unlock_irq(&zone->zone_pgdat->lru_lock);

 	/* Now we can release pins of pages that we are not munlocking */
 	pagevec_release(&pvec_putback);

diff --git a/mm/page_idle.c b/mm/page_idle.c
index b9e4b42b33ab..0b39ec0c945c 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -31,7 +31,7 @@
 static struct page *page_idle_get_page(unsigned long pfn)
 {
 	struct page *page;
-	struct zone *zone;
+	pg_data_t *pgdat;

 	if (!pfn_valid(pfn))
 		return NULL;
@@ -41,13 +41,13 @@ static struct page *page_idle_get_page(unsigned long pfn)
 	    !get_page_unless_zero(page))
 		return NULL;

-	zone = page_zone(page);
-	spin_lock_irq(zone_lru_lock(zone));
+	pgdat = page_pgdat(page);
+	spin_lock_irq(&pgdat->lru_lock);
 	if (unlikely(!PageLRU(page))) {
 		put_page(page);
 		page = NULL;
 	}
-	spin_unlock_irq(zone_lru_lock(zone));
+	spin_unlock_irq(&pgdat->lru_lock);
 	return page;
 }

diff --git a/mm/rmap.c b/mm/rmap.c
index 0454ecc29537..b30c7c71d1d9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -27,7 +27,7 @@
 *         mapping->i_mmap_rwsem
 *           anon_vma->rwsem
 *             mm->page_table_lock or pte_lock
- *               zone_lru_lock (in mark_page_accessed, isolate_lru_page)
+ *               pgdat->lru_lock (in mark_page_accessed, isolate_lru_page)
 *               swap_lock (in swap_duplicate, swap_info_get)
 *                 mmlist_lock (in mmput, drain_mmlist and others)
 *                 mapping->private_lock (in __set_page_dirty_buffers)

diff --git a/mm/swap.c b/mm/swap.c
index 4d7d37eb3c40..301ed4e04320 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -58,16 +58,16 @@ static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);
 static void __page_cache_release(struct page *page)
 {
 	if (PageLRU(page)) {
-		struct zone *zone = page_zone(page);
+		pg_data_t *pgdat = page_pgdat(page);
 		struct lruvec *lruvec;
 		unsigned long flags;

-		spin_lock_irqsave(zone_lru_lock(zone), flags);
-		lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
+		spin_lock_irqsave(&pgdat->lru_lock, flags);
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_off_lru(page));
-		spin_unlock_irqrestore(zone_lru_lock(zone), flags);
+		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
 	}
 	__ClearPageWaiters(page);
 	mem_cgroup_uncharge(page);
@@ -322,12 +322,12 @@ static inline void activate_page_drain(int cpu)

 void activate_page(struct page *page)
 {
-	struct zone *zone = page_zone(page);
+	pg_data_t *pgdat = page_pgdat(page);

 	page = compound_head(page);
-	spin_lock_irq(zone_lru_lock(zone));
-	__activate_page(page, mem_cgroup_page_lruvec(page, zone->zone_pgdat), NULL);
-	spin_unlock_irq(zone_lru_lock(zone));
+	spin_lock_irq(&pgdat->lru_lock);
+	__activate_page(page, mem_cgroup_page_lruvec(page, pgdat), NULL);
+	spin_unlock_irq(&pgdat->lru_lock);
 }
 #endif

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a9852ed7b97f..2d081a32c6a8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1614,8 +1614,8 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,

 }

-/*
- * zone_lru_lock is heavily contended. Some of the functions that
+/**
+ * pgdat->lru_lock is heavily contended. Some of the functions that
  * shrink the lists perform better by taking out a batch of pages
  * and working on them outside the LRU lock.
 *
@@ -1750,11 +1750,11 @@ int isolate_lru_page(struct page *page)
 	WARN_RATELIMIT(PageTail(page), "trying to isolate tail page");

 	if (PageLRU(page)) {
-		struct zone *zone = page_zone(page);
+		pg_data_t *pgdat = page_pgdat(page);
 		struct lruvec *lruvec;

-		spin_lock_irq(zone_lru_lock(zone));
-		lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
+		spin_lock_irq(&pgdat->lru_lock);
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		if (PageLRU(page)) {
 			int lru = page_lru(page);
 			get_page(page);
@@ -1762,7 +1762,7 @@ int isolate_lru_page(struct page *page)
 			del_page_from_lru_list(page, lruvec, lru);
 			ret = 0;
 		}
-		spin_unlock_irq(zone_lru_lock(zone));
+		spin_unlock_irq(&pgdat->lru_lock);
 	}
 	return ret;
 }
@@ -1990,9 +1990,9 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 * processes, from rmap.
 *
 * If the pages are mostly unmapped, the processing is fast and it is
- * appropriate to hold zone_lru_lock across the whole operation. But if
+ * appropriate to hold pgdat->lru_lock across the whole operation. But if
 * the pages are mapped, the processing is slow (page_referenced()) so we
- * should drop zone_lru_lock around each page. It's impossible to balance
+ * should drop pgdat->lru_lock around each page. It's impossible to balance
 * this, so instead we remove the pages from the LRU while processing them.
 * It is safe to rely on PG_active against the non-LRU pages in here because
 * nobody will play with that bit on a non-LRU page.
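For illustration (not part of the patch): a minimal userspace C sketch of why
the two access paths name the same lock. The struct and helper names mirror
the kernel's, but the types here are simplified stand-ins. Every zone keeps a
back-pointer to its node's pglist_data, where lru_lock actually lives, so
zone_lru_lock(page_zone(page)) is just a detour around
&page_pgdat(page)->lru_lock.

#include <assert.h>
#include <pthread.h>

struct pglist_data;

struct zone {
	struct pglist_data *zone_pgdat;	/* back-pointer to the owning node */
};

struct pglist_data {
	pthread_spinlock_t lru_lock;	/* one LRU lock per node */
	struct zone node_zones[4];
};

struct page {
	struct zone *zone;		/* stand-in for page_zone(page) */
};

static struct pglist_data *page_pgdat(struct page *page)
{
	return page->zone->zone_pgdat;
}

/* The helper the patch removes: a pure detour through the zone. */
static pthread_spinlock_t *zone_lru_lock(struct zone *zone)
{
	return &zone->zone_pgdat->lru_lock;
}

int main(void)
{
	struct pglist_data node;
	struct page page;

	pthread_spin_init(&node.lru_lock, PTHREAD_PROCESS_PRIVATE);
	for (int i = 0; i < 4; i++)
		node.node_zones[i].zone_pgdat = &node;
	page.zone = &node.node_zones[2];

	/* Both paths resolve to the identical per-node lock. */
	assert(zone_lru_lock(page.zone) == &page_pgdat(&page)->lru_lock);
	return 0;
}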
From patchwork Fri Feb 22 17:43:35 2019
X-Patchwork-Submitter: Andrey Ryabinin
X-Patchwork-Id: 10826749
From: Andrey Ryabinin
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Ryabinin,
    Johannes Weiner, Michal Hocko, Vlastimil Babka, Rik van Riel, Mel Gorman
Subject: [PATCH 3/5] mm/compaction: pass pgdat to too_many_isolated() instead
 of zone
Date: Fri, 22 Feb 2019 20:43:35 +0300
Message-Id: <20190222174337.26390-3-aryabinin@virtuozzo.com>
In-Reply-To: <20190222174337.26390-1-aryabinin@virtuozzo.com>
References: <20190222174337.26390-1-aryabinin@virtuozzo.com>

too_many_isolated() in mm/compaction.c looks only at node state, so it
makes more sense to pass it a pgdat instead of a zone. (A standalone
sketch of the threshold it computes follows the diff.)
Signed-off-by: Andrey Ryabinin
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Rik van Riel
Cc: Mel Gorman
Acked-by: Rik van Riel
Acked-by: Vlastimil Babka
---
 mm/compaction.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index a3305f13a138..b2d02aba41d8 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -738,16 +738,16 @@ isolate_freepages_range(struct compact_control *cc,
 }

 /* Similar to reclaim, but different enough that they don't share logic */
-static bool too_many_isolated(struct zone *zone)
+static bool too_many_isolated(pg_data_t *pgdat)
 {
 	unsigned long active, inactive, isolated;

-	inactive = node_page_state(zone->zone_pgdat, NR_INACTIVE_FILE) +
-			node_page_state(zone->zone_pgdat, NR_INACTIVE_ANON);
-	active = node_page_state(zone->zone_pgdat, NR_ACTIVE_FILE) +
-			node_page_state(zone->zone_pgdat, NR_ACTIVE_ANON);
-	isolated = node_page_state(zone->zone_pgdat, NR_ISOLATED_FILE) +
-			node_page_state(zone->zone_pgdat, NR_ISOLATED_ANON);
+	inactive = node_page_state(pgdat, NR_INACTIVE_FILE) +
+			node_page_state(pgdat, NR_INACTIVE_ANON);
+	active = node_page_state(pgdat, NR_ACTIVE_FILE) +
+			node_page_state(pgdat, NR_ACTIVE_ANON);
+	isolated = node_page_state(pgdat, NR_ISOLATED_FILE) +
+			node_page_state(pgdat, NR_ISOLATED_ANON);

 	return isolated > (inactive + active) / 2;
 }
@@ -774,8 +774,7 @@ static unsigned long
 isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			unsigned long end_pfn, isolate_mode_t isolate_mode)
 {
-	struct zone *zone = cc->zone;
-	pg_data_t *pgdat = zone->zone_pgdat;
+	pg_data_t *pgdat = cc->zone->zone_pgdat;
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct lruvec *lruvec;
 	unsigned long flags = 0;
@@ -791,7 +790,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	 * list by either parallel reclaimers or compaction. If there are,
 	 * delay for some time until fewer pages are isolated
 	 */
-	while (unlikely(too_many_isolated(zone))) {
+	while (unlikely(too_many_isolated(pgdat))) {
 		/* async migration should just abort */
 		if (cc->mode == MIGRATE_ASYNC)
 			return 0;
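For reference (not part of the patch): a standalone sketch of the throttling
threshold that too_many_isolated() computes, with hypothetical per-node
counts. Compaction backs off once the pages isolated from the node's LRU
lists exceed half of its resident (inactive + active) pages.

#include <stdbool.h>
#include <stdio.h>

/* Same predicate as too_many_isolated() above, on plain counters. */
static bool too_many_isolated(unsigned long inactive, unsigned long active,
			      unsigned long isolated)
{
	return isolated > (inactive + active) / 2;
}

int main(void)
{
	/* Hypothetical counts, in pages: threshold is (6000 + 4000) / 2 = 5000. */
	printf("%d\n", too_many_isolated(6000, 4000, 4000));	/* 0: keep going */
	printf("%d\n", too_many_isolated(6000, 4000, 5001));	/* 1: back off  */
	return 0;
}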
From patchwork Fri Feb 22 17:43:36 2019
X-Patchwork-Submitter: Andrey Ryabinin
X-Patchwork-Id: 10826751
From: Andrey Ryabinin
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Ryabinin,
    Johannes Weiner, Michal Hocko, Vlastimil Babka, Rik van Riel, Mel Gorman
Subject: [PATCH 4/5] mm/vmscan: remove unused lru_pages argument
Date: Fri, 22 Feb 2019 20:43:36 +0300
Message-Id: <20190222174337.26390-4-aryabinin@virtuozzo.com>
In-Reply-To: <20190222174337.26390-1-aryabinin@virtuozzo.com>
References: <20190222174337.26390-1-aryabinin@virtuozzo.com>

The argument 'unsigned long *lru_pages' is passed around with no
purpose; remove it.
Signed-off-by: Andrey Ryabinin
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Rik van Riel
Cc: Mel Gorman
Acked-by: Johannes Weiner
Acked-by: Vlastimil Babka
---
 mm/vmscan.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2d081a32c6a8..07f74e9507b6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2257,8 +2257,7 @@ enum scan_balance {
 * nr[2] = file inactive pages to scan; nr[3] = file active pages to scan
 */
 static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
-			   struct scan_control *sc, unsigned long *nr,
-			   unsigned long *lru_pages)
+			   struct scan_control *sc, unsigned long *nr)
 {
 	int swappiness = mem_cgroup_swappiness(memcg);
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
@@ -2409,7 +2408,6 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 	fraction[1] = fp;
 	denominator = ap + fp + 1;
 out:
-	*lru_pages = 0;
 	for_each_evictable_lru(lru) {
 		int file = is_file_lru(lru);
 		unsigned long lruvec_size;
@@ -2525,7 +2523,6 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 			BUG();
 		}

-		*lru_pages += lruvec_size;
 		nr[lru] = scan;
 	}
 }
@@ -2534,7 +2531,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 * This is a basic per-node page freer. Used by both kswapd and direct reclaim.
 */
 static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memcg,
-			      struct scan_control *sc, unsigned long *lru_pages)
+			      struct scan_control *sc)
 {
 	struct lruvec *lruvec = mem_cgroup_lruvec(pgdat, memcg);
 	unsigned long nr[NR_LRU_LISTS];
@@ -2546,7 +2543,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	struct blk_plug plug;
 	bool scan_adjusted;

-	get_scan_count(lruvec, memcg, sc, nr, lru_pages);
+	get_scan_count(lruvec, memcg, sc, nr);

 	/* Record the original scan target for proportional adjustments later */
 	memcpy(targets, nr, sizeof(nr));
@@ -2751,7 +2748,6 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 		.pgdat = pgdat,
 		.priority = sc->priority,
 	};
-	unsigned long node_lru_pages = 0;
 	struct mem_cgroup *memcg;

 	memset(&sc->nr, 0, sizeof(sc->nr));
@@ -2761,7 +2757,6 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	memcg = mem_cgroup_iter(root, NULL, &reclaim);
 	do {
-		unsigned long lru_pages;
 		unsigned long reclaimed;
 		unsigned long scanned;
@@ -2798,8 +2793,7 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 		reclaimed = sc->nr_reclaimed;
 		scanned = sc->nr_scanned;

-		shrink_node_memcg(pgdat, memcg, sc, &lru_pages);
-		node_lru_pages += lru_pages;
+		shrink_node_memcg(pgdat, memcg, sc);

 		if (sc->may_shrinkslab) {
 			shrink_slab(sc->gfp_mask, pgdat->node_id,
@@ -3332,7 +3326,6 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
 		.may_swap = !noswap,
 		.may_shrinkslab = 1,
 	};
-	unsigned long lru_pages;

 	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
 			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
@@ -3349,7 +3342,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
 	 * will pick up pages from other mem cgroup's as well. We hack
 	 * the priority and make it zero.
 	 */
-	shrink_node_memcg(pgdat, memcg, &sc, &lru_pages);
+	shrink_node_memcg(pgdat, memcg, &sc);

 	trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed);
From patchwork Fri Feb 22 17:43:37 2019
X-Patchwork-Submitter: Andrey Ryabinin
X-Patchwork-Id: 10826755
From: Andrey Ryabinin
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Ryabinin,
    Johannes Weiner, Michal Hocko, Vlastimil Babka, Rik van Riel, Mel Gorman
Subject: [PATCH 5/5] mm/vmscan: don't forcibly shrink the active anon lru list
Date: Fri, 22 Feb 2019 20:43:37 +0300
Message-Id: <20190222174337.26390-5-aryabinin@virtuozzo.com>
In-Reply-To: <20190222174337.26390-1-aryabinin@virtuozzo.com>
References: <20190222174337.26390-1-aryabinin@virtuozzo.com>

shrink_node_memcg() always forcibly shrinks the active anon list. This
doesn't seem like correct behavior: if the system/memcg has no swap, it
is absolutely pointless to rebalance the anon lru lists; and in case we
did scan the active anon list above, it's unclear why we would need
this additional forced scan. If there are cases where we want a more
aggressive scan of the anon lru, we should just change the scan target
in get_scan_count() (and better explain such cases in the comments).

Remove this forced shrink and let get_scan_count() decide how much of
the active anon list we want to shrink. (A standalone mock of the
control-flow change follows the diff.)
Signed-off-by: Andrey Ryabinin
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Rik van Riel
Cc: Mel Gorman
---
 mm/vmscan.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07f74e9507b6..efd10d6b9510 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2563,8 +2563,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 			sc->priority == DEF_PRIORITY);

 	blk_start_plug(&plug);
-	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
-					nr[LRU_INACTIVE_FILE]) {
+	while (nr[LRU_ACTIVE_ANON] || nr[LRU_INACTIVE_ANON] ||
+		nr[LRU_ACTIVE_FILE] || nr[LRU_INACTIVE_FILE]) {
 		unsigned long nr_anon, nr_file, percentage;
 		unsigned long nr_scanned;
@@ -2636,14 +2636,6 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	}
 	blk_finish_plug(&plug);
 	sc->nr_reclaimed += nr_reclaimed;
-
-	/*
-	 * Even if we did not try to evict anon pages at all, we want to
-	 * rebalance the anon lru active/inactive ratio.
-	 */
-	if (inactive_list_is_low(lruvec, false, memcg, sc, true))
-		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
-				   sc, LRU_ACTIVE_ANON);
 }

 /* Use reclaim/compaction for costly allocs or under memory pressure */
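For illustration (not part of the patch): a minimal userspace mock of the
control-flow change. The LRU indices mirror the kernel's and the scan body is
a stub. Before the patch, nr[LRU_ACTIVE_ANON] never drove the loop and a
forced anon rebalance ran unconditionally afterwards; after it, the active
anon list is scanned only when get_scan_count() actually set a target for it
(e.g. never on a swapless setup).

#include <stdio.h>

enum lru_list { LRU_INACTIVE_ANON, LRU_ACTIVE_ANON,
		LRU_INACTIVE_FILE, LRU_ACTIVE_FILE, NR_LRU_LISTS };

static void scan(int lru, unsigned long nr[NR_LRU_LISTS])
{
	printf("scanning lru %d (%lu pages)\n", lru, nr[lru]);
	nr[lru] = 0;	/* stub: pretend the whole target was scanned */
}

/* After the patch: the loop covers all four lists, so active anon is
 * scanned iff get_scan_count() gave it a non-zero target -- there is no
 * unconditional shrink_active_list(..., LRU_ACTIVE_ANON) tail anymore. */
static void shrink_node_memcg_mock(unsigned long nr[NR_LRU_LISTS])
{
	while (nr[LRU_ACTIVE_ANON] || nr[LRU_INACTIVE_ANON] ||
	       nr[LRU_ACTIVE_FILE] || nr[LRU_INACTIVE_FILE]) {
		for (int lru = 0; lru < NR_LRU_LISTS; lru++)
			if (nr[lru])
				scan(lru, nr);
	}
}

int main(void)
{
	/* Swapless setup: get_scan_count() gives the anon lists no targets,
	 * so they are (correctly) never touched. */
	unsigned long nr[NR_LRU_LISTS] = { 0, 0, 128, 64 };

	shrink_node_memcg_mock(nr);
	return 0;
}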