From patchwork Mon Dec 7 22:09:39 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956971
Date: Mon, 7 Dec 2020 15:09:39 -0700
Message-Id: <20201207220949.830352-2-yuzhao@google.com>
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 01/11] mm: use add_page_to_lru_list()
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
    Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Yu Zhao

There is add_page_to_lru_list(), and move_pages_to_lru() should reuse
it, not duplicate it.

Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
 mm/vmscan.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 469016222cdb..a174594e40f8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1821,7 +1821,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
 	struct page *page;
-	enum lru_list lru;
 
 	while (!list_empty(list)) {
 		page = lru_to_page(list);
@@ -1866,11 +1865,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		 * inhibits memcg migration).
 		 */
 		VM_BUG_ON_PAGE(!lruvec_holds_page_lru_lock(page, lruvec), page);
-		lru = page_lru(page);
+		add_page_to_lru_list(page, lruvec, page_lru(page));
 		nr_pages = thp_nr_pages(page);
-
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_add(&page->lru, &lruvec->lists[lru]);
 		nr_moved += nr_pages;
 		if (PageActive(page))
 			workingset_age_nonresident(lruvec, nr_pages);

From patchwork Mon Dec 7 22:09:40 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956973
Date: Mon, 7 Dec 2020 15:09:40 -0700
Message-Id: <20201207220949.830352-3-yuzhao@google.com>
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 02/11] mm: shuffle lru list addition and deletion functions
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
    Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Yu Zhao

These functions will call page_lru() in the following patches. Move
them below page_lru() to avoid the forward declaration.
Signed-off-by: Yu Zhao
---
 include/linux/mm_inline.h | 42 +++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 8fc71e9d7bb0..2889741f450a 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -45,27 +45,6 @@ static __always_inline void update_lru_size(struct lruvec *lruvec,
 #endif
 }
 
-static __always_inline void add_page_to_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
-{
-	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
-	list_add(&page->lru, &lruvec->lists[lru]);
-}
-
-static __always_inline void add_page_to_lru_list_tail(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
-{
-	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
-	list_add_tail(&page->lru, &lruvec->lists[lru]);
-}
-
-static __always_inline void del_page_from_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
-{
-	list_del(&page->lru);
-	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
-}
-
 /**
  * page_lru_base_type - which LRU list type should a page be on?
  * @page: the page to test
@@ -125,4 +104,25 @@ static __always_inline enum lru_list page_lru(struct page *page)
 	}
 	return lru;
 }
+
+static __always_inline void add_page_to_lru_list(struct page *page,
+				struct lruvec *lruvec, enum lru_list lru)
+{
+	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
+	list_add(&page->lru, &lruvec->lists[lru]);
+}
+
+static __always_inline void add_page_to_lru_list_tail(struct page *page,
+				struct lruvec *lruvec, enum lru_list lru)
+{
+	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
+	list_add_tail(&page->lru, &lruvec->lists[lru]);
+}
+
+static __always_inline void del_page_from_lru_list(struct page *page,
+				struct lruvec *lruvec, enum lru_list lru)
+{
+	list_del(&page->lru);
+	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
+}
 #endif

From patchwork Mon Dec 7 22:09:41 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956977
Date: Mon, 7 Dec 2020 15:09:41 -0700
Message-Id: <20201207220949.830352-4-yuzhao@google.com>
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 03/11] mm: don't pass "enum lru_list" to lru list addition functions
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
    Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Yu Zhao

The "enum lru_list" parameter to add_page_to_lru_list() and
add_page_to_lru_list_tail() is redundant in the sense that it can be
extracted from the "struct page" parameter by page_lru().
A caveat is that we need to make sure PageActive() or PageUnevictable()
is correctly set or cleared before calling these two functions, and all
existing callers already do so.

Signed-off-by: Yu Zhao
---
 include/linux/mm_inline.h |  8 ++++++--
 mm/swap.c                 | 15 +++++++--------
 mm/vmscan.c               |  6 ++----
 3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 2889741f450a..130ba3201d3f 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -106,15 +106,19 @@ static __always_inline enum lru_list page_lru(struct page *page)
 }
 
 static __always_inline void add_page_to_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
+	enum lru_list lru = page_lru(page);
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add(&page->lru, &lruvec->lists[lru]);
 }
 
 static __always_inline void add_page_to_lru_list_tail(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
+	enum lru_list lru = page_lru(page);
+
 	update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
 	list_add_tail(&page->lru, &lruvec->lists[lru]);
 }
diff --git a/mm/swap.c b/mm/swap.c
index 5022dfe388ad..136acabbfab5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -231,7 +231,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 	if (!PageUnevictable(page)) {
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
-		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
+		add_page_to_lru_list_tail(page, lruvec);
 		__count_vm_events(PGROTATED, thp_nr_pages(page));
 	}
 }
@@ -313,8 +313,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec)
 
 		del_page_from_lru_list(page, lruvec, lru);
 		SetPageActive(page);
-		lru += LRU_ACTIVE;
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);
 
 		__count_vm_events(PGACTIVATE, nr_pages);
@@ -543,14 +542,14 @@
static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 		 * It can make readahead confusing. But race window
 		 * is _really_ small and it's non-critical problem.
 		 */
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 		SetPageReclaim(page);
 	} else {
 		/*
 		 * The page's writeback ends up during pagevec
 		 * We moves tha page into tail of inactive.
 		 */
-		add_page_to_lru_list_tail(page, lruvec, lru);
+		add_page_to_lru_list_tail(page, lruvec);
 		__count_vm_events(PGROTATED, nr_pages);
 	}
 
@@ -570,7 +569,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 
 		__count_vm_events(PGDEACTIVATE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE,
@@ -595,7 +594,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 		 * anonymous pages
 		 */
 		ClearPageSwapBacked(page);
-		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
+		add_page_to_lru_list(page, lruvec);
 
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE,
@@ -1005,7 +1004,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 		__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
 	}
 
-	add_page_to_lru_list(page, lruvec, lru);
+	add_page_to_lru_list(page, lruvec);
 	trace_mm_lru_insertion(page, lru);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a174594e40f8..8fc8f2c9d7ec 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1865,7 +1865,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		 * inhibits memcg migration).
 		 */
 		VM_BUG_ON_PAGE(!lruvec_holds_page_lru_lock(page, lruvec), page);
-		add_page_to_lru_list(page, lruvec, page_lru(page));
+		add_page_to_lru_list(page, lruvec);
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
 		if (PageActive(page))
@@ -4280,12 +4280,10 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		lruvec = relock_page_lruvec_irq(page, lruvec);
 		if (page_evictable(page) && PageUnevictable(page)) {
-			enum lru_list lru = page_lru_base_type(page);
-
 			VM_BUG_ON_PAGE(PageActive(page), page);
 			ClearPageUnevictable(page);
 			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
-			add_page_to_lru_list(page, lruvec, lru);
+			add_page_to_lru_list(page, lruvec);
 			pgrescued += nr_pages;
 		}
 		SetPageLRU(page);

From patchwork Mon Dec 7 22:09:42 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956981
Date: Mon, 7 Dec 2020 15:09:42 -0700
Message-Id: <20201207220949.830352-5-yuzhao@google.com>
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 04/11] mm: don't pass "enum lru_list" to trace_mm_lru_insertion()
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
    Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Yu Zhao

The parameter is redundant in the sense that it can be correctly
extracted from the "struct page" parameter by page_lru().
Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
 include/trace/events/pagemap.h | 11 ++++-------
 mm/swap.c                      |  5 +----
 2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/include/trace/events/pagemap.h b/include/trace/events/pagemap.h
index 8fd1babae761..e1735fe7c76a 100644
--- a/include/trace/events/pagemap.h
+++ b/include/trace/events/pagemap.h
@@ -27,24 +27,21 @@
 
 TRACE_EVENT(mm_lru_insertion,
 
-	TP_PROTO(
-		struct page *page,
-		int lru
-	),
+	TP_PROTO(struct page *page),
 
-	TP_ARGS(page, lru),
+	TP_ARGS(page),
 
 	TP_STRUCT__entry(
 		__field(struct page *,	page	)
 		__field(unsigned long,	pfn	)
-		__field(int,		lru	)
+		__field(enum lru_list,	lru	)
 		__field(unsigned long,	flags	)
 	),
 
 	TP_fast_assign(
 		__entry->page	= page;
 		__entry->pfn	= page_to_pfn(page);
-		__entry->lru	= lru;
+		__entry->lru	= page_lru(page);
 		__entry->flags	= trace_pagemap_flags(page);
 	),
diff --git a/mm/swap.c b/mm/swap.c
index 136acabbfab5..e053b4db108a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -957,7 +957,6 @@ EXPORT_SYMBOL(__pagevec_release);
 
 static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 {
-	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
 	int nr_pages = thp_nr_pages(page);
 
@@ -993,11 +992,9 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 	smp_mb__after_atomic();
 
 	if (page_evictable(page)) {
-		lru = page_lru(page);
 		if (was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
-		lru = LRU_UNEVICTABLE;
 		ClearPageActive(page);
 		SetPageUnevictable(page);
 		if (!was_unevictable)
@@ -1005,7 +1002,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 	}
 
 	add_page_to_lru_list(page, lruvec);
-	trace_mm_lru_insertion(page, lru);
+	trace_mm_lru_insertion(page);
 }
 
 struct lruvecs {

From patchwork Mon Dec 7 22:09:43 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956975
Date: Mon, 7 Dec 2020 15:09:43 -0700
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
Message-Id: <20201207220949.830352-6-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 05/11] mm: don't pass "enum lru_list" to del_page_from_lru_list()
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin, Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao

The parameter is redundant: it can be derived from the "struct page"
parameter by page_lru(). We need to make sure the existing PageActive()
or PageUnevictable() flag remains set until the function returns. A few
places don't conform, and simple reordering fixes them.

This patch may leave page_off_lru() looking odd; the next patch takes
care of it.
Signed-off-by: Yu Zhao
---
 include/linux/mm_inline.h |  5 +++--
 mm/compaction.c           |  2 +-
 mm/mlock.c                |  3 +--
 mm/swap.c                 | 26 ++++++++++----------------
 mm/vmscan.c               |  4 ++--
 5 files changed, 17 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 130ba3201d3f..ffacc6273678 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -124,9 +124,10 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
 }
 
 static __always_inline void del_page_from_lru_list(struct page *page,
-				struct lruvec *lruvec, enum lru_list lru)
+				struct lruvec *lruvec)
 {
 	list_del(&page->lru);
-	update_lru_size(lruvec, lru, page_zonenum(page), -thp_nr_pages(page));
+	update_lru_size(lruvec, page_lru(page), page_zonenum(page),
+			-thp_nr_pages(page));
 }
 
 #endif
diff --git a/mm/compaction.c b/mm/compaction.c
index 8049d3530812..fd2058330497 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1034,7 +1034,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			low_pfn += compound_nr(page) - 1;
 
 		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		mod_node_page_state(page_pgdat(page),
 				NR_ISOLATED_ANON + page_is_file_lru(page),
 				thp_nr_pages(page));
diff --git a/mm/mlock.c b/mm/mlock.c
index 55b3b3672977..73960bb3464d 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -278,8 +278,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 		 */
 		if (TestClearPageLRU(page)) {
 			lruvec = relock_page_lruvec_irq(page, lruvec);
-			del_page_from_lru_list(page, lruvec,
-					page_lru(page));
+			del_page_from_lru_list(page, lruvec);
 			continue;
 		} else
 			__munlock_isolation_failed(page);
diff --git a/mm/swap.c b/mm/swap.c
index e053b4db108a..d55a0c27d804 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -85,7 +85,8 @@ static void __page_cache_release(struct page *page)
 		lruvec = lock_page_lruvec_irqsave(page, &flags);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		__ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_off_lru(page));
+		del_page_from_lru_list(page, lruvec);
+		page_off_lru(page);
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 	}
 	__ClearPageWaiters(page);
@@ -229,7 +230,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 {
 	if (!PageUnevictable(page)) {
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec);
 		__count_vm_events(PGROTATED, thp_nr_pages(page));
@@ -308,10 +309,9 @@ void lru_note_cost_page(struct page *page)
 static void __activate_page(struct page *page, struct lruvec *lruvec)
 {
 	if (!PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru);
+		del_page_from_lru_list(page, lruvec);
 		SetPageActive(page);
 		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);
@@ -518,8 +518,7 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
  */
 static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 {
-	int lru;
-	bool active;
+	bool active = PageActive(page);
 	int nr_pages = thp_nr_pages(page);
 
 	if (PageUnevictable(page))
@@ -529,10 +528,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 	if (page_mapped(page))
 		return;
 
-	active = PageActive(page);
-	lru = page_lru_base_type(page);
-
-	del_page_from_lru_list(page, lruvec, lru + active);
+	del_page_from_lru_list(page, lruvec);
 	ClearPageActive(page);
 	ClearPageReferenced(page);
 
@@ -563,10 +559,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 {
 	if (PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec, lru + LRU_ACTIVE);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		add_page_to_lru_list(page, lruvec);
@@ -581,11 +576,9 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 {
 	if (PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		bool active = PageActive(page);
 		int nr_pages = thp_nr_pages(page);
 
-		del_page_from_lru_list(page, lruvec,
-				LRU_INACTIVE_ANON + active);
+		del_page_from_lru_list(page, lruvec);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		/*
@@ -919,7 +912,8 @@ void release_pages(struct page **pages, int nr)
 
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		__ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_off_lru(page));
+		del_page_from_lru_list(page, lruvec);
+		page_off_lru(page);
 	}
 	__ClearPageWaiters(page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8fc8f2c9d7ec..49451899037c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1764,7 +1764,7 @@ int isolate_lru_page(struct page *page)
 
 		get_page(page);
 		lruvec = lock_page_lruvec_irq(page);
-		del_page_from_lru_list(page, lruvec, page_lru(page));
+		del_page_from_lru_list(page, lruvec);
 		unlock_page_lruvec_irq(lruvec);
 		ret = 0;
 	}
@@ -4281,8 +4281,8 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		lruvec = relock_page_lruvec_irq(page, lruvec);
 		if (page_evictable(page) && PageUnevictable(page)) {
 			VM_BUG_ON_PAGE(PageActive(page), page);
+			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
-			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
 			add_page_to_lru_list(page, lruvec);
 			pgrescued += nr_pages;
 		}

From patchwork Mon Dec 7 22:09:44 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956991
Date: Mon, 7 Dec 2020 15:09:44 -0700
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
Message-Id: <20201207220949.830352-7-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 06/11] mm: add __clear_page_lru_flags() to replace page_off_lru()
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin, Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao

Similar to page_off_lru(), the new function does non-atomic clearing
of PageLRU(), in addition to PageActive() and PageUnevictable(), on a
page that has no references left. If PageActive() and PageUnevictable()
are both set, refuse to clear either and leave them to bad_page(). This
is a behavior change that is meant to help debug.

Signed-off-by: Yu Zhao
---
 include/linux/mm_inline.h | 28 ++++++++++------------------
 mm/swap.c                 |  6 ++----
 mm/vmscan.c               |  3 +--
 3 files changed, 13 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ffacc6273678..ef3fd79222e5 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -61,27 +61,19 @@ static inline enum lru_list page_lru_base_type(struct page *page)
 }
 
 /**
- * page_off_lru - which LRU list was page on? clearing its lru flags.
- * @page: the page to test
- *
- * Returns the LRU list a page was on, as an index into the array of LRU
- * lists; and clears its Unevictable or Active flags, ready for freeing.
+ * __clear_page_lru_flags - clear page lru flags before releasing a page
+ * @page: the page that was on lru and now has a zero reference
  */
-static __always_inline enum lru_list page_off_lru(struct page *page)
+static __always_inline void __clear_page_lru_flags(struct page *page)
 {
-	enum lru_list lru;
+	__ClearPageLRU(page);
 
-	if (PageUnevictable(page)) {
-		__ClearPageUnevictable(page);
-		lru = LRU_UNEVICTABLE;
-	} else {
-		lru = page_lru_base_type(page);
-		if (PageActive(page)) {
-			__ClearPageActive(page);
-			lru += LRU_ACTIVE;
-		}
-	}
-	return lru;
+	/* this shouldn't happen, so leave the flags to bad_page() */
+	if (PageActive(page) && PageUnevictable(page))
+		return;
+
+	__ClearPageActive(page);
+	__ClearPageUnevictable(page);
 }
 
 /**
diff --git a/mm/swap.c b/mm/swap.c
index d55a0c27d804..a37c896a32b0 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -84,9 +84,8 @@ static void __page_cache_release(struct page *page)
 		lruvec = lock_page_lruvec_irqsave(page, &flags);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec);
-		page_off_lru(page);
+		__clear_page_lru_flags(page);
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 	}
 	__ClearPageWaiters(page);
@@ -911,9 +910,8 @@ void release_pages(struct page **pages, int nr)
 			lock_batch = 0;
 
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
-		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec);
-		page_off_lru(page);
+		__clear_page_lru_flags(page);
 	}
 	__ClearPageWaiters(page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 49451899037c..e6bdfdfa2da1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1847,8 +1847,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		SetPageLRU(page);
 
 		if (unlikely(put_page_testzero(page))) {
-			__ClearPageLRU(page);
-			__ClearPageActive(page);
+			__clear_page_lru_flags(page);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&lruvec->lru_lock);

From patchwork Mon Dec 7 22:09:45 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956985
Date: Mon, 7 Dec 2020 15:09:45 -0700
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
Message-Id: <20201207220949.830352-8-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 07/11] mm: VM_BUG_ON lru page flags
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin, Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao

Move scattered VM_BUG_ONs to two essential places that cover all
lru list additions and deletions.

Signed-off-by: Yu Zhao
---
 include/linux/mm_inline.h | 4 ++++
 mm/swap.c                 | 2 --
 mm/vmscan.c               | 1 -
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ef3fd79222e5..6d907a4dd6ad 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -66,6 +66,8 @@ static inline enum lru_list page_lru_base_type(struct page *page)
  */
 static __always_inline void __clear_page_lru_flags(struct page *page)
 {
+	VM_BUG_ON_PAGE(!PageLRU(page), page);
+
 	__ClearPageLRU(page);
 
 	/* this shouldn't happen, so leave the flags to bad_page() */
@@ -87,6 +89,8 @@ static __always_inline enum lru_list page_lru(struct page *page)
 {
 	enum lru_list lru;
 
+	VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);
+
 	if (PageUnevictable(page))
 		lru = LRU_UNEVICTABLE;
 	else {
diff --git a/mm/swap.c b/mm/swap.c
index a37c896a32b0..09c4a48e0bcd 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -83,7 +83,6 @@ static void __page_cache_release(struct page *page)
 		unsigned long flags;
 
 		lruvec = lock_page_lruvec_irqsave(page, &flags);
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		del_page_from_lru_list(page, lruvec);
 		__clear_page_lru_flags(page);
 		unlock_page_lruvec_irqrestore(lruvec, flags);
@@ -909,7 +908,6 @@ void release_pages(struct page **pages, int nr)
 			if (prev_lruvec != lruvec)
 				lock_batch = 0;
 
-			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			del_page_from_lru_list(page, lruvec);
 			__clear_page_lru_flags(page);
 		}
 
 		__ClearPageWaiters(page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e6bdfdfa2da1..95e581c9d9af 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4279,7 +4279,6 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		lruvec = relock_page_lruvec_irq(page, lruvec);
 		if (page_evictable(page) && PageUnevictable(page)) {
-			VM_BUG_ON_PAGE(PageActive(page), page);
 			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
 			add_page_to_lru_list(page, lruvec);

From patchwork Mon Dec 7 22:09:46 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956979
Date: Mon, 7 Dec 2020 15:09:46 -0700
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
Message-Id: <20201207220949.830352-9-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 08/11] mm: fold page_lru_base_type() into its sole caller
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin, Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao

We've removed all other references to this function.
Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
 include/linux/mm_inline.h | 27 ++++++---------------------
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 6d907a4dd6ad..7183c7a03f09 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -45,21 +45,6 @@ static __always_inline void update_lru_size(struct lruvec *lruvec,
 #endif
 }
 
-/**
- * page_lru_base_type - which LRU list type should a page be on?
- * @page: the page to test
- *
- * Used for LRU list index arithmetic.
- *
- * Returns the base LRU type - file or anon - @page should be on.
- */
-static inline enum lru_list page_lru_base_type(struct page *page)
-{
-	if (page_is_file_lru(page))
-		return LRU_INACTIVE_FILE;
-	return LRU_INACTIVE_ANON;
-}
-
 /**
  * __clear_page_lru_flags - clear page lru flags before releasing a page
  * @page: the page that was on lru and now has a zero reference
@@ -92,12 +77,12 @@ static __always_inline enum lru_list page_lru(struct page *page)
 	VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);
 
 	if (PageUnevictable(page))
-		lru = LRU_UNEVICTABLE;
-	else {
-		lru = page_lru_base_type(page);
-		if (PageActive(page))
-			lru += LRU_ACTIVE;
-	}
+		return LRU_UNEVICTABLE;
+
+	lru = page_is_file_lru(page) ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
+	if (PageActive(page))
+		lru += LRU_ACTIVE;
+
 	return lru;
 }

From patchwork Mon Dec 7 22:09:47 2020
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 11956983
; Mon, 7 Dec 2020 22:10:15 +0000 (UTC) X-FDA: 77567880390.25.trade47_4d05b9f273e1 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin25.hostedemail.com (Postfix) with ESMTP id 231B31804E3AF for ; Mon, 7 Dec 2020 22:10:15 +0000 (UTC) X-HE-Tag: trade47_4d05b9f273e1 X-Filterd-Recvd-Size: 4407 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) by imf05.hostedemail.com (Postfix) with ESMTP for ; Mon, 7 Dec 2020 22:10:13 +0000 (UTC) Received: by mail-yb1-f202.google.com with SMTP id b9so14104556ybi.12 for ; Mon, 07 Dec 2020 14:10:13 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=yXXP8quBOjVl1OL9RcYZvWTz/kgt9y486O6FjVXA5xY=; b=tKfFhaUpHhIEcN5B0oEAXQi1YVvvtNTn5VnR3es/0Zgh8LP5+PcT2ZvOokvDcd/zjm e1HPTjOqBasqW1hy8QbsLfVIBX1lP5I3jneG28Q4WG/JJK9roWQpnFU8/0siY4qxGsqS UdJNMaSQlzAAk2JYDIDx8+Asl3Z/Hen+9ykXYXXgdWAs/pHAenrOnfqg+2neULTmPS4w MbOC1UJ3s9JdvsHziY3jCg/5kUnMNSD4lgNgIRFvkwb6EIFOsFnBsaWdrVE1Y+kO+1DD //Zz/j9FQy8BfNodeodRtGE3sBonCXdhQFqWjczlMV47xALVM46y5l7Ne27oYDMACip8 UO7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=yXXP8quBOjVl1OL9RcYZvWTz/kgt9y486O6FjVXA5xY=; b=uCpq+83i+Q0FzaWw9o/UiK38RuOwCTzW2pfg9yF7KfiHcFIT8UhnQY1HGoe09PlBi7 rjCxY1W9utChFp1xPV2FIwPJWdiwd/zbdQ4MG4oYpdB/yuJW+WZl6NOcJ9UrNbuMmALg ADtc5T88LAygmIUoB7ipb0I0LQ1g9ZwKQEWLk+AUdOqVLZyirBBix7n6uDVZR8xi40rL 40dfWWGEgxLB4H2LgrMroencaI1rjERB3Dq8B4SRrhlB9S8aNty25heDW8EGx5bdgPSM +PmfdZPxHXe+cf2CdW+Y3qUSrdagIGKuqQPYtmxsei64c5GriUlvL1PuxY7Qn8WaPSeS eOhA== X-Gm-Message-State: AOAM533nbwjNXBRtn2KbOYm1opuS7LWhPkmkRoR++CFj98ZogBYb0QaV ujxHqDTBxJs4E9KzcF7k0oLEGunaEg8= X-Google-Smtp-Source: ABdhPJz6IzTe5gVmF8PWCCMI+lC10tI2SH2ofu361/HjN9RsmEeOFq50ryygNPV7peCXpdeCwUo4G+330mw= X-Received: 
Date: Mon, 7 Dec 2020 15:09:47 -0700
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
Message-Id: <20201207220949.830352-10-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 09/11] mm: fold __update_lru_size() into its sole caller
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
    Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org

All other references to the function were removed after commit
a892cb6b977f ("mm/vmscan.c: use update_lru_size() in
update_lru_sizes()").

Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
 include/linux/mm_inline.h | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 7183c7a03f09..355ea1ee32bd 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -24,7 +24,7 @@ static inline int page_is_file_lru(struct page *page)
 	return !PageSwapBacked(page);
 }
 
-static __always_inline void __update_lru_size(struct lruvec *lruvec,
+static __always_inline void update_lru_size(struct lruvec *lruvec,
 				enum lru_list lru, enum zone_type zid,
 				int nr_pages)
 {
@@ -33,13 +33,6 @@ static __always_inline void __update_lru_size(struct lruvec *lruvec,
 	__mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
 	__mod_zone_page_state(&pgdat->node_zones[zid],
 				NR_ZONE_LRU_BASE + lru, nr_pages);
-}
-
-static __always_inline void update_lru_size(struct lruvec *lruvec,
-				enum lru_list lru, enum zone_type zid,
-				int nr_pages)
-{
-	__update_lru_size(lruvec, lru, zid, nr_pages);
 #ifdef CONFIG_MEMCG
 	mem_cgroup_update_lru_size(lruvec, lru, zid, nr_pages);
 #endif
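For readers following along, the fold above is the classic "merge a helper into its only caller" cleanup: once __update_lru_size() had exactly one caller, its body could be moved into update_lru_size() and the wrapper layer deleted. A minimal standalone sketch of the resulting shape (the struct, field names, and CONFIG_MEMCG_STUB macro are hypothetical stand-ins, not the kernel's types):

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel types; this only illustrates the
 * shape of the fold, not the real accounting. */
struct lruvec_stub {
	long node_count;	/* stands in for the node/zone vmstat counters */
	long memcg_count;	/* stands in for mem_cgroup_update_lru_size() */
};

/* After the fold: the old __update_lru_size()/update_lru_size() pair is one
 * function that updates the base counters and, under the (stand-in) memcg
 * config, the per-memcg size as well. */
static inline void update_lru_size(struct lruvec_stub *lruvec, long nr_pages)
{
	lruvec->node_count += nr_pages;		/* was the __update_lru_size() body */
#ifdef CONFIG_MEMCG_STUB
	lruvec->memcg_count += nr_pages;	/* memcg part stays behind the ifdef */
#endif
}
```

With a single caller and __always_inline, the fold changes no generated code; it only removes an indirection from the source.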
Date: Mon, 7 Dec 2020 15:09:48 -0700
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
Message-Id: <20201207220949.830352-11-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 10/11] mm: make lruvec_lru_size() static
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
    Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org

All other references to the function were removed after commit
b910718a948a ("mm: vmscan: detect file thrashing at the reclaim
root").

Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
 include/linux/mmzone.h | 2 --
 mm/vmscan.c            | 3 ++-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..2fc54e269eaf 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -872,8 +872,6 @@ static inline struct pglist_data *lruvec_pgdat(struct lruvec *lruvec)
 #endif
 }
 
-extern unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx);
-
 #ifdef CONFIG_HAVE_MEMORYLESS_NODES
 int local_memory_node(int node_id);
 #else
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 95e581c9d9af..fd0c2313bee4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -310,7 +310,8 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
  * @lru: lru to use
  * @zone_idx: zones to consider (use MAX_NR_ZONES for the whole LRU list)
  */
-unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
+static unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru,
+				     int zone_idx)
 {
 	unsigned long size = 0;
 	int zid;
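The change above is the usual pattern of giving a function internal linkage once its last external caller is gone: drop the extern prototype from the header and mark the definition static, so the symbol is confined to its translation unit and the compiler is free to inline or discard it. A toy single-file illustration (hypothetical function, not the kernel's lruvec_lru_size()):

```c
#include <assert.h>

/* Before: some header declared
 *	extern unsigned long zone_span(unsigned long per_zone, int zone_idx);
 * After: no external callers remain, so the prototype is dropped and the
 * definition becomes static, i.e. visible only inside this file. */
static unsigned long zone_span(unsigned long per_zone, int zone_idx)
{
	unsigned long size = 0;
	int zid;

	for (zid = 0; zid <= zone_idx; zid++)	/* mirrors the zid loop shape */
		size += per_zone;
	return size;
}
```

Besides the optimization freedom, static also means a stale out-of-tree caller now fails at link time instead of silently binding to an internal helper.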
Date: Mon, 7 Dec 2020 15:09:49 -0700
In-Reply-To: <20201207220949.830352-1-yuzhao@google.com>
Message-Id: <20201207220949.830352-12-yuzhao@google.com>
References: <20201207220949.830352-1-yuzhao@google.com>
Subject: [PATCH 11/11] mm: enlarge the "int nr_pages" parameter of update_lru_size()
From: Yu Zhao
To: Andrew Morton, Hugh Dickins, Alex Shi
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, Roman Gushchin,
    Vlastimil Babka, Matthew Wilcox, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org

update_lru_sizes() defines an unsigned long argument and passes it as
nr_pages to update_lru_size(). Though this isn't causing any overflows
I'm aware of, it's a bad idea to go through the demotion given that we
have recently stumbled on a related type promotion problem fixed by
commit 2da9f6305f30 ("mm/vmscan: fix NR_ISOLATED_FILE corruption on
64-bit").

Note that the underlying counters are already in long. This is another
reason we shouldn't have the demotion.

This patch enlarges all relevant parameters on the path to the final
underlying counters:
  update_lru_size(int -> long)
    if memcg:
      __mod_lruvec_state(int -> long)
        if smp:
          __mod_node_page_state(long)
        else:
          __mod_node_page_state(int -> long)
        __mod_memcg_lruvec_state(int -> long)
          __mod_memcg_state(int -> long)
    else:
      __mod_lruvec_state(int -> long)
        if smp:
          __mod_node_page_state(long)
        else:
          __mod_node_page_state(int -> long)
    __mod_zone_page_state(long)
    if memcg:
      mem_cgroup_update_lru_size(int -> long)

Note that __mod_node_page_state() for the smp case and
__mod_zone_page_state() already use long. So this change also fixes the
inconsistency.

Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
 include/linux/memcontrol.h | 10 +++++-----
 include/linux/mm_inline.h  |  2 +-
 include/linux/vmstat.h     |  6 +++---
 mm/memcontrol.c            | 10 +++++-----
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3febf64d1b80..1454201abb8d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -810,7 +810,7 @@ static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 
 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
-		int zid, int nr_pages);
+		int zid, long nr_pages);
 
 static inline
 unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec,
@@ -896,7 +896,7 @@ static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg,
 	return x;
 }
 
-void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
+void __mod_memcg_state(struct mem_cgroup *memcg, int idx, long val);
 
 /* idx can be of type enum memcg_stat_item or node_stat_item */
 static inline void mod_memcg_state(struct mem_cgroup *memcg,
@@ -948,7 +948,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			      int val);
+			      long val);
 void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val);
 
 static inline void mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
@@ -1346,7 +1346,7 @@ static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg,
 
 static inline void __mod_memcg_state(struct mem_cgroup *memcg,
 				     int idx,
-				     int nr)
+				     long nr)
 {
 }
 
@@ -1369,7 +1369,7 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 }
 
 static inline void __mod_memcg_lruvec_state(struct lruvec *lruvec,
-					    enum node_stat_item idx, int val)
+					    enum node_stat_item idx, long val)
 {
 }
 
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 355ea1ee32bd..18e85071b44a 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -26,7 +26,7 @@ static inline int page_is_file_lru(struct page *page)
 
 static __always_inline void update_lru_size(struct lruvec *lruvec,
 				enum lru_list lru, enum zone_type zid,
-				int nr_pages)
+				long nr_pages)
 {
 	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 773135fc6e19..230922179ba0 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -310,7 +310,7 @@ static inline void __mod_zone_page_state(struct zone *zone,
 }
 
 static inline void __mod_node_page_state(struct pglist_data *pgdat,
-			enum node_stat_item item, int delta)
+			enum node_stat_item item, long delta)
 {
 	if (vmstat_item_in_bytes(item)) {
 		VM_WARN_ON_ONCE(delta & (PAGE_SIZE - 1));
@@ -453,7 +453,7 @@ static inline const char *vm_event_name(enum vm_event_item item)
 #ifdef CONFIG_MEMCG
 
 void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			int val);
+			long val);
 
 static inline void mod_lruvec_state(struct lruvec *lruvec,
 				    enum node_stat_item idx, int val)
@@ -481,7 +481,7 @@ static inline void mod_lruvec_page_state(struct page *page,
 #else
 
 static inline void __mod_lruvec_state(struct lruvec *lruvec,
-				      enum node_stat_item idx, int val)
+				      enum node_stat_item idx, long val)
 {
 	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index de17f02d27ad..c3fe5880c42d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -758,7 +758,7 @@ mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_node *mctz)
  * @idx: the stat item - can be enum memcg_stat_item or enum node_stat_item
  * @val: delta to add to the counter, can be negative
  */
-void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val)
+void __mod_memcg_state(struct mem_cgroup *memcg, int idx, long val)
 {
 	long x, threshold = MEMCG_CHARGE_BATCH;
 
@@ -796,7 +796,7 @@ parent_nodeinfo(struct mem_cgroup_per_node *pn, int nid)
 }
 
 void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			      int val)
+			      long val)
 {
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup *memcg;
@@ -837,7 +837,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
  * change of state at this level: per-node, per-cgroup, per-lruvec.
  */
 void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
-			int val)
+			long val)
 {
 	/* Update node */
 	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
@@ -1407,7 +1407,7 @@ struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags)
  * so as to allow it to check that lru_size 0 is consistent with list_empty).
  */
 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
-				int zid, int nr_pages)
+				int zid, long nr_pages)
 {
 	struct mem_cgroup_per_node *mz;
 	unsigned long *lru_size;
@@ -1424,7 +1424,7 @@ void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
 	size = *lru_size;
 
 	if (WARN_ONCE(size < 0,
-		"%s(%p, %d, %d): lru_size %ld\n",
+		"%s(%p, %d, %ld): lru_size %ld\n",
 		__func__, lruvec, lru, nr_pages, size)) {
 		VM_BUG_ON(1);
 		*lru_size = 0;
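The demotion this commit message warns about is easy to demonstrate: on an LP64 platform, an unsigned long page count silently narrows when passed through an int parameter. A standalone sketch (hypothetical counter functions, not the kernel's; the wrap-to-zero result assumes the usual two's-complement conversion behavior of gcc/clang on 64-bit Linux):

```c
#include <assert.h>

/* Hypothetical stat counter, mirroring only the shape of the kernel's
 * per-node counters, which are already long. */
static long counter;

/* Old-style narrow parameter: an unsigned long argument is silently
 * demoted to int at the call boundary, with no compiler diagnostic by
 * default. */
static void mod_state_int(int delta)
{
	counter += delta;
}

/* Widened parameter, as this patch does along the whole call path:
 * the value reaches the long counter undamaged. */
static void mod_state_long(long delta)
{
	counter += delta;
}
```

With nr = 1UL << 32 (a 4 GiB-of-pages delta), mod_state_int(nr) adds 0 on common 64-bit toolchains because the low 32 bits are all zero, while mod_state_long(nr) adds the full value; this is the same class of truncation behind the NR_ISOLATED_FILE corruption cited above.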