From patchwork Tue Oct 26 17:38:15 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com
Subject: [RFC 1/8] mm: add overflow and underflow checks for page->_refcount
Date: Tue, 26 Oct 2021 17:38:15 +0000
Message-Id: <20211026173822.502506-2-pasha.tatashin@soleen.com>
In-Reply-To: <20211026173822.502506-1-pasha.tatashin@soleen.com>
References: <20211026173822.502506-1-pasha.tatashin@soleen.com>

Problems with page->_refcount are hard to debug: by the time they are detected, the damage usually occurred long before. Yet an invalid page refcount can be catastrophic and lead to memory corruption. Narrow the window between a _refcount problem occurring and manifesting by adding underflow and overflow checks to the functions that modify _refcount.
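For readers following along outside the kernel tree, the effect of these checks can be sketched in plain C11 userspace code. This is a model only: stdatomic.h stands in for the kernel's atomic_t, assert() stands in for VM_BUG_ON()/VM_BUG_ON_PAGE(), and the *_checked names are illustrative, not kernel API.

```c
#include <assert.h>
#include <stdatomic.h>

/* Simplified stand-in for the kernel's struct page. */
struct page {
	atomic_int _refcount;
};

/* Model of the patched page_ref_add(): use the *_return form of the
 * atomic op so the new value can be checked for overflow/underflow. */
static int page_ref_add_checked(struct page *page, int nr)
{
	int ret;

	assert(nr > 0);                                    /* VM_BUG_ON(nr <= 0) */
	ret = atomic_fetch_add(&page->_refcount, nr) + nr; /* atomic_add_return() */
	assert(ret > 0);                                   /* VM_BUG_ON_PAGE(ret <= 0, page) */
	return ret;
}

/* Model of the patched page_ref_sub(): the result must never go negative. */
static int page_ref_sub_checked(struct page *page, int nr)
{
	int ret;

	assert(nr > 0);                                    /* VM_BUG_ON(nr <= 0) */
	ret = atomic_fetch_sub(&page->_refcount, nr) - nr; /* atomic_sub_return() */
	assert(ret >= 0);                                  /* underflow check */
	return ret;
}
```

The key design point the patch relies on is that `atomic_add_return()`/`atomic_sub_return()` give the checker the post-operation value atomically, so the check observes the same value the operation produced, with no window for another CPU to change it in between.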
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 61 ++++++++++++++++++++++++++++++++--------
 1 file changed, 49 insertions(+), 12 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 7ad46f45df39..b3ec2b231fc7 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -90,21 +90,35 @@ static inline void init_page_count(struct page *page)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	atomic_add(nr, &page->_refcount);
+	int ret;
+
+	VM_BUG_ON(nr <= 0);
+	ret = atomic_add_return(nr, &page->_refcount);
+	VM_BUG_ON_PAGE(ret <= 0, page);
+
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, nr);
 }
 
 static inline void page_ref_sub(struct page *page, int nr)
 {
-	atomic_sub(nr, &page->_refcount);
+	int ret;
+
+	VM_BUG_ON(nr <= 0);
+	ret = atomic_sub_return(nr, &page->_refcount);
+	VM_BUG_ON_PAGE(ret < 0, page);
+
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -nr);
 }
 
 static inline int page_ref_sub_return(struct page *page, int nr)
 {
-	int ret = atomic_sub_return(nr, &page->_refcount);
+	int ret;
+
+	VM_BUG_ON(nr <= 0);
+	ret = atomic_sub_return(nr, &page->_refcount);
+	VM_BUG_ON_PAGE(ret < 0, page);
 
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
 		__page_ref_mod_and_return(page, -nr, ret);
@@ -113,31 +127,43 @@ static inline int page_ref_sub_return(struct page *page, int nr)
 
 static inline void page_ref_inc(struct page *page)
 {
-	atomic_inc(&page->_refcount);
+	int ret = atomic_inc_return(&page->_refcount);
+
+	VM_BUG_ON_PAGE(ret <= 0, page);
+
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, 1);
 }
 
 static inline void page_ref_dec(struct page *page)
 {
-	atomic_dec(&page->_refcount);
+	int ret = atomic_dec_return(&page->_refcount);
+
+	VM_BUG_ON_PAGE(ret < 0, page);
+
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -1);
 }
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int ret = atomic_sub_and_test(nr, &page->_refcount);
+	int ret;
+
+	VM_BUG_ON(nr <= 0);
+	ret = atomic_sub_return(nr, &page->_refcount);
+	VM_BUG_ON_PAGE(ret < 0, page);
 
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -nr, ret);
-	return ret;
+	return ret == 0;
 }
 
 static inline int page_ref_inc_return(struct page *page)
 {
 	int ret = atomic_inc_return(&page->_refcount);
 
+	VM_BUG_ON_PAGE(ret <= 0, page);
+
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
 		__page_ref_mod_and_return(page, 1, ret);
 	return ret;
@@ -145,17 +171,21 @@ static inline int page_ref_inc_return(struct page *page)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int ret = atomic_dec_and_test(&page->_refcount);
+	int ret = atomic_dec_return(&page->_refcount);
+
+	VM_BUG_ON_PAGE(ret < 0, page);
 
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -1, ret);
-	return ret;
+	return ret == 0;
 }
 
 static inline int page_ref_dec_return(struct page *page)
 {
 	int ret = atomic_dec_return(&page->_refcount);
 
+	VM_BUG_ON_PAGE(ret < 0, page);
+
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
 		__page_ref_mod_and_return(page, -1, ret);
 	return ret;
@@ -163,16 +193,23 @@ static inline int page_ref_dec_return(struct page *page)
 
 static inline int page_ref_add_unless(struct page *page, int nr, int u)
 {
-	int ret = atomic_add_unless(&page->_refcount, nr, u);
+	int ret;
+
+	VM_BUG_ON(nr <= 0 || u < 0);
+	ret = atomic_fetch_add_unless(&page->_refcount, nr, u);
+	VM_BUG_ON_PAGE(ret < 0, page);
 
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
 		__page_ref_mod_unless(page, nr, ret);
-	return ret;
+	return ret != u;
 }
 
 static inline int page_ref_freeze(struct page *page, int count)
 {
-	int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+	int ret;
+
+	VM_BUG_ON(count <= 0);
+	ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
 
 	if (page_ref_tracepoint_active(page_ref_freeze))
 		__page_ref_freeze(page, count, ret);

From patchwork Tue Oct 26 17:38:16 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [RFC 2/8] mm/hugetlb: remove useless set_page_count()
Date: Tue, 26 Oct 2021 17:38:16 +0000
Message-Id: <20211026173822.502506-3-pasha.tatashin@soleen.com>

prep_compound_gigantic_page() calls set_page_count(p, 0), but this is not needed because page_ref_freeze(p, 1) has already set the refcount to 0. Using set_page_count() is dangerous, because it unconditionally overwrites the refcount regardless of its current value; its use should therefore be minimized.
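Why the set_page_count(p, 0) was redundant follows from the semantics of page_ref_freeze(): on success, the cmpxchg itself leaves the refcount at 0. A userspace sketch of that semantic (stdatomic.h and a simplified struct page as stand-ins; the _model suffix marks this as illustrative, not kernel API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Simplified stand-in for the kernel's struct page. */
struct page {
	atomic_int _refcount;
};

/* Model of page_ref_freeze(): succeed (return 1) only when _refcount
 * is exactly `count`, atomically replacing it with 0.  After a
 * successful freeze the refcount is already 0, which is why the
 * set_page_count(p, 0) removed by this patch had no effect. */
static int page_ref_freeze_model(struct page *page, int count)
{
	int expected = count;

	return atomic_compare_exchange_strong(&page->_refcount,
					      &expected, 0);
}
```

The cmpxchg also explains why freezing is safer than a blind store: if anyone else holds an unexpected reference, the compare fails and the caller finds out, instead of silently discarding that reference.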
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 95dc7b83381f..7e3996c8b696 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1707,7 +1707,7 @@ static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
 			pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
 			goto out_error;
 		}
-		set_page_count(p, 0);
+		VM_BUG_ON_PAGE(page_count(p), p);
 		set_compound_head(p, page);
 	}
 	atomic_set(compound_mapcount_ptr(page), -1);

From patchwork Tue Oct 26 17:38:17 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [RFC 3/8] mm: Avoid using set_page_count() in set_page_refcounted()
Date: Tue, 26 Oct 2021 17:38:17 +0000
Message-Id: <20211026173822.502506-4-pasha.tatashin@soleen.com>

set_page_refcounted() converts a non-refcounted page (one with page->_refcount == 0) into a refcounted page by setting _refcount to 1. Use page_ref_inc_return() instead, to avoid unconditionally overwriting the _refcount value.
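The verified-increment pattern this patch introduces can be modeled in userspace like so (assert() stands in for VM_BUG_ON_PAGE(); the _model suffix marks the sketch as illustrative, not kernel API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Simplified stand-in for the kernel's struct page. */
struct page {
	atomic_int _refcount;
};

/* Model of the reworked set_page_refcounted(): increment and verify
 * that the result is exactly 1, instead of blindly storing 1 with
 * set_page_count().  If the page was unexpectedly already referenced,
 * the assertion fires at the point of the bug rather than later. */
static void set_page_refcounted_model(struct page *page)
{
	int refcnt = atomic_fetch_add(&page->_refcount, 1) + 1;

	assert(refcnt == 1);	/* VM_BUG_ON_PAGE(refcnt != 1, page) */
}
```

The difference from the old code is purely in failure behavior: a blind store of 1 would mask a pre-existing non-zero refcount, while the increment-and-check turns that state into an immediately visible assertion.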
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/internal.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index cf3cb933eba3..cf345fac6894 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -91,9 +91,12 @@ static inline bool page_evictable(struct page *page)
  */
 static inline void set_page_refcounted(struct page *page)
 {
+	int refcnt;
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(page_ref_count(page), page);
-	set_page_count(page, 1);
+	refcnt = page_ref_inc_return(page);
+	VM_BUG_ON_PAGE(refcnt != 1, page);
 }
 
 extern unsigned long highest_memmap_pfn;

From patchwork Tue Oct 26 17:38:18 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [RFC 4/8] mm: remove set_page_count() from page_frag_alloc_align
Date: Tue, 26 Oct 2021 17:38:18 +0000
Message-Id: <20211026173822.502506-5-pasha.tatashin@soleen.com>

set_page_count() unconditionally resets the value of _refcount, which is dangerous because nothing verifies the result programmatically; instead we rely on comments like: "OK, page count is 0, we can safely set it".
Add a new refcount function, page_ref_add_return(), which returns the new refcount value after the addition. Use the return value to verify that _refcount is indeed what we expected.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 13 +++++++++++++
 mm/page_alloc.c          |  6 ++++--
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index b3ec2b231fc7..db7ccb461c3e 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -88,6 +88,19 @@ static inline void init_page_count(struct page *page)
 	set_page_count(page, 1);
 }
 
+static inline int page_ref_add_return(struct page *page, int nr)
+{
+	int ret;
+
+	VM_BUG_ON(nr <= 0);
+	ret = atomic_add_return(nr, &page->_refcount);
+	VM_BUG_ON_PAGE(ret <= 0, page);
+
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, nr, ret);
+	return ret;
+}
+
 static inline void page_ref_add(struct page *page, int nr)
 {
 	int ret;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c274cf..6af4596bddc2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5510,6 +5510,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
+	int refcnt;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -5548,8 +5549,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* if size can vary use size else just use PAGE_SIZE */
 		size = nc->size;
 #endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+		refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;

From patchwork Tue Oct 26 17:38:19 2021
[34.85.216.189]) by smtp.gmail.com with ESMTPSA id bj37sm11001939qkb.49.2021.10.26.10.38.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 26 Oct 2021 10:38:29 -0700 (PDT) From: Pasha Tatashin To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com Subject: [RFC 5/8] mm: avoid using set_page_count() when pages are freed into allocator Date: Tue, 26 Oct 2021 17:38:19 +0000 Message-Id: <20211026173822.502506-6-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.33.0.1079.g6e70778dc9-goog In-Reply-To: <20211026173822.502506-1-pasha.tatashin@soleen.com> References: <20211026173822.502506-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Stat-Signature: n5bsr4orhewjycf3kojzfzooy4cecuyn X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: A58A7E001999 Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=soleen.com header.s=google header.b=PNV2XjJV; dmarc=none; spf=pass (imf30.hostedemail.com: domain of pasha.tatashin@soleen.com designates 209.85.160.172 as permitted sender) smtp.mailfrom=pasha.tatashin@soleen.com X-HE-Tag: 1635269900-439415 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When struct pages are first initialized the page->_refcount field is set 1. However, later when pages are freed into allocator we set _refcount to 0 via set_page_count(). Unconditionally resetting _refcount is dangerous. Instead use page_ref_dec_return(), and verify that the _refcount is what is expected. 
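The point of switching from set_page_count(p, 0) to page_ref_dec_return(p) plus VM_BUG_ON_PAGE() is that a release must now take the counter from exactly 1 to 0, so earlier refcount corruption can no longer be silently absorbed. A minimal userspace sketch of the same idea using C11 atomics (release_unchecked() and release_checked() are illustrative names, not kernel APIs):

```c
#include <stdatomic.h>

/*
 * Userspace analogy of the patch, NOT kernel code: a refcount that the
 * owner expects to be exactly 1 at release time.
 */

/* Old behavior: unconditional reset hides any prior refcount corruption. */
static inline void release_unchecked(atomic_int *refcount)
{
	atomic_store(refcount, 0);
}

/*
 * New behavior: decrement and return the remaining count; the caller then
 * checks that it is 0, the way the patch does with VM_BUG_ON_PAGE().
 */
static inline int release_checked(atomic_int *refcount)
{
	/* atomic_fetch_sub() returns the value *before* the subtraction. */
	return atomic_fetch_sub(refcount, 1) - 1;
}
```

The checked variant is no more expensive on the fast path, but a counter that was corrupted earlier becomes an immediately visible failure instead of being quietly overwritten.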
Signed-off-by: Pasha Tatashin
---
 mm/page_alloc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6af4596bddc2..9d18e5f9a85a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1659,6 +1659,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
 	unsigned int loop;
+	int refcnt;
 
 	/*
 	 * When initializing the memmap, __init_single_page() sets the refcount
@@ -1669,10 +1670,12 @@ void __free_pages_core(struct page *page, unsigned int order)
 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
 		prefetchw(p + 1);
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	}
 	__ClearPageReserved(p);
-	set_page_count(p, 0);
+	refcnt = page_ref_dec_return(p);
+	VM_BUG_ON_PAGE(refcnt, p);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 
@@ -2244,10 +2247,12 @@ void __init init_cma_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
+	int refcnt;
 
 	do {
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	} while (++p, --i);
 
 	set_pageblock_migratetype(page, MIGRATE_CMA);

From patchwork Tue Oct 26 17:38:20 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12585333
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com
Subject: [RFC 6/8] mm: rename init_page_count() -> page_ref_init()
Date: Tue, 26 Oct 2021 17:38:20 +0000
Message-Id: <20211026173822.502506-7-pasha.tatashin@soleen.com>
In-Reply-To: <20211026173822.502506-1-pasha.tatashin@soleen.com>
References: <20211026173822.502506-1-pasha.tatashin@soleen.com>

Now that set_page_count() is no longer called from outside and is about to be removed, init_page_count() is the only function left that unconditionally sets _refcount; however, it is restricted to setting it only to 1. Make init_page_count() consistent with the other page_ref_* functions by renaming it.

Signed-off-by: Pasha Tatashin
Acked-by: Geert Uytterhoeven
---
 arch/m68k/mm/motorola.c  |  2 +-
 include/linux/mm.h       |  2 +-
 include/linux/page_ref.h | 10 +++++++---
 mm/page_alloc.c          |  2 +-
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 9f3f77785aa7..0d016c2e390b 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -133,7 +133,7 @@ void __init init_pointer_table(void *table, int type)
 
 	/* unreserve the page so it's possible to free that page */
 	__ClearPageReserved(PD_PAGE(dp));
-	init_page_count(PD_PAGE(dp));
+	page_ref_init(PD_PAGE(dp));
 
 	return;
 }

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..46a25e6a14b8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2397,7 +2397,7 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
 static inline void free_reserved_page(struct page *page)
 {
 	ClearPageReserved(page);
-	init_page_count(page);
+	page_ref_init(page);
 	__free_page(page);
 	adjust_managed_page_count(page, 1);
 }

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index db7ccb461c3e..81a628dc9b8b 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -80,10 +80,14 @@ static inline void set_page_count(struct page *page, int v)
 }
 
 /*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
+ * Setup the page refcount to one before being freed into the page allocator.
+ * The memory might not be initialized and therefore there cannot be any
+ * assumptions about the current value of page->_refcount. This call should be
+ * done during boot when memory is being initialized, during memory hotplug
+ * when new memory is added, or when previously reserved memory is unreserved,
+ * as this is the first time the kernel takes control of the given memory.
  */
-static inline void init_page_count(struct page *page)
+static inline void page_ref_init(struct page *page)
 {
 	set_page_count(page, 1);
 }

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9d18e5f9a85a..fcd4c4ce329b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1561,7 +1561,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	page_ref_init(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);

From patchwork Tue Oct 26 17:38:21 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12585335
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com
Subject: [RFC 7/8] mm: remove set_page_count()
Date: Tue, 26 Oct 2021 17:38:21 +0000
Message-Id: <20211026173822.502506-8-pasha.tatashin@soleen.com>
In-Reply-To: <20211026173822.502506-1-pasha.tatashin@soleen.com>
References: <20211026173822.502506-1-pasha.tatashin@soleen.com>

set_page_count() is dangerous because it resets _refcount to an arbitrary value. Instead, we now initialize _refcount to 1 only once, and for the rest of its lifetime we use add/dec/cmpxchg operations, which keep a continuous record of the counter. Remove set_page_count() and add new tracing hooks to page_ref_init().

Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h        | 19 +++++---------
 include/trace/events/page_ref.h | 46 ++++++++++++++++++++++++++++-----
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 50 insertions(+), 23 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 81a628dc9b8b..06f5760fcd06 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -7,7 +7,7 @@
 #include <linux/page-flags.h>
 #include <linux/tracepoint-defs.h>
 
-DECLARE_TRACEPOINT(page_ref_set);
+DECLARE_TRACEPOINT(page_ref_init);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
@@ -26,7 +26,7 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
  */
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
-extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_init(struct page *page);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
@@ -38,7 +38,7 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 
 #define page_ref_tracepoint_active(t) false
 
-static inline void __page_ref_set(struct page *page, int
v)
+static inline void __page_ref_init(struct page *page)
 {
 }
 static inline void __page_ref_mod(struct page *page, int v)
@@ -72,15 +72,8 @@ static inline int page_count(const struct page *page)
 	return atomic_read(&compound_head(page)->_refcount);
 }
 
-static inline void set_page_count(struct page *page, int v)
-{
-	atomic_set(&page->_refcount, v);
-	if (page_ref_tracepoint_active(page_ref_set))
-		__page_ref_set(page, v);
-}
-
 /*
- * Setup the page refcount to one before being freed into the page allocator.
+ * Setup the page->_refcount to 1 before being freed into the page allocator.
  * The memory might not be initialized and therefore there cannot be any
  * assumptions about the current value of page->_refcount. This call should be
  * done during boot when memory is being initialized, during memory hotplug
@@ -89,7 +82,9 @@ static inline void set_page_count(struct page *page, int v)
  */
 static inline void page_ref_init(struct page *page)
 {
-	set_page_count(page, 1);
+	atomic_set(&page->_refcount, 1);
+	if (page_ref_tracepoint_active(page_ref_init))
+		__page_ref_init(page);
 }
 
 static inline int page_ref_add_return(struct page *page, int nr)

diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..87551bb1df9e 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -10,6 +10,45 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>
 
+DECLARE_EVENT_CLASS(page_ref_init_template,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d",
+		__entry->pfn,
+		show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		__entry->count,
+		__entry->mapcount, __entry->mapping, __entry->mt)
+);
+
+DEFINE_EVENT(page_ref_init_template, page_ref_init,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page)
+);
+
 DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_PROTO(struct page *page, int v),
 
@@ -44,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_set,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
 
 	TP_PROTO(struct page *page, int v),

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index f3b2c9d3ece2..e32149734122 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -5,12 +5,12 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/page_ref.h>
 
-void __page_ref_set(struct page *page, int v)
+void __page_ref_init(struct page *page)
 {
-	trace_page_ref_set(page, v);
+	trace_page_ref_init(page);
 }
-EXPORT_SYMBOL(__page_ref_set);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_set);
+EXPORT_SYMBOL(__page_ref_init);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
 void __page_ref_mod(struct page *page, int v)
 {

From patchwork Tue Oct 26 17:38:22 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12585337
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com
Subject: [RFC 8/8] mm: simplify page_ref_* functions
Date: Tue, 26 Oct 2021 17:38:22 +0000
Message-Id: <20211026173822.502506-9-pasha.tatashin@soleen.com>
In-Reply-To: <20211026173822.502506-1-pasha.tatashin@soleen.com>
References: <20211026173822.502506-1-pasha.tatashin@soleen.com>

Now that we use atomic *_return variants to add/sub/inc/dec the page _refcount, it makes sense to combine the page_ref_* return and non-return functions. Also remove some extra trace points for the non-return variants. This improves traceability by always recording the new _refcount value after the modification has occurred.

Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h        | 79 +++++++--------------------------
 include/trace/events/page_ref.h | 26 +++--------
 mm/debug_page_ref.c             | 14 ------
 3 files changed, 22 insertions(+), 97 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 06f5760fcd06..2a91dbc33486 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -8,8 +8,6 @@
 #include <linux/tracepoint-defs.h>
 
 DECLARE_TRACEPOINT(page_ref_init);
-DECLARE_TRACEPOINT(page_ref_mod);
-DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
 DECLARE_TRACEPOINT(page_ref_mod_unless);
 DECLARE_TRACEPOINT(page_ref_freeze);
@@ -27,8 +25,6 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
 extern void __page_ref_init(struct page *page);
-extern void __page_ref_mod(struct page *page, int v);
-extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
 extern void __page_ref_mod_unless(struct page *page, int v, int u);
 extern void __page_ref_freeze(struct page *page, int v, int ret);
@@ -41,12 +37,6 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 static inline void __page_ref_init(struct page *page)
 {
 }
-static inline void __page_ref_mod(struct page *page, int v)
-{
-}
-static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-}
 static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 }
@@ -102,26 +92,7 @@ static inline int page_ref_add_return(struct page *page, int nr)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	int ret;
-
-	VM_BUG_ON(nr <= 0);
-	ret = atomic_add_return(nr, &page->_refcount);
-	VM_BUG_ON_PAGE(ret <= 0, page);
-
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, nr);
-}
-
-static inline void page_ref_sub(struct page *page, int nr)
-{
-	int ret;
-
-	VM_BUG_ON(nr <= 0);
-	ret = atomic_sub_return(nr, &page->_refcount);
-	VM_BUG_ON_PAGE(ret < 0, page);
-
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -nr);
+	page_ref_add_return(page, nr);
 }
 
 static inline int page_ref_sub_return(struct page *page, int nr)
@@ -137,37 +108,14 @@ static inline int page_ref_sub_return(struct page *page, int nr)
 	return ret;
 }
 
-static inline void page_ref_inc(struct page *page)
-{
-	int ret = atomic_inc_return(&page->_refcount);
-
-	VM_BUG_ON_PAGE(ret <= 0, page);
-
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, 1);
-}
-
-static inline void page_ref_dec(struct page *page)
+static inline void page_ref_sub(struct page *page, int nr)
 {
-	int ret = atomic_dec_return(&page->_refcount);
-
-	VM_BUG_ON_PAGE(ret < 0, page);
-
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -1);
+	page_ref_sub_return(page, nr);
 }
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int ret;
-
-	VM_BUG_ON(nr <= 0);
-	ret = atomic_sub_return(nr, &page->_refcount);
-	VM_BUG_ON_PAGE(ret < 0, page);
-
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -nr, ret);
-	return ret == 0;
+	return page_ref_sub_return(page, nr) == 0;
 }
 
 static inline int page_ref_inc_return(struct page *page)
@@ -181,15 +129,10 @@ static inline int page_ref_inc_return(struct page *page)
 	return ret;
 }
 
-static inline int page_ref_dec_and_test(struct page *page)
+static inline void page_ref_inc(struct page *page)
 {
-	int ret = atomic_dec_return(&page->_refcount);
-	VM_BUG_ON_PAGE(ret < 0, page);
-
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -1, ret);
-	return ret == 0;
+	page_ref_inc_return(page);
 }
 
 static inline int page_ref_dec_return(struct page *page)
@@ -203,6 +146,16 @@ static inline int page_ref_dec_return(struct page *page)
 	return ret;
 }
 
+static inline void page_ref_dec(struct page *page)
+{
+	page_ref_dec_return(page);
+}
+
+static inline int page_ref_dec_and_test(struct page *page)
+{
+	return page_ref_dec_return(page) == 0;
+}
+
 static inline int page_ref_add_unless(struct page *page, int nr, int u)
 {
 	int ret;

diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 87551bb1df9e..883d90508ca2 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -49,7 +49,7 @@ DEFINE_EVENT(page_ref_init_template, page_ref_init,
 	TP_ARGS(page)
 );
 
-DECLARE_EVENT_CLASS(page_ref_mod_template,
+DECLARE_EVENT_CLASS(page_ref_unfreeze_template,
 
 	TP_PROTO(struct page *page, int v),
 
@@ -83,14 +83,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
-DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
+DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
@@ -126,35 +119,28 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		__entry->val, __entry->ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,
-
-	TP_PROTO(struct page *page, int v, int ret),
-
-	TP_ARGS(page, v, ret)
-);
-
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,
+DEFINE_EVENT(page_ref_mod_template, page_ref_mod_and_return,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
 	TP_ARGS(page, v, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_unless,
+DEFINE_EVENT(page_ref_mod_template, page_ref_mod_unless,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
 	TP_ARGS(page, v, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,
+DEFINE_EVENT(page_ref_mod_template, page_ref_freeze,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
 	TP_ARGS(page, v, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_unfreeze,
+DEFINE_EVENT(page_ref_unfreeze_template, page_ref_unfreeze,
 
 	TP_PROTO(struct page *page, int v),

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index e32149734122..1de9d93cca25 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -12,20 +12,6 @@ void __page_ref_init(struct page *page)
 EXPORT_SYMBOL(__page_ref_init);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
-void __page_ref_mod(struct page *page, int v)
-{
-	trace_page_ref_mod(page, v);
-}
-EXPORT_SYMBOL(__page_ref_mod);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod);
-
-void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-	trace_page_ref_mod_and_test(page, v, ret);
-}
-EXPORT_SYMBOL(__page_ref_mod_and_test);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_test);
-
 void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 	trace_page_ref_mod_and_return(page, v, ret);