From patchwork Wed Nov 17 01:20:50 2021
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com, rientjes@google.com, pjt@google.com
Subject: [RFC v2 01/10] mm: page_ref_add_unless() does not trace 'u' argument
Date: Wed, 17 Nov 2021 01:20:50 +0000
Message-Id: <20211117012059.141450-2-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>
References: <20211117012059.141450-1-pasha.tatashin@soleen.com>
X-Patchwork-Id: 12623433

In the other page_ref_* functions, all arguments and return values are
traced, but in page_ref_add_unless() the 'u' argument, which holds the
"unless" threshold, is not traced.
What is more confusing, the tracing routine

	__page_ref_mod_unless(struct page *page, int v, int u);

does have a 'u' argument, but what is actually passed in it is the
return value. Add a new template specific to page_ref_add_unless(), and
trace all of the arguments as well as the return value.

Fixes: 95813b8faa0c ("mm/page_ref: add tracepoint to track down page reference manipulation")
Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h        | 10 ++++----
 include/trace/events/page_ref.h | 43 ++++++++++++++++++++++++++++++---
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 49 insertions(+), 12 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 2e677e6ad09f..1903af5fb087 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -11,7 +11,7 @@ DECLARE_TRACEPOINT(page_ref_set);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
-DECLARE_TRACEPOINT(page_ref_mod_unless);
+DECLARE_TRACEPOINT(page_ref_add_unless);
 DECLARE_TRACEPOINT(page_ref_freeze);
 DECLARE_TRACEPOINT(page_ref_unfreeze);

@@ -30,7 +30,7 @@ extern void __page_ref_set(struct page *page, int v);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
-extern void __page_ref_mod_unless(struct page *page, int v, int u);
+extern void __page_ref_add_unless(struct page *page, int v, int u, int ret);
 extern void __page_ref_freeze(struct page *page, int v, int ret);
 extern void __page_ref_unfreeze(struct page *page, int v);

@@ -50,7 +50,7 @@ static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
 static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 }
-static inline void __page_ref_mod_unless(struct page *page, int v, int u)
+static inline void __page_ref_add_unless(struct page *page, int v, int u,
+					 int ret)
 {
 }
 static inline void __page_ref_freeze(struct page *page, int v, int ret)
@@ -237,8 +237,8 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
 	bool ret = atomic_add_unless(&page->_refcount, nr, u);

-	if (page_ref_tracepoint_active(page_ref_mod_unless))
-		__page_ref_mod_unless(page, nr, ret);
+	if (page_ref_tracepoint_active(page_ref_add_unless))
+		__page_ref_add_unless(page, nr, u, ret);
 	return ret;
 }

diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..c32d6d161cdb 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -94,6 +94,43 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		__entry->val, __entry->ret)
 );

+DECLARE_EVENT_CLASS(page_ref_add_unless_template,
+
+	TP_PROTO(struct page *page, int v, int u, int ret),
+
+	TP_ARGS(page, v, u, ret),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+		__field(int, unless)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+		__entry->val = v;
+		__entry->ret = ret;
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d val=%d unless=%d ret=%d",
+		__entry->pfn,
+		show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		__entry->count,
+		__entry->mapcount, __entry->mapping, __entry->mt,
+		__entry->val, __entry->unless, __entry->ret)
+);
+
 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,

 	TP_PROTO(struct page *page, int v, int ret),
@@ -108,11 +145,11 @@ DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,

 	TP_ARGS(page, v, ret)
 );

-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_unless,
+DEFINE_EVENT(page_ref_add_unless_template, page_ref_add_unless,

-	TP_PROTO(struct page *page, int v, int ret),
+	TP_PROTO(struct page *page, int v, int u, int ret),

-	TP_ARGS(page, v, ret)
+	TP_ARGS(page, v, u, ret)
 );

 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index f3b2c9d3ece2..1426d6887b01 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -33,12 +33,12 @@ void __page_ref_mod_and_return(struct page *page, int v, int ret)
 EXPORT_SYMBOL(__page_ref_mod_and_return);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_return);

-void __page_ref_mod_unless(struct page *page, int v, int u)
+void __page_ref_add_unless(struct page *page, int v, int u, int ret)
 {
-	trace_page_ref_mod_unless(page, v, u);
+	trace_page_ref_add_unless(page, v, u, ret);
 }
-EXPORT_SYMBOL(__page_ref_mod_unless);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_unless);
+EXPORT_SYMBOL(__page_ref_add_unless);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_add_unless);

 void __page_ref_freeze(struct page *page, int v, int ret)
 {
From: Pasha Tatashin
Subject: [RFC v2 02/10] mm: add overflow and underflow checks for page->_refcount
Date: Wed, 17 Nov 2021 01:20:51 +0000
Message-Id: <20211117012059.141450-3-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>
X-Patchwork-Id: 12623435

Problems with page->_refcount are hard to debug, because by the time
they are detected the damage usually occurred long ago. Yet an invalid
page refcount can be catastrophic and lead to memory corruption.

Reduce the window in which _refcount problems can manifest by adding
underflow and overflow checks to the functions that modify _refcount.
Use the atomic_fetch_* functions to obtain the old value of _refcount,
and use it to check for overflow/underflow.

Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h | 59 +++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 1903af5fb087..f3c61dc6344a 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)

 static inline void page_ref_add(struct page *page, int nr)
 {
-	atomic_add(nr, &page->_refcount);
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, nr);
 }
@@ -129,7 +132,10 @@ static inline void folio_ref_add(struct folio *folio, int nr)

 static inline void page_ref_sub(struct page *page, int nr)
 {
-	atomic_sub(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -nr);
 }
@@ -141,11 +147,13 @@ static inline void folio_ref_sub(struct folio *folio, int nr)

 static inline int page_ref_sub_return(struct page *page, int nr)
 {
-	int ret = atomic_sub_return(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;

+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }

 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -155,7 +163,10 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)

 static inline void page_ref_inc(struct page *page)
 {
-	atomic_inc(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, 1);
 }
@@ -167,7 +178,10 @@ static inline void folio_ref_inc(struct folio *folio)

 static inline void page_ref_dec(struct page *page)
 {
-	atomic_dec(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -1);
 }
@@ -179,8 +193,11 @@ static inline void folio_ref_dec(struct folio *folio)

 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int ret = atomic_sub_and_test(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+	int ret = new_val == 0;

+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -nr, ret);
 	return ret;
@@ -193,11 +210,13 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)

 static inline int page_ref_inc_return(struct page *page)
 {
-	int ret = atomic_inc_return(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;

+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
 }

 static inline int folio_ref_inc_return(struct folio *folio)
@@ -207,8 +226,11 @@ static inline int folio_ref_inc_return(struct folio *folio)

 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int ret = atomic_dec_and_test(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+	int ret = new_val == 0;

+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -1, ret);
 	return ret;
@@ -221,11 +243,13 @@ static inline int folio_ref_dec_and_test(struct folio *folio)

 static inline int page_ref_dec_return(struct page *page)
 {
-	int ret = atomic_dec_return(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;

+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
 }

 static inline int folio_ref_dec_return(struct folio *folio)
@@ -235,8 +259,11 @@ static inline int folio_ref_dec_return(struct folio *folio)

 static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
-	bool ret = atomic_add_unless(&page->_refcount, nr, u);
+	int old_val = atomic_fetch_add_unless(&page->_refcount, nr, u);
+	int new_val = old_val + nr;
+	int ret = old_val != u;

+	VM_BUG_ON_PAGE(ret && (unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_add_unless))
 		__page_ref_add_unless(page, nr, u, ret);
 	return ret;
From: Pasha Tatashin
Subject: [RFC v2 03/10] mm: Avoid using set_page_count() in set_page_refcounted()
Date: Wed, 17 Nov 2021 01:20:52 +0000
Message-Id: <20211117012059.141450-4-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>
X-Patchwork-Id: 12623437

set_page_refcounted() converts a non-refcounted page, one with
page->_refcount == 0, into a refcounted page by setting _refcount to 1.
The current approach uses the following logic:

	VM_BUG_ON_PAGE(page_ref_count(page), page);
	set_page_count(page, 1);

However, if _refcount changes from 0 to 1 between the VM_BUG_ON_PAGE()
and the set_page_count(), we can corrupt _refcount, which can cause
other problems such as memory corruption. Instead, use a safer method:
increment _refcount first, and verify that at increment time it was
indeed 1:

	refcnt = page_ref_inc_return(page);
	VM_BUG_ON_PAGE(refcnt != 1, page);

Use page_ref_inc_return() to avoid unconditionally overwriting the
_refcount value with set_page_count(), and check the return value.

Signed-off-by: Pasha Tatashin
---
 mm/internal.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 3b79a5c9427a..f601575b7e5a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -132,9 +132,11 @@ static inline bool page_evictable(struct page *page)
  */
 static inline void set_page_refcounted(struct page *page)
 {
+	int refcnt;
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(page_ref_count(page), page);
-	set_page_count(page, 1);
+	refcnt = page_ref_inc_return(page);
+	VM_BUG_ON_PAGE(refcnt != 1, page);
 }

 extern unsigned long highest_memmap_pfn;
header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 900FB6B007B; Tue, 16 Nov 2021 20:21:16 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 8B03F6B007D; Tue, 16 Nov 2021 20:21:16 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 750FE6B007E; Tue, 16 Nov 2021 20:21:16 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0187.hostedemail.com [216.40.44.187]) by kanga.kvack.org (Postfix) with ESMTP id 67C646B007B for ; Tue, 16 Nov 2021 20:21:16 -0500 (EST) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 3303F83EA1 for ; Wed, 17 Nov 2021 01:21:06 +0000 (UTC) X-FDA: 78816668532.14.3A38E85 Received: from mail-qv1-f43.google.com (mail-qv1-f43.google.com [209.85.219.43]) by imf06.hostedemail.com (Postfix) with ESMTP id B1AEB801A8BF for ; Wed, 17 Nov 2021 01:21:04 +0000 (UTC) Received: by mail-qv1-f43.google.com with SMTP id a24so865036qvb.5 for ; Tue, 16 Nov 2021 17:21:05 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=Uxu9jRtxw4HfsqmOsn2OoknK/KseQaLw1cgxWnJ0WVI=; b=FAWaBWJRTtgyJuy6C8oDxJdNSGUWKTCxi5HYaaEvXAoe/HLLS1aP7503u4sKh5FI9M J4Y93zdlrjP/qmcrupDpO6//AXqwkPYrY66wUrX8NJ6VnkhuV0yam8tPuKfeRw1zAMNl 8ozhTvBfQOVvkzKS71CMNN3HzFGsB0HeVLTETeJBuwettGcSYeXZK2dBprIigt2Oeih9 yNQH80ovRYcWT9kPXeCWCvozLy3yrA4R4y2WSeeIB0gd2G6OsDqdG2pKEhcCBkHvfmIf Z8mpSeNy/D8NeNBD8qnp46ccxcXqNM3CYsg4DH4/O8yb2qrj+R9VoD5aJuh0+xH+D1Xn yQ6Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=Uxu9jRtxw4HfsqmOsn2OoknK/KseQaLw1cgxWnJ0WVI=; b=Q3p7aV7+Fwo1kdsghmTEfx+xrEuOCH4fVCJtuoAyIfTB+BDL7P8ypUYBug5cLggRp4 URL0HYgobNUuZWIi8lVl3W1wCJ8V3n9gA99bWI2VR3UNufJiEtpbozDmU26RoLfaWMVf o7Iljqep4B/q5FXQmSs4vDtF9s+N8sRxM/Dzc4G1CS+1zp/JKJBWOlLwQBJthbvAlpmn JJNp9aqvCTe7uLxvcpGp/ZWBu9y7k+LaUrVZCIeKbA7znWGg0V/Saq5NA15r33BZSIac yLX0nSCMJOEpy7ive1UEWdbRTa6sAqv5Q2p1Sx4Ul4nJxQrz3eH8pSk+YK7jEMkxOiSV jPOA== X-Gm-Message-State: AOAM533BD1cA83Z7A8JEaYBWUWjy14a/02HSGV6oiCxsjIXwU88BW/eq vkguqEB0F/ZOHGILx4Wjjo9n3w== X-Google-Smtp-Source: ABdhPJz+3QFjrUicdH92hbNUAeG2NY1ye5tA6FEG0dT0KUu0X9r4bLKsd2TFebc82z8FnQQFd6IqTQ== X-Received: by 2002:a0c:df0c:: with SMTP id g12mr50307969qvl.24.1637112065186; Tue, 16 Nov 2021 17:21:05 -0800 (PST) Received: from soleen.c.googlers.com.com (189.216.85.34.bc.googleusercontent.com. [34.85.216.189]) by smtp.gmail.com with ESMTPSA id i6sm3482289qti.40.2021.11.16.17.21.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Nov 2021 17:21:04 -0800 (PST) From: Pasha Tatashin To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com, rientjes@google.com, pjt@google.com Subject: [RFC v2 04/10] mm: remove set_page_count() from page_frag_alloc_align Date: Wed, 17 Nov 2021 01:20:53 +0000 Message-Id: <20211117012059.141450-5-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com> References: <20211117012059.141450-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Stat-Signature: ufd79tjb6byyggbbc6bm4wxbzy96j1sp X-Rspamd-Queue-Id: B1AEB801A8BF X-Rspamd-Server: rspam07 
set_page_count() unconditionally resets the value of _refcount, which is dangerous because the old value is never verified programmatically. Instead, we rely on comments like: "OK, page count is 0, we can safely set it".

Add a new refcount function, page_ref_add_return(), which returns the new refcount value after adding to it. Use the return value to verify that _refcount was indeed the expected one.

Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h | 11 +++++++++++
 mm/page_alloc.c          |  6 ++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index f3c61dc6344a..27880aca2e2f 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -115,6 +115,17 @@ static inline void init_page_count(struct page *page)
 	set_page_count(page, 1);
 }
 
+static inline int page_ref_add_return(struct page *page, int nr)
+{
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, nr, new_val);
+	return new_val;
+}
+
 static inline void page_ref_add(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_add(nr, &page->_refcount);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40..e8e88111028a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5516,6 +5516,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
+	int refcnt;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -5554,8 +5555,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	/* if size can vary use size else just use PAGE_SIZE */
 	size = nc->size;
 #endif
-	/* OK, page count is 0, we can safely set it */
-	set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+	/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+	refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+	VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
 
 	/* reset page count bias and offset to start of new frag */
 	nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
From patchwork Wed Nov 17 01:20:54 2021
From: Pasha Tatashin
Subject: [RFC v2 05/10] mm: avoid using set_page_count() when pages are freed into allocator
Date: Wed, 17 Nov 2021 01:20:54 +0000
Message-Id: <20211117012059.141450-6-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>

When struct pages are first initialized, the page->_refcount field is set to 1. However, later, when pages are freed into the allocator, we set _refcount to 0 via set_page_count(). Unconditionally resetting _refcount is dangerous. Instead, use page_ref_dec_return() and verify that _refcount is what is expected.

Signed-off-by: Pasha Tatashin
---
 mm/page_alloc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e8e88111028a..217c0c9fa25b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1653,6 +1653,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
 	unsigned int loop;
+	int refcnt;
 
 	/*
 	 * When initializing the memmap, __init_single_page() sets the refcount
@@ -1663,10 +1664,12 @@ void __free_pages_core(struct page *page, unsigned int order)
 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
 		prefetchw(p + 1);
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	}
 	__ClearPageReserved(p);
-	set_page_count(p, 0);
+	refcnt = page_ref_dec_return(p);
+	VM_BUG_ON_PAGE(refcnt, p);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 
@@ -2238,10 +2241,12 @@ void __init init_cma_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
+	int refcnt;
 
 	do {
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	} while (++p, --i);
 
 	set_pageblock_migratetype(page, MIGRATE_CMA);
From patchwork Wed Nov 17 01:20:55 2021
From: Pasha Tatashin
Subject: [RFC v2 06/10] mm: rename init_page_count() -> page_ref_init()
Date: Wed, 17 Nov 2021 01:20:55 +0000
Message-Id: <20211117012059.141450-7-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>

Now that set_page_count() is no longer called from outside and is about to be removed, init_page_count() is the only function left that unconditionally sets _refcount, and it is restricted to setting it to 1. Align init_page_count() with the other page_ref_* functions by renaming it.

Signed-off-by: Pasha Tatashin
Acked-by: Geert Uytterhoeven
---
 arch/m68k/mm/motorola.c  |  2 +-
 include/linux/mm.h       |  2 +-
 include/linux/page_ref.h | 10 +++++++---
 mm/page_alloc.c          |  2 +-
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 2b05bb2bac00..e81ecafedff3 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -133,7 +133,7 @@ void __init init_pointer_table(void *table, int type)
 
 	/* unreserve the page so it's possible to free that page */
 	__ClearPageReserved(PD_PAGE(dp));
-	init_page_count(PD_PAGE(dp));
+	page_ref_init(PD_PAGE(dp));
 
 	return;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a7e4a9e7d807..736bf16e7104 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2506,7 +2506,7 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
 static inline void free_reserved_page(struct page *page)
 {
 	ClearPageReserved(page);
-	init_page_count(page);
+	page_ref_init(page);
 	__free_page(page);
 	adjust_managed_page_count(page, 1);
 }
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 27880aca2e2f..ff946d753df8 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -107,10 +107,14 @@ static inline void folio_set_count(struct folio *folio, int v)
 }
 
 /*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
+ * Setup the page refcount to one before being freed into the page allocator.
+ * The memory might not be initialized and therefore there cannot be any
+ * assumptions about the current value of page->_refcount. This call should be
+ * done during boot when memory is being initialized, during memory hotplug
+ * when new memory is added, or when a previous reserved memory is unreserved
+ * this is the first time kernel take control of the given memory.
  */
-static inline void init_page_count(struct page *page)
+static inline void page_ref_init(struct page *page)
 {
 	set_page_count(page, 1);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 217c0c9fa25b..fc828dfde4fc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1555,7 +1555,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	page_ref_init(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
From patchwork Wed Nov 17 01:20:56 2021
From: Pasha Tatashin
Subject: [RFC v2 07/10] mm: remove set_page_count()
Date: Wed, 17 Nov 2021 01:20:56 +0000
Message-Id: <20211117012059.141450-8-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>

set_page_count() is dangerous because it resets _refcount to an arbitrary value. Instead, we now initialize _refcount to 1 only once, and from then on use add/dec/cmpxchg so that the counter's history stays continuous.

Remove set_page_count() and add new tracing hooks to page_ref_init().

Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h        | 27 ++++++++-----------
 include/trace/events/page_ref.h | 46 ++++++++++++++++++++++++++++-----
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index ff946d753df8..c7033f506d68 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -7,7 +7,7 @@
 #include
 #include
 
-DECLARE_TRACEPOINT(page_ref_set);
+DECLARE_TRACEPOINT(page_ref_init);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
@@ -26,7 +26,7 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
  */
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
-extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_init(struct page *page);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
@@ -38,7 +38,7 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 
 #define page_ref_tracepoint_active(t) false
 
-static inline void __page_ref_set(struct page *page, int v)
+static inline void __page_ref_init(struct page *page)
 {
 }
 static inline void __page_ref_mod(struct page *page, int v)
@@ -94,18 +94,6 @@ static inline int page_count(const struct page *page)
 	return folio_ref_count(page_folio(page));
 }
 
-static inline void set_page_count(struct page *page, int v)
-{
-	atomic_set(&page->_refcount, v);
-	if (page_ref_tracepoint_active(page_ref_set))
-		__page_ref_set(page, v);
-}
-
-static inline void folio_set_count(struct folio *folio, int v)
-{
-	set_page_count(&folio->page, v);
-}
-
 /*
  * Setup the page refcount to one before being freed into the page allocator.
  * The memory might not be initialized and therefore there cannot be any
@@ -116,7 +104,14 @@ static inline void folio_set_count(struct folio *folio, int v)
  */
 static inline void page_ref_init(struct page *page)
 {
-	set_page_count(page, 1);
+	atomic_set(&page->_refcount, 1);
+	if (page_ref_tracepoint_active(page_ref_init))
+		__page_ref_init(page);
+}
+
+static inline void folio_ref_init(struct folio *folio)
+{
+	page_ref_init(&folio->page);
 }
 
 static inline int page_ref_add_return(struct page *page, int nr)
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index c32d6d161cdb..2b8e5a4df53b 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -10,6 +10,45 @@
 #include
 #include
 
+DECLARE_EVENT_CLASS(page_ref_init_template,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d",
+		__entry->pfn,
+		show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		__entry->count,
+		__entry->mapcount, __entry->mapping, __entry->mt)
+);
+
+DEFINE_EVENT(page_ref_init_template, page_ref_init,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page)
+);
+
 DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_PROTO(struct page *page, int v),
@@ -44,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		  __entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_set,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
 
 	TP_PROTO(struct page *page, int v),
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index 1426d6887b01..ad21abfec463 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -5,12 +5,12 @@
 #define CREATE_TRACE_POINTS
 #include
 
-void __page_ref_set(struct page *page, int v)
+void __page_ref_init(struct page *page)
 {
-	trace_page_ref_set(page, v);
+	trace_page_ref_init(page);
 }
-EXPORT_SYMBOL(__page_ref_set);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_set);
+EXPORT_SYMBOL(__page_ref_init);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
 void __page_ref_mod(struct page *page, int v)
 {
ABdhPJz1xBTj4mNz+/NKiIYbHJKGeE3kTVS1OiE1f4YrzufypbcPCU4eTddlRdib3hCTe2Hbl+oLUA== X-Received: by 2002:ad4:4e49:: with SMTP id eb9mr50037122qvb.22.1637112068328; Tue, 16 Nov 2021 17:21:08 -0800 (PST) Received: from soleen.c.googlers.com.com (189.216.85.34.bc.googleusercontent.com. [34.85.216.189]) by smtp.gmail.com with ESMTPSA id i6sm3482289qti.40.2021.11.16.17.21.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 16 Nov 2021 17:21:08 -0800 (PST) From: Pasha Tatashin To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com, rientjes@google.com, pjt@google.com Subject: [RFC v2 08/10] mm: simplify page_ref_* functions Date: Wed, 17 Nov 2021 01:20:57 +0000 Message-Id: <20211117012059.141450-9-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.34.0.rc1.387.gb447b232ab-goog In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com> References: <20211117012059.141450-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: ADEDC1916 X-Stat-Signature: uio5boxax5j135c1gg4977az9ncjuyoh Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=soleen.com header.s=google header.b=j9DXTx4n; dmarc=none; spf=pass (imf22.hostedemail.com: domain of pasha.tatashin@soleen.com designates 209.85.219.48 as permitted sender) smtp.mailfrom=pasha.tatashin@soleen.com X-HE-Tag: 1637112068-69168 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Now, that we are using atomic_fetch* variants to add/sub/inc/dec page _refcount, it makes sense 
to combine the page_ref_* return and non-return functions. Also remove some extra trace points for the non-return variants. This improves traceability by always recording the new _refcount value after the modification has occurred.

Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h | 102 +++++++++-----------------------
 include/trace/events/page_ref.h | 24 ++------
 mm/debug_page_ref.c | 14 -----
 3 files changed, 34 insertions(+), 106 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index c7033f506d68..8c76bf3bf7e1 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -8,8 +8,6 @@
 #include 
 DECLARE_TRACEPOINT(page_ref_init);
-DECLARE_TRACEPOINT(page_ref_mod);
-DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
 DECLARE_TRACEPOINT(page_ref_add_unless);
 DECLARE_TRACEPOINT(page_ref_freeze);
@@ -27,8 +25,6 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
 extern void __page_ref_init(struct page *page);
-extern void __page_ref_mod(struct page *page, int v);
-extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
 extern void __page_ref_add_unless(struct page *page, int v, int u, int ret);
 extern void __page_ref_freeze(struct page *page, int v, int ret);
@@ -41,12 +37,6 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 static inline void __page_ref_init(struct page *page)
 {
 }
-static inline void __page_ref_mod(struct page *page, int v)
-{
-}
-static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-}
 static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 }
@@ -127,12 +117,7 @@ static inline int page_ref_add_return(struct page *page, int nr)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_add(nr, &page->_refcount);
-	int new_val = old_val + nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, nr);
+	page_ref_add_return(page, nr);
 }
 
 static inline void folio_ref_add(struct folio *folio, int nr)
@@ -140,30 +125,25 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 	page_ref_add(&folio->page, nr);
 }
 
-static inline void page_ref_sub(struct page *page, int nr)
+static inline int page_ref_sub_return(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_sub(nr, &page->_refcount);
 	int new_val = old_val - nr;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -nr);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }
 
-static inline void folio_ref_sub(struct folio *folio, int nr)
+static inline void page_ref_sub(struct page *page, int nr)
 {
-	page_ref_sub(&folio->page, nr);
+	page_ref_sub_return(page, nr);
 }
 
-static inline int page_ref_sub_return(struct page *page, int nr)
+static inline void folio_ref_sub(struct folio *folio, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, new_val);
-	return new_val;
+	page_ref_sub(&folio->page, nr);
 }
 
 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -171,14 +151,20 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 	return page_ref_sub_return(&folio->page, nr);
 }
 
-static inline void page_ref_inc(struct page *page)
+static inline int page_ref_inc_return(struct page *page)
 {
 	int old_val = atomic_fetch_inc(&page->_refcount);
 	int new_val = old_val + 1;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, 1);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
+}
+
+static inline void page_ref_inc(struct page *page)
+{
+	page_ref_inc_return(page);
 }
 
 static inline void folio_ref_inc(struct folio *folio)
@@ -186,14 +172,20 @@ static inline void folio_ref_inc(struct folio *folio)
 	page_ref_inc(&folio->page);
 }
 
-static inline void page_ref_dec(struct page *page)
+static inline int page_ref_dec_return(struct page *page)
 {
 	int old_val = atomic_fetch_dec(&page->_refcount);
 	int new_val = old_val - 1;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -1);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
+}
+
+static inline void page_ref_dec(struct page *page)
+{
+	page_ref_dec_return(page);
 }
 
 static inline void folio_ref_dec(struct folio *folio)
@@ -203,14 +195,7 @@ static inline void folio_ref_dec(struct folio *folio)
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-	int ret = new_val == 0;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -nr, ret);
-	return ret;
+	return page_ref_sub_return(page, nr) == 0;
 }
 
 static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
@@ -218,17 +203,6 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
 	return page_ref_sub_and_test(&folio->page, nr);
 }
 
-static inline int page_ref_inc_return(struct page *page)
-{
-	int old_val = atomic_fetch_inc(&page->_refcount);
-	int new_val = old_val + 1;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, new_val);
-	return new_val;
-}
-
 static inline int folio_ref_inc_return(struct folio *folio)
 {
 	return page_ref_inc_return(&folio->page);
@@ -236,14 +210,7 @@ static inline int folio_ref_inc_return(struct folio *folio)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int old_val = atomic_fetch_dec(&page->_refcount);
-	int new_val = old_val - 1;
-	int ret = new_val == 0;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -1, ret);
-	return ret;
+	return page_ref_dec_return(page) == 0;
 }
 
 static inline int folio_ref_dec_and_test(struct folio *folio)
@@ -251,17 +218,6 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
 	return page_ref_dec_and_test(&folio->page);
 }
 
-static inline int page_ref_dec_return(struct page *page)
-{
-	int old_val = atomic_fetch_dec(&page->_refcount);
-	int new_val = old_val - 1;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, new_val);
-	return new_val;
-}
-
 static inline int folio_ref_dec_return(struct folio *folio)
 {
 	return page_ref_dec_return(&folio->page);
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 2b8e5a4df53b..600ea20c3e11 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -49,7 +49,7 @@ DEFINE_EVENT(page_ref_init_template, page_ref_init,
 	TP_ARGS(page)
 );
 
-DECLARE_EVENT_CLASS(page_ref_mod_template,
+DECLARE_EVENT_CLASS(page_ref_unfreeze_template,
 
 	TP_PROTO(struct page *page, int v),
 
@@ -83,14 +83,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		  __entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
-DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
+DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
@@ -163,14 +156,7 @@ DECLARE_EVENT_CLASS(page_ref_add_unless_template,
 		  __entry->val, __entry->unless, __entry->ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,
-
-	TP_PROTO(struct page *page, int v, int ret),
-
-	TP_ARGS(page, v, ret)
-);
-
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,
+DEFINE_EVENT(page_ref_mod_template, page_ref_mod_and_return,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
@@ -184,14 +170,14 @@ DEFINE_EVENT(page_ref_add_unless_template, page_ref_add_unless,
 	TP_ARGS(page, v, u, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,
+DEFINE_EVENT(page_ref_mod_template, page_ref_freeze,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
 	TP_ARGS(page, v, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_unfreeze,
+DEFINE_EVENT(page_ref_unfreeze_template, page_ref_unfreeze,
 
 	TP_PROTO(struct page *page, int v),
 
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index ad21abfec463..f5f39a77c6da 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -12,20 +12,6 @@ void __page_ref_init(struct page *page)
 EXPORT_SYMBOL(__page_ref_init);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
-void __page_ref_mod(struct page *page, int v)
-{
-	trace_page_ref_mod(page, v);
-}
-EXPORT_SYMBOL(__page_ref_mod);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod);
-
-void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-	trace_page_ref_mod_and_test(page, v, ret);
-}
-EXPORT_SYMBOL(__page_ref_mod_and_test);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_test);
-
 void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 	trace_page_ref_mod_and_return(page, v, ret);

From patchwork Wed Nov 17 01:20:58 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12623461
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com, rientjes@google.com, pjt@google.com
Subject: [RFC v2 09/10] mm: do not use atomic_set_release in page_ref_unfreeze()
Date: Wed, 17 Nov 2021 01:20:58 +0000
Message-Id: <20211117012059.141450-10-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>
References: <20211117012059.141450-1-pasha.tatashin@soleen.com>

In page_ref_unfreeze() we set a new _refcount value after verifying that the old value was indeed 0.

	VM_BUG_ON_PAGE(page_count(page) != 0, page);
	< the _refcount may change here >
	atomic_set_release(&page->_refcount, count);

To avoid the small window in which _refcount may change, let's verify the value of _refcount at the time of the set operation.
Use atomic_xchg_release(), and verify at the time of the set that the old value was indeed 0.

Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 8c76bf3bf7e1..26676d3bcd58 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -322,10 +322,9 @@ static inline int folio_ref_freeze(struct folio *folio, int count)
 static inline void page_ref_unfreeze(struct page *page, int count)
 {
-	VM_BUG_ON_PAGE(page_count(page) != 0, page);
-	VM_BUG_ON(count == 0);
+	int old_val = atomic_xchg_release(&page->_refcount, count);
 
-	atomic_set_release(&page->_refcount, count);
+	VM_BUG_ON_PAGE(count == 0 || old_val != 0, page);
 	if (page_ref_tracepoint_active(page_ref_unfreeze))
 		__page_ref_unfreeze(page, count);
 }

From patchwork Wed Nov 17 01:20:59 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12623463
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com, rientjes@google.com, pjt@google.com
Subject: [RFC v2 10/10] mm: use atomic_cmpxchg_acquire in page_ref_freeze().
Date: Wed, 17 Nov 2021 01:20:59 +0000
Message-Id: <20211117012059.141450-11-pasha.tatashin@soleen.com>
In-Reply-To: <20211117012059.141450-1-pasha.tatashin@soleen.com>
References: <20211117012059.141450-1-pasha.tatashin@soleen.com>

page_ref_freeze() and page_ref_unfreeze() are designed to be used as a pair. They protect critical sections where struct page can be modified. page_ref_unfreeze() is ordered by a _release() atomic operation, but page_ref_freeze() is not, as it is assumed that cmpxchg provides a full barrier. Instead, use the appropriate atomic_cmpxchg_acquire() to ensure that the memory model is explicitly followed.

Signed-off-by: Pasha Tatashin
---
 include/linux/page_ref.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 26676d3bcd58..ecd92d7f3eef 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -308,7 +308,8 @@ static inline bool folio_try_get_rcu(struct folio *folio)
 static inline int page_ref_freeze(struct page *page, int count)
 {
-	int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+	int old_val = atomic_cmpxchg_acquire(&page->_refcount, count, 0);
+	int ret = likely(old_val == count);
 
 	if (page_ref_tracepoint_active(page_ref_freeze))
 		__page_ref_freeze(page, count, ret);