From patchwork Wed Dec 8 20:35:35 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 01/10] mm: page_ref_add_unless() does not trace 'u' argument
Date: Wed, 8 Dec 2021 20:35:35 +0000
Message-Id: <20211208203544.2297121-2-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

In the other page_ref_* functions all arguments and return values are
traced, but in page_ref_add_unless() the 'u' argument, which stands for
the "unless" threshold, is not traced. What is more confusing, the
tracing routine

	__page_ref_mod_unless(struct page *page, int v, int u);

does have a 'u' argument, but the return value is passed into it
instead. Add a new template specific to page_ref_add_unless(), and
trace all arguments and the return value.
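To make the mismatch concrete, the before/after call sites (taken from
the diff below; shown out of context):

	/* Before: 'ret' lands in the slot the template calls 'u' */
	__page_ref_mod_unless(page, nr, ret);

	/* After: both the 'unless' threshold and the result are traced */
	__page_ref_add_unless(page, nr, u, ret);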
Fixes: 95813b8faa0c ("mm/page_ref: add tracepoint to track down page reference manipulation")
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 10 ++++----
 include/trace/events/page_ref.h | 43 ++++++++++++++++++++++++++++++---
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 49 insertions(+), 12 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 2e677e6ad09f..1903af5fb087 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -11,7 +11,7 @@ DECLARE_TRACEPOINT(page_ref_set);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
-DECLARE_TRACEPOINT(page_ref_mod_unless);
+DECLARE_TRACEPOINT(page_ref_add_unless);
 DECLARE_TRACEPOINT(page_ref_freeze);
 DECLARE_TRACEPOINT(page_ref_unfreeze);

@@ -30,7 +30,7 @@ extern void __page_ref_set(struct page *page, int v);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
-extern void __page_ref_mod_unless(struct page *page, int v, int u);
+extern void __page_ref_add_unless(struct page *page, int v, int u, int ret);
 extern void __page_ref_freeze(struct page *page, int v, int ret);
 extern void __page_ref_unfreeze(struct page *page, int v);

@@ -50,7 +50,7 @@ static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
 static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 }
-static inline void __page_ref_mod_unless(struct page *page, int v, int u)
+static inline void __page_ref_add_unless(struct page *page, int v, int u, int ret)
 {
 }
 static inline void __page_ref_freeze(struct page *page, int v, int ret)
@@ -237,8 +237,8 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
 	bool ret = atomic_add_unless(&page->_refcount, nr, u);

-	if (page_ref_tracepoint_active(page_ref_mod_unless))
-		__page_ref_mod_unless(page, nr, ret);
+	if (page_ref_tracepoint_active(page_ref_add_unless))
+		__page_ref_add_unless(page, nr, u, ret);
 	return ret;
 }

diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..c32d6d161cdb 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -94,6 +94,43 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		  __entry->val, __entry->ret)
 );

+DECLARE_EVENT_CLASS(page_ref_add_unless_template,
+
+	TP_PROTO(struct page *page, int v, int u, int ret),
+
+	TP_ARGS(page, v, u, ret),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+		__field(int, unless)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+		__entry->val = v;
+		__entry->ret = ret;
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d val=%d unless=%d ret=%d",
+		  __entry->pfn,
+		  show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		  __entry->count,
+		  __entry->mapcount, __entry->mapping, __entry->mt,
+		  __entry->val, __entry->unless, __entry->ret)
+);
+
 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,

 	TP_PROTO(struct page *page, int v, int ret),

@@ -108,11 +145,11 @@
 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,

 	TP_ARGS(page, v, ret)
 );

-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_unless,
+DEFINE_EVENT(page_ref_add_unless_template, page_ref_add_unless,

-	TP_PROTO(struct page *page, int v, int ret),
+	TP_PROTO(struct page *page, int v, int u, int ret),

-	TP_ARGS(page, v, ret)
+	TP_ARGS(page, v, u, ret)
 );

 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index f3b2c9d3ece2..1426d6887b01 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -33,12 +33,12 @@ void __page_ref_mod_and_return(struct page *page, int v, int ret)
 EXPORT_SYMBOL(__page_ref_mod_and_return);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_return);

-void __page_ref_mod_unless(struct page *page, int v, int u)
+void __page_ref_add_unless(struct page *page, int v, int u, int ret)
 {
-	trace_page_ref_mod_unless(page, v, u);
+	trace_page_ref_add_unless(page, v, u, ret);
 }
-EXPORT_SYMBOL(__page_ref_mod_unless);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_unless);
+EXPORT_SYMBOL(__page_ref_add_unless);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_add_unless);

 void __page_ref_freeze(struct page *page, int v, int ret)
 {

From patchwork Wed Dec 8 20:35:36 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 02/10] mm: add overflow and underflow checks for page->_refcount
Date: Wed, 8 Dec 2021 20:35:36 +0000
Message-Id: <20211208203544.2297121-3-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

Problems with page->_refcount are hard to debug, because by the time
they are detected the damage usually occurred long before. Yet an
invalid page refcount can be catastrophic and lead to memory
corruption.

Reduce the window in which _refcount problems can manifest by adding
underflow and overflow checks to the functions that modify _refcount.
Use the atomic_fetch_* functions to obtain the old value of _refcount,
and use it to check for overflow/underflow.
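The check used throughout the diff below compares the old and new
values as unsigned. A minimal userspace sketch (not kernel code; names
are illustrative) of why a single unsigned comparison catches a wrap in
either direction:

	/* refwrap.c - build: gcc -o refwrap refwrap.c && ./refwrap
	 * Arithmetic is done in unsigned to keep the sketch free of
	 * signed-overflow UB; the kernel builds with -fno-strict-overflow.
	 */
	#include <stdio.h>

	static int add_wrapped(int old_val, int nr)
	{
		int new_val = (int)((unsigned int)old_val + (unsigned int)nr);

		/* Overflow: the unsigned result wraps past UINT_MAX and
		 * becomes smaller than the old value. */
		return (unsigned int)new_val < (unsigned int)old_val;
	}

	static int sub_wrapped(int old_val, int nr)
	{
		int new_val = (int)((unsigned int)old_val - (unsigned int)nr);

		/* Underflow: the unsigned result wraps below 0 and
		 * becomes larger than the old value. */
		return (unsigned int)new_val > (unsigned int)old_val;
	}

	int main(void)
	{
		printf("5 + 1  wraps: %d\n", add_wrapped(5, 1));  /* 0 */
		printf("-1 + 1 wraps: %d\n", add_wrapped(-1, 1)); /* 1: 0xffffffff -> 0 */
		printf("0 - 1  wraps: %d\n", sub_wrapped(0, 1));  /* 1: 0 -> 0xffffffff */
		printf("1 - 1  wraps: %d\n", sub_wrapped(1, 1));  /* 0 */
		return 0;
	}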
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 59 +++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 1903af5fb087..f3c61dc6344a 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)

 static inline void page_ref_add(struct page *page, int nr)
 {
-	atomic_add(nr, &page->_refcount);
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, nr);
 }
@@ -129,7 +132,10 @@ static inline void folio_ref_add(struct folio *folio, int nr)

 static inline void page_ref_sub(struct page *page, int nr)
 {
-	atomic_sub(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -nr);
 }
@@ -141,11 +147,13 @@ static inline void folio_ref_sub(struct folio *folio, int nr)

 static inline int page_ref_sub_return(struct page *page, int nr)
 {
-	int ret = atomic_sub_return(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;

+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }

 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -155,7 +163,10 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)

 static inline void page_ref_inc(struct page *page)
 {
-	atomic_inc(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, 1);
 }
@@ -167,7 +178,10 @@ static inline void folio_ref_inc(struct folio *folio)

 static inline void page_ref_dec(struct page *page)
 {
-	atomic_dec(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -1);
 }
@@ -179,8 +193,11 @@ static inline void folio_ref_dec(struct folio *folio)

 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int ret = atomic_sub_and_test(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+	int ret = new_val == 0;

+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -nr, ret);
 	return ret;
@@ -193,11 +210,13 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)

 static inline int page_ref_inc_return(struct page *page)
 {
-	int ret = atomic_inc_return(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;

+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
 }

 static inline int folio_ref_inc_return(struct folio *folio)
@@ -207,8 +226,11 @@ static inline int folio_ref_inc_return(struct folio *folio)

 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int ret = atomic_dec_and_test(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+	int ret = new_val == 0;

+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -1, ret);
 	return ret;
@@ -221,11 +243,13 @@ static inline int folio_ref_dec_and_test(struct folio *folio)

 static inline int page_ref_dec_return(struct page *page)
 {
-	int ret = atomic_dec_return(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;

+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
 }

 static inline int folio_ref_dec_return(struct folio *folio)
@@ -235,8 +259,11 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
-	bool ret = atomic_add_unless(&page->_refcount, nr, u);
+	int old_val = atomic_fetch_add_unless(&page->_refcount, nr, u);
+	int new_val = old_val + nr;
+	int ret = old_val != u;

+	VM_BUG_ON_PAGE(ret && (unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_add_unless))
 		__page_ref_add_unless(page, nr, u, ret);
 	return ret;

From patchwork Wed Dec 8 20:35:37 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 03/10] mm: Avoid using set_page_count() in set_page_refcounted()
Date: Wed, 8 Dec 2021 20:35:37 +0000
Message-Id: <20211208203544.2297121-4-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

set_page_refcounted() converts a non-refcounted page (one with
page->_refcount == 0) into a refcounted page by setting _refcount to 1.
The current approach uses the following logic:

	VM_BUG_ON_PAGE(page_ref_count(page), page);
	set_page_count(page, 1);

However, if _refcount changes from 0 to 1 between the VM_BUG_ON_PAGE()
and set_page_count(), we can corrupt _refcount, which can cause other
problems such as memory corruption. Instead, use a safer method:
increment _refcount first and verify that at increment time it was
indeed 1.
	refcnt = page_ref_inc_return(page);
	VM_BUG_ON_PAGE(refcnt != 1, page);

Use page_ref_inc_return() to avoid unconditionally overwriting the
_refcount value with set_page_count(), and check the return value.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/internal.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 94909fcce671..47d1d3c892fb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -132,9 +132,11 @@ static inline bool page_evictable(struct page *page)
  */
 static inline void set_page_refcounted(struct page *page)
 {
+	int refcnt;
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(page_ref_count(page), page);
-	set_page_count(page, 1);
+	refcnt = page_ref_inc_return(page);
+	VM_BUG_ON_PAGE(refcnt != 1, page);
 }

 extern unsigned long highest_memmap_pfn;
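The window closed by the patch above can be sketched as an interleaving
(a hypothetical illustration; the concurrent path is assumed, not named
in the patch):

	/* CPU A: old set_page_refcounted()        CPU B: racing _refcount user
	 *
	 * VM_BUG_ON_PAGE(page_ref_count(page))    reads 0, check passes
	 *                                         _refcount: 0 -> 1
	 * set_page_count(page, 1);                overwrite; B's reference
	 *                                         is silently lost
	 *
	 * With the fix, CPU A's page_ref_inc_return() would return 2 and
	 * VM_BUG_ON_PAGE(refcnt != 1) fires at the fault site instead of
	 * corrupting _refcount.
	 */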
From patchwork Wed Dec 8 20:35:38 2021

From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 04/10] mm: remove set_page_count() from page_frag_alloc_align
Date: Wed, 8 Dec 2021 20:35:38 +0000
Message-Id: <20211208203544.2297121-5-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

set_page_count() unconditionally resets the value of _refcount, which
is dangerous because the new value is not programmatically verified;
instead we rely on comments like: "OK, page count is 0, we can safely
set it".

Add a new refcount function, page_ref_add_return(), that returns the
new refcount value after adding to it. Use the return value to verify
that _refcount was indeed the expected one.
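The pattern, as applied in page_frag_alloc_align() in the diff below:
add the expected delta and assert on the returned total rather than
storing a value blindly:

	refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
	VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);

A racing increment that previously slipped in unnoticed between the
"page count is 0" comment and set_page_count() now makes the assertion
fire.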
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 11 +++++++++++
 mm/page_alloc.c          |  6 ++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index f3c61dc6344a..27880aca2e2f 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -115,6 +115,17 @@ static inline void init_page_count(struct page *page)
 	set_page_count(page, 1);
 }

+static inline int page_ref_add_return(struct page *page, int nr)
+{
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, nr, new_val);
+	return new_val;
+}
+
 static inline void page_ref_add(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_add(nr, &page->_refcount);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index edfd6c81af82..b5554767b9de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5523,6 +5523,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
+	int refcnt;

 	if (unlikely(!nc->va)) {
 refill:
@@ -5561,8 +5562,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* if size can vary use size else just use PAGE_SIZE */
 		size = nc->size;
 #endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+		refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);

 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;

From patchwork Wed Dec 8 20:35:40 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 06/10] mm: rename init_page_count() -> page_ref_init()
Date: Wed, 8 Dec 2021 20:35:40 +0000
Message-Id: <20211208203544.2297121-7-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

Now that set_page_count() is no longer called from outside and is about
to be removed, init_page_count() is the only remaining function that
unconditionally sets _refcount, and it is restricted to setting it
to 1.

Make init_page_count() consistent with the other page_ref_* functions
by renaming it.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
 arch/m68k/mm/motorola.c  |  2 +-
 include/linux/mm.h       |  2 +-
 include/linux/page_ref.h | 10 +++++++---
 mm/page_alloc.c          |  2 +-
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ecbe948f4c1a..dd3b77d03d5c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -133,7 +133,7 @@ void __init init_pointer_table(void *table, int type)

 	/* unreserve the page so it's possible to free that page */
 	__ClearPageReserved(PD_PAGE(dp));
-	init_page_count(PD_PAGE(dp));
+	page_ref_init(PD_PAGE(dp));

 	return;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 44d75a8d1b92..9a0ba44d4cde 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2470,7 +2470,7 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
 static inline void free_reserved_page(struct page *page)
 {
 	ClearPageReserved(page);
-	init_page_count(page);
+	page_ref_init(page);
 	__free_page(page);
 	adjust_managed_page_count(page, 1);
 }
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 27880aca2e2f..ff946d753df8 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -107,10 +107,14 @@ static inline void folio_set_count(struct folio *folio, int v)
 }

 /*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
+ * Setup the page refcount to one before being freed into the page allocator.
+ * The memory might not be initialized and therefore there cannot be any
+ * assumptions about the current value of page->_refcount. This call should be
+ * done during boot when memory is being initialized, during memory hotplug
+ * when new memory is added, or when a previous reserved memory is unreserved
+ * this is the first time kernel take control of the given memory.
  */
-static inline void init_page_count(struct page *page)
+static inline void page_ref_init(struct page *page)
 {
 	set_page_count(page, 1);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 13d989d62012..000c057a2d24 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1569,7 +1569,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	page_ref_init(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);

From patchwork Wed Dec 8 20:35:41 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 07/10] mm: remove set_page_count()
Date: Wed, 8 Dec 2021 20:35:41 +0000
Message-Id: <20211208203544.2297121-8-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

set_page_count() is dangerous because it resets _refcount to an
arbitrary value. Instead, we now initialize _refcount to 1 only once,
and the rest of the time we use add/dec/cmpxchg, which keeps a
continuous record of the counter.

Remove set_page_count() and add new tracing hooks to page_ref_init().
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 27 ++++++++-----------
 include/trace/events/page_ref.h | 46 ++++++++++++++++++++++++++++-----
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index ff946d753df8..c7033f506d68 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -7,7 +7,7 @@
 #include <linux/page-flags.h>
 #include <linux/tracepoint-defs.h>

-DECLARE_TRACEPOINT(page_ref_set);
+DECLARE_TRACEPOINT(page_ref_init);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
@@ -26,7 +26,7 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
  */
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)

-extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_init(struct page *page);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
@@ -38,7 +38,7 @@ extern void __page_ref_unfreeze(struct page *page, int v);

 #define page_ref_tracepoint_active(t) false

-static inline void __page_ref_set(struct page *page, int v)
+static inline void __page_ref_init(struct page *page)
 {
 }
 static inline void __page_ref_mod(struct page *page, int v)
@@ -94,18 +94,6 @@ static inline int page_count(const struct page *page)
 	return folio_ref_count(page_folio(page));
 }

-static inline void set_page_count(struct page *page, int v)
-{
-	atomic_set(&page->_refcount, v);
-	if (page_ref_tracepoint_active(page_ref_set))
-		__page_ref_set(page, v);
-}
-
-static inline void folio_set_count(struct folio *folio, int v)
-{
-	set_page_count(&folio->page, v);
-}
-
 /*
  * Setup the page refcount to one before being freed into the page allocator.
  * The memory might not be initialized and therefore there cannot be any
@@ -116,7 +104,14 @@ static inline void folio_set_count(struct folio *folio, int v)
  */
 static inline void page_ref_init(struct page *page)
 {
-	set_page_count(page, 1);
+	atomic_set(&page->_refcount, 1);
+	if (page_ref_tracepoint_active(page_ref_init))
+		__page_ref_init(page);
+}
+
+static inline void folio_ref_init(struct folio *folio)
+{
+	page_ref_init(&folio->page);
 }

 static inline int page_ref_add_return(struct page *page, int nr)
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index c32d6d161cdb..2b8e5a4df53b 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -10,6 +10,45 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>

+DECLARE_EVENT_CLASS(page_ref_init_template,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d",
+		  __entry->pfn,
+		  show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		  __entry->count,
+		  __entry->mapcount, __entry->mapping, __entry->mt)
+);
+
+DEFINE_EVENT(page_ref_init_template, page_ref_init,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page)
+);
+
 DECLARE_EVENT_CLASS(page_ref_mod_template,

 	TP_PROTO(struct page *page, int v),

@@ -44,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		  __entry->val)
 );

-DEFINE_EVENT(page_ref_mod_template, page_ref_set,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DEFINE_EVENT(page_ref_mod_template, page_ref_mod,

 	TP_PROTO(struct page *page, int v),

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index 1426d6887b01..ad21abfec463 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -5,12 +5,12 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/page_ref.h>

-void __page_ref_set(struct page *page, int v)
+void __page_ref_init(struct page *page)
 {
-	trace_page_ref_set(page, v);
+	trace_page_ref_init(page);
 }
-EXPORT_SYMBOL(__page_ref_set);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_set);
+EXPORT_SYMBOL(__page_ref_init);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_init);

 void __page_ref_mod(struct page *page, int v)
 {

From patchwork Wed Dec 8 20:35:42 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 08/10] mm: simplify page_ref_* functions
Date: Wed, 8 Dec 2021 20:35:42 +0000
Message-Id: <20211208203544.2297121-9-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

Now that we use the atomic_fetch_* variants to add/sub/inc/dec the page
_refcount, it makes sense to combine the page_ref_* return and
non-return variants, and to remove the extra tracepoints that only
served the non-return variants. This improves traceability by always
recording the new _refcount value after the modification has occurred.
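The consolidation follows one shape throughout the diff below: the
*_return variant keeps the fetch/check/trace logic, and the void
variant becomes a thin wrapper that discards the result, e.g.:

	static inline void page_ref_add(struct page *page, int nr)
	{
		page_ref_add_return(page, nr);
	}

so every modification is reported through page_ref_mod_and_return with
the new _refcount value.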
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 102 +++++++++-----------------------
 include/trace/events/page_ref.h |  24 ++------
 mm/debug_page_ref.c             |  14 -----
 3 files changed, 34 insertions(+), 106 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index c7033f506d68..8c76bf3bf7e1 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -8,8 +8,6 @@
 #include <linux/tracepoint-defs.h>

 DECLARE_TRACEPOINT(page_ref_init);
-DECLARE_TRACEPOINT(page_ref_mod);
-DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
 DECLARE_TRACEPOINT(page_ref_add_unless);
 DECLARE_TRACEPOINT(page_ref_freeze);
@@ -27,8 +25,6 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)

 extern void __page_ref_init(struct page *page);
-extern void __page_ref_mod(struct page *page, int v);
-extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
 extern void __page_ref_add_unless(struct page *page, int v, int u, int ret);
 extern void __page_ref_freeze(struct page *page, int v, int ret);
@@ -41,12 +37,6 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 static inline void __page_ref_init(struct page *page)
 {
 }
-static inline void __page_ref_mod(struct page *page, int v)
-{
-}
-static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-}
 static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 }
@@ -127,12 +117,7 @@ static inline int page_ref_add_return(struct page *page, int nr)

 static inline void page_ref_add(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_add(nr, &page->_refcount);
-	int new_val = old_val + nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, nr);
+	page_ref_add_return(page, nr);
 }

 static inline void folio_ref_add(struct folio *folio, int nr)
@@ -140,30 +125,25 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 	page_ref_add(&folio->page, nr);
 }

-static inline void page_ref_sub(struct page *page, int nr)
+static inline int page_ref_sub_return(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_sub(nr, &page->_refcount);
 	int new_val = old_val - nr;

 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -nr);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }

-static inline void folio_ref_sub(struct folio *folio, int nr)
+static inline void page_ref_sub(struct page *page, int nr)
 {
-	page_ref_sub(&folio->page, nr);
+	page_ref_sub_return(page, nr);
 }

-static inline int page_ref_sub_return(struct page *page, int nr)
+static inline void folio_ref_sub(struct folio *folio, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, new_val);
-	return new_val;
+	page_ref_sub(&folio->page, nr);
 }

 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -171,14 +151,20 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 	return page_ref_sub_return(&folio->page, nr);
 }

-static inline void page_ref_inc(struct page *page)
page_ref_inc_return(struct page *page) { int old_val = atomic_fetch_inc(&page->_refcount); int new_val = old_val + 1; VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod)) - __page_ref_mod(page, 1); + if (page_ref_tracepoint_active(page_ref_mod_and_return)) + __page_ref_mod_and_return(page, 1, new_val); + return new_val; +} + +static inline void page_ref_inc(struct page *page) +{ + page_ref_inc_return(page); } static inline void folio_ref_inc(struct folio *folio) @@ -186,14 +172,20 @@ static inline void folio_ref_inc(struct folio *folio) page_ref_inc(&folio->page); } -static inline void page_ref_dec(struct page *page) +static inline int page_ref_dec_return(struct page *page) { int old_val = atomic_fetch_dec(&page->_refcount); int new_val = old_val - 1; VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod)) - __page_ref_mod(page, -1); + if (page_ref_tracepoint_active(page_ref_mod_and_return)) + __page_ref_mod_and_return(page, -1, new_val); + return new_val; +} + +static inline void page_ref_dec(struct page *page) +{ + page_ref_dec_return(page); } static inline void folio_ref_dec(struct folio *folio) @@ -203,14 +195,7 @@ static inline void folio_ref_dec(struct folio *folio) static inline int page_ref_sub_and_test(struct page *page, int nr) { - int old_val = atomic_fetch_sub(nr, &page->_refcount); - int new_val = old_val - nr; - int ret = new_val == 0; - - VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_test)) - __page_ref_mod_and_test(page, -nr, ret); - return ret; + return page_ref_sub_return(page, nr) == 0; } static inline int folio_ref_sub_and_test(struct folio *folio, int nr) @@ -218,17 +203,6 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr) return page_ref_sub_and_test(&folio->page, nr); } -static inline int page_ref_inc_return(struct page *page) -{ - int old_val = atomic_fetch_inc(&page->_refcount); - int new_val = old_val + 1; - - VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_return)) - __page_ref_mod_and_return(page, 1, new_val); - return new_val; -} - static inline int folio_ref_inc_return(struct folio *folio) { return page_ref_inc_return(&folio->page); @@ -236,14 +210,7 @@ static inline int folio_ref_inc_return(struct folio *folio) static inline int page_ref_dec_and_test(struct page *page) { - int old_val = atomic_fetch_dec(&page->_refcount); - int new_val = old_val - 1; - int ret = new_val == 0; - - VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_test)) - __page_ref_mod_and_test(page, -1, ret); - return ret; + return page_ref_dec_return(page) == 0; } static inline int folio_ref_dec_and_test(struct folio *folio) @@ -251,17 +218,6 @@ static inline int folio_ref_dec_and_test(struct folio *folio) return page_ref_dec_and_test(&folio->page); } -static inline int page_ref_dec_return(struct page *page) -{ - int old_val = atomic_fetch_dec(&page->_refcount); - int new_val = old_val - 1; - - VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_return)) - __page_ref_mod_and_return(page, -1, new_val); - return new_val; -} - static inline int folio_ref_dec_return(struct folio *folio) { return page_ref_dec_return(&folio->page); diff --git a/include/trace/events/page_ref.h 
index 2b8e5a4df53b..600ea20c3e11 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -49,7 +49,7 @@ DEFINE_EVENT(page_ref_init_template, page_ref_init,
 	TP_ARGS(page)
 );

-DECLARE_EVENT_CLASS(page_ref_mod_template,
+DECLARE_EVENT_CLASS(page_ref_unfreeze_template,

 	TP_PROTO(struct page *page, int v),

@@ -83,14 +83,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		  __entry->val)
 );

-DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
-DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
+DECLARE_EVENT_CLASS(page_ref_mod_template,

 	TP_PROTO(struct page *page, int v, int ret),

@@ -163,14 +156,7 @@ DECLARE_EVENT_CLASS(page_ref_add_unless_template,
 		  __entry->val, __entry->unless, __entry->ret)
 );

-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,
-
-	TP_PROTO(struct page *page, int v, int ret),
-
-	TP_ARGS(page, v, ret)
-);
-
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,
+DEFINE_EVENT(page_ref_mod_template, page_ref_mod_and_return,

 	TP_PROTO(struct page *page, int v, int ret),

@@ -184,14 +170,14 @@ DEFINE_EVENT(page_ref_add_unless_template, page_ref_add_unless,
 	TP_ARGS(page, v, u, ret)
 );

-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,
+DEFINE_EVENT(page_ref_mod_template, page_ref_freeze,

 	TP_PROTO(struct page *page, int v, int ret),

 	TP_ARGS(page, v, ret)
 );

-DEFINE_EVENT(page_ref_mod_template, page_ref_unfreeze,
+DEFINE_EVENT(page_ref_unfreeze_template, page_ref_unfreeze,

 	TP_PROTO(struct page *page, int v),

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index ad21abfec463..f5f39a77c6da 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -12,20 +12,6 @@ void __page_ref_init(struct page *page)
 EXPORT_SYMBOL(__page_ref_init);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_init);

-void __page_ref_mod(struct page *page, int v)
-{
-	trace_page_ref_mod(page, v);
-}
-EXPORT_SYMBOL(__page_ref_mod);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod);
-
-void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-	trace_page_ref_mod_and_test(page, v, ret);
-}
-EXPORT_SYMBOL(__page_ref_mod_and_test);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_test);
-
 void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 	trace_page_ref_mod_and_return(page, v, ret);
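To make the shape of this consolidation concrete, here is a minimal
userspace sketch, assuming C11 <stdatomic.h> as a stand-in for the kernel's
atomic_fetch_* helpers. struct fake_page, ref_add_return() and ref_add()
are illustrative names, not kernel API, and the sketch models only the
pattern of the patch, not the kernel implementation:

/*
 * Userspace analogue of the consolidation: the non-return helper becomes
 * a thin wrapper around the *_return variant, so a single code path
 * computes (and could trace) the new refcount value.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

struct fake_page {
	atomic_int refcount;
};

static inline int ref_add_return(struct fake_page *p, int nr)
{
	int old_val = atomic_fetch_add(&p->refcount, nr);
	int new_val = old_val + nr;

	/* Overflow check, mirroring VM_BUG_ON_PAGE in the patch. */
	assert((unsigned int)new_val >= (unsigned int)old_val);
	return new_val;
}

static inline void ref_add(struct fake_page *p, int nr)
{
	ref_add_return(p, nr);	/* non-return variant reuses the same path */
}

int main(void)
{
	struct fake_page page = { .refcount = 1 };

	ref_add(&page, 2);
	printf("refcount = %d\n", ref_add_return(&page, 1));	/* prints 4 */
	return 0;
}

The wrapper costs nothing extra: the fetch-based atomic already returns
the old value, and the void variant simply discards the derived result.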
From patchwork Wed Dec 8 20:35:43 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 09/10] mm: do not use atomic_set_release in page_ref_unfreeze()
Date: Wed, 8 Dec 2021 20:35:43 +0000
Message-Id: <20211208203544.2297121-10-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>
References: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

In page_ref_unfreeze() we set the new _refcount value only after verifying
that the old value was indeed 0:

	VM_BUG_ON_PAGE(page_count(page) != 0, page);
	/* the _refcount may change here */
	atomic_set_release(&page->_refcount, count);

To avoid the small gap in which _refcount may change, verify the old value
at the time of the set operation: use atomic_xchg_release() and check, at
exchange time, that the exchanged value was 0.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 8c76bf3bf7e1..26676d3bcd58 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -322,10 +322,9 @@ static inline int folio_ref_freeze(struct folio *folio, int count)

 static inline void page_ref_unfreeze(struct page *page, int count)
 {
-	VM_BUG_ON_PAGE(page_count(page) != 0, page);
-	VM_BUG_ON(count == 0);
+	int old_val = atomic_xchg_release(&page->_refcount, count);

-	atomic_set_release(&page->_refcount, count);
+	VM_BUG_ON_PAGE(count == 0 || old_val != 0, page);
 	if (page_ref_tracepoint_active(page_ref_unfreeze))
 		__page_ref_unfreeze(page, count);
 }
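The window this patch closes can likewise be sketched in userspace, under
the assumption that atomic_exchange_explicit() with memory_order_release
approximates the kernel's atomic_xchg_release(). unfreeze_racy() and
unfreeze_fixed() are hypothetical names used only to contrast the two
orderings:

#include <assert.h>
#include <stdatomic.h>

struct fake_page {
	atomic_int refcount;
};

/* Racy pattern: a check, a window, then a store. */
static void unfreeze_racy(struct fake_page *p, int count)
{
	assert(atomic_load(&p->refcount) == 0);
	/* another thread could modify refcount here */
	atomic_store_explicit(&p->refcount, count, memory_order_release);
}

/* Closed window: one atomic op both reads the old value and stores. */
static void unfreeze_fixed(struct fake_page *p, int count)
{
	int old_val = atomic_exchange_explicit(&p->refcount, count,
					       memory_order_release);

	assert(count != 0 && old_val == 0);	/* verified at set time */
}

int main(void)
{
	struct fake_page page = { .refcount = 0 };

	unfreeze_fixed(&page, 2);	/* 0 -> 2, old value checked atomically */
	atomic_store(&page.refcount, 0);
	unfreeze_racy(&page, 2);	/* same result, but with a window */
	return 0;
}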
From patchwork Wed Dec 8 20:35:44 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH 10/10] mm: use atomic_cmpxchg_acquire in page_ref_freeze().
Date: Wed, 8 Dec 2021 20:35:44 +0000
Message-Id: <20211208203544.2297121-11-pasha.tatashin@soleen.com>
In-Reply-To: <20211208203544.2297121-1-pasha.tatashin@soleen.com>
References: <20211208203544.2297121-1-pasha.tatashin@soleen.com>

page_ref_freeze() and page_ref_unfreeze() are designed to be used as a
pair: they protect critical sections in which a struct page may be
modified. page_ref_unfreeze() ends with a _release() atomic operation, but
page_ref_freeze() does not have a matching _acquire(), on the assumption
that the plain cmpxchg provides a full barrier. Instead, use the
appropriate atomic_cmpxchg_acquire() so that the intended memory ordering
is followed explicitly.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 26676d3bcd58..ecd92d7f3eef 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -308,7 +308,8 @@ static inline bool folio_try_get_rcu(struct folio *folio)

 static inline int page_ref_freeze(struct page *page, int count)
 {
-	int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+	int old_val = atomic_cmpxchg_acquire(&page->_refcount, count, 0);
+	int ret = likely(old_val == count);

 	if (page_ref_tracepoint_active(page_ref_freeze))
 		__page_ref_freeze(page, count, ret);
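As an illustration of the resulting acquire/release pairing, here is a
userspace sketch, assuming atomic_compare_exchange_strong_explicit() with
memory_order_acquire as an analogue of atomic_cmpxchg_acquire().
ref_freeze(), ref_unfreeze() and struct fake_page are illustrative names
only, not the kernel functions:

#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct fake_page {
	atomic_int refcount;
};

/*
 * Freeze: refcount goes count -> 0 with acquire ordering, so accesses in
 * the critical section cannot be reordered before the freeze succeeds.
 */
static bool ref_freeze(struct fake_page *p, int count)
{
	int expected = count;

	return atomic_compare_exchange_strong_explicit(&p->refcount,
			&expected, 0,
			memory_order_acquire, memory_order_relaxed);
}

/*
 * Unfreeze: release ordering publishes the critical section's writes
 * before the refcount becomes visible as nonzero again.
 */
static void ref_unfreeze(struct fake_page *p, int count)
{
	int old_val = atomic_exchange_explicit(&p->refcount, count,
					       memory_order_release);

	assert(count != 0 && old_val == 0);
}

int main(void)
{
	struct fake_page page = { .refcount = 1 };

	if (ref_freeze(&page, 1)) {
		/* exclusive section: the page may be modified here */
		ref_unfreeze(&page, 1);
	}
	return 0;
}

On strongly ordered machines the acquire form typically compiles to the
same full-barrier instruction as a plain cmpxchg; the benefit is on weakly
ordered architectures, where the explicit pairing enforces exactly the
ordering the freeze/unfreeze protocol needs.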