From patchwork Wed Jan 26 18:34:21 2022
X-Patchwork-Submitter: Pasha Tatashin <pasha.tatashin@soleen.com>
X-Patchwork-Id: 12725584
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 1/9] mm: add overflow and underflow checks for page->_refcount
Date: Wed, 26 Jan 2022 18:34:21 +0000
Message-Id: <20220126183429.1840447-2-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

Problems with page->_refcount are hard to debug: by the time they are
detected, the damage has usually occurred long ago. Yet an invalid page
refcount can be catastrophic and lead to memory corruption.

Reduce the window in which _refcount problems can go unnoticed by adding
underflow and overflow checks to the functions that modify _refcount.
Use the atomic_fetch_* functions to obtain the old value of _refcount
and use it to check for overflow/underflow.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 59 +++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 2e677e6ad09f..fe4864f7f69c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	atomic_add(nr, &page->_refcount);
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, nr);
 }
@@ -129,7 +132,10 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 
 static inline void page_ref_sub(struct page *page, int nr)
 {
-	atomic_sub(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -nr);
 }
@@ -141,11 +147,13 @@ static inline void folio_ref_sub(struct folio *folio, int nr)
 
 static inline int page_ref_sub_return(struct page *page, int nr)
 {
-	int ret = atomic_sub_return(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -155,7 +163,10 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 
 static inline void page_ref_inc(struct page *page)
 {
-	atomic_inc(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, 1);
 }
@@ -167,7 +178,10 @@ static inline void folio_ref_inc(struct folio *folio)
 
 static inline void page_ref_dec(struct page *page)
 {
-	atomic_dec(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -1);
 }
@@ -179,8 +193,11 @@ static inline void folio_ref_dec(struct folio *folio)
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int ret = atomic_sub_and_test(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+	int ret = new_val == 0;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -nr, ret);
 	return ret;
@@ -193,11 +210,13 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
 
 static inline int page_ref_inc_return(struct page *page)
 {
-	int ret = atomic_inc_return(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_inc_return(struct folio *folio)
@@ -207,8 +226,11 @@ static inline int folio_ref_inc_return(struct folio *folio)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int ret = atomic_dec_and_test(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+	int ret = new_val == 0;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -1, ret);
 	return ret;
@@ -221,11 +243,13 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
 
 static inline int page_ref_dec_return(struct page *page)
 {
-	int ret = atomic_dec_return(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_dec_return(struct folio *folio)
@@ -235,8 +259,11 @@ static inline int folio_ref_dec_return(struct folio *folio)
 
 static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
-	bool ret = atomic_add_unless(&page->_refcount, nr, u);
+	int old_val = atomic_fetch_add_unless(&page->_refcount, nr, u);
+	int new_val = old_val + nr;
+	int ret = old_val != u;
 
+	VM_BUG_ON_PAGE(ret && (unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
 		__page_ref_mod_unless(page, nr, ret);
 	return ret;
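
The check above treats _refcount as a 32-bit unsigned value: an "overflow"
means the unsigned addition wrapped past UINT_MAX, and an "underflow" means
a subtraction dropped below zero and wrapped to a huge unsigned value. A
minimal standalone C sketch of the same predicates (a userspace model, not
kernel code; the helper names are invented for illustration, and the math
is done in unsigned so the wraparound is well-defined without the kernel's
-fno-strict-overflow):

/* refcheck.c - model of the wraparound checks used in this patch. */
#include <limits.h>
#include <stdio.h>

/* page_ref_add()-style check: overflow iff new < old compared as unsigned */
static int add_overflows(int old_val, int nr)
{
        unsigned int new_val = (unsigned int)old_val + (unsigned int)nr;

        return new_val < (unsigned int)old_val;
}

/* page_ref_sub()-style check: underflow iff new > old compared as unsigned */
static int sub_underflows(int old_val, int nr)
{
        unsigned int new_val = (unsigned int)old_val - (unsigned int)nr;

        return new_val > (unsigned int)old_val;
}

int main(void)
{
        printf("0 - 1:  underflow=%d\n", sub_underflows(0, 1));  /* 1: wraps to UINT_MAX */
        printf("5 - 1:  underflow=%d\n", sub_underflows(5, 1));  /* 0 */
        printf("-1 + 1: overflow=%d\n",  add_overflows(-1, 1));  /* 1: wraps past UINT_MAX */
        printf("5 + 1:  overflow=%d\n",  add_overflows(5, 1));   /* 0 */
        return 0;
}

Because old_val comes from atomic_fetch_*(), the comparison always uses the
value the atomic operation actually observed, so a concurrent modification
cannot hide a wraparound from the check.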
From patchwork Wed Jan 26 18:34:22 2022
X-Patchwork-Submitter: Pasha Tatashin <pasha.tatashin@soleen.com>
X-Patchwork-Id: 12725586
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 2/9] mm: Avoid using set_page_count() in set_page_refcounted()
Date: Wed, 26 Jan 2022 18:34:22 +0000
Message-Id: <20220126183429.1840447-3-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

set_page_refcounted() converts a non-refcounted page (one with
page->_refcount == 0) into a refcounted page by setting _refcount to 1.
The current approach uses the following logic:

	VM_BUG_ON_PAGE(page_ref_count(page), page);
	set_page_count(page, 1);

However, if _refcount changes from 0 to 1 between the VM_BUG_ON_PAGE()
and the set_page_count() we can break _refcount, which can cause other
problems such as memory corruption. Instead, use a safer method:
increment _refcount first, and verify that at increment time it was
indeed 1:
	refcnt = page_ref_inc_return(page);
	VM_BUG_ON_PAGE(refcnt != 1, page);

Using page_ref_inc_return() avoids unconditionally overwriting the
_refcount value with set_page_count(), and the return value is checked.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/internal.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 4c2d06a2f50b..6b74f7f32613 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -141,9 +141,11 @@ static inline bool page_evictable(struct page *page)
  */
 static inline void set_page_refcounted(struct page *page)
 {
+	int refcnt;
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(page_ref_count(page), page);
-	set_page_count(page, 1);
+	refcnt = page_ref_inc_return(page);
+	VM_BUG_ON_PAGE(refcnt != 1, page);
 }
 
 extern unsigned long highest_memmap_pfn;
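
The race being closed here is the classic check-then-act gap. A minimal
userspace sketch with C11 atomics (hypothetical names; the kernel versions
are the page_ref_* helpers from patch 1):

#include <assert.h>
#include <stdatomic.h>

/* Old pattern: the load in the check and the store are two separate
 * operations, so another CPU can modify the counter in between and its
 * update is then silently destroyed by the store.
 */
static void set_refcounted_racy(atomic_int *refcount)
{
        assert(atomic_load(refcount) == 0);
        /* <-- a concurrent increment landing here is lost */
        atomic_store(refcount, 1);
}

/* New pattern: one indivisible read-modify-write, then verify the value
 * this CPU itself produced; a racing modification makes the assertion
 * fire instead of corrupting the counter.
 */
static void set_refcounted_safe(atomic_int *refcount)
{
        int refcnt = atomic_fetch_add(refcount, 1) + 1;

        assert(refcnt == 1);
}

int main(void)
{
        atomic_int ref = 0;

        set_refcounted_safe(&ref);
        return atomic_load(&ref) == 1 ? 0 : 1;
}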
From patchwork Wed Jan 26 18:34:23 2022
X-Patchwork-Submitter: Pasha Tatashin <pasha.tatashin@soleen.com>
X-Patchwork-Id: 12725587
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 3/9] mm: remove set_page_count() from page_frag_alloc_align
Date: Wed, 26 Jan 2022 18:34:23 +0000
Message-Id: <20220126183429.1840447-4-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

set_page_count() unconditionally resets the value of _refcount, which is
dangerous because the new value is not programmatically verified; instead
we rely on comments like "OK, page count is 0, we can safely set it".

Add a new refcount function, page_ref_add_return(), that returns the new
refcount value after the addition, and use the return value to verify
that _refcount was indeed the expected one.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 11 +++++++++++
 mm/page_alloc.c          |  6 ++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index fe4864f7f69c..03e21ce2f1bd 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -115,6 +115,17 @@ static inline void init_page_count(struct page *page)
 	set_page_count(page, 1);
 }
 
+static inline int page_ref_add_return(struct page *page, int nr)
+{
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, nr, new_val);
+	return new_val;
+}
+
 static inline void page_ref_add(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_add(nr, &page->_refcount);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8dd6399bafb5..5a9167bda279 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5528,6 +5528,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
+	int refcnt;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -5566,8 +5567,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* if size can vary use size else just use PAGE_SIZE */
 		size = nc->size;
 #endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+		refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
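
The same verify-by-return-value idiom in miniature (a userspace sketch with
C11 atomics; FRAG_CACHE_MAX and the helper are stand-ins invented for the
example, not the kernel symbols):

#include <assert.h>
#include <stdatomic.h>

#define FRAG_CACHE_MAX 32768            /* stand-in for PAGE_FRAG_CACHE_MAX_SIZE */

static atomic_int refcount;             /* stand-in for page->_refcount */

/* model of page_ref_add_return(): report the value after the addition */
static int ref_add_return(atomic_int *ref, int nr)
{
        return atomic_fetch_add(ref, nr) + nr;
}

int main(void)
{
        /* Previously a comment claimed "count is 0" and a blind store
         * followed. Now the expectation is checked, not assumed: if the
         * count was anything but 0, the assertion fires.
         */
        int refcnt = ref_add_return(&refcount, FRAG_CACHE_MAX + 1);

        assert(refcnt == FRAG_CACHE_MAX + 1);
        return 0;
}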
From patchwork Wed Jan 26 18:34:24 2022
X-Patchwork-Submitter: Pasha Tatashin <pasha.tatashin@soleen.com>
X-Patchwork-Id: 12725588
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 4/9] mm: avoid using set_page_count() when pages are freed into allocator
Date: Wed, 26 Jan 2022 18:34:24 +0000
Message-Id: <20220126183429.1840447-5-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

When struct pages are first initialized, the page->_refcount field is
set to 1. Later, when pages are freed into the allocator, we set
_refcount to 0 via set_page_count(). Unconditionally resetting _refcount
is dangerous.

Instead, use page_ref_dec_return() and verify that the resulting
_refcount is what is expected.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/page_alloc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5a9167bda279..0fa100152a2a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1668,6 +1668,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
 	unsigned int loop;
+	int refcnt;
 
 	/*
 	 * When initializing the memmap, __init_single_page() sets the refcount
@@ -1678,10 +1679,12 @@ void __free_pages_core(struct page *page, unsigned int order)
 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
 		prefetchw(p + 1);
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	}
 	__ClearPageReserved(p);
-	set_page_count(p, 0);
+	refcnt = page_ref_dec_return(p);
+	VM_BUG_ON_PAGE(refcnt, p);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 
@@ -2253,10 +2256,12 @@ void __init init_cma_reserved_pageblock(struct page *page)
 {
 	unsigned i = pageblock_nr_pages;
 	struct page *p = page;
+	int refcnt;
 
 	do {
 		__ClearPageReserved(p);
-		set_page_count(p, 0);
+		refcnt = page_ref_dec_return(p);
+		VM_BUG_ON_PAGE(refcnt, p);
 	} while (++p, --i);
 
 	set_pageblock_migratetype(page, MIGRATE_CMA);
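
One concrete failure mode this catches is a page freed into the allocator
twice. A userspace sketch (C11 atomics, illustrative names only):

#include <stdatomic.h>
#include <stdio.h>

/* model of page_ref_dec_return() */
static int ref_dec_return(atomic_int *ref)
{
        return atomic_fetch_sub(ref, 1) - 1;
}

int main(void)
{
        atomic_int refcount = 1;        /* as left by __init_single_page() */

        /* normal free: 1 -> 0, the VM_BUG_ON_PAGE(refcnt, p) analogue passes */
        printf("first free:  refcnt=%d\n", ref_dec_return(&refcount)); /* 0 */

        /* a blind set_page_count(p, 0) would silently "succeed" here; the
         * checked decrement instead reports -1, which the check catches
         */
        printf("second free: refcnt=%d\n", ref_dec_return(&refcount)); /* -1 */
        return 0;
}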
From patchwork Wed Jan 26 18:34:25 2022
X-Patchwork-Submitter: Pasha Tatashin <pasha.tatashin@soleen.com>
X-Patchwork-Id: 12725589
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 5/9] mm: rename init_page_count() -> page_ref_init()
Date: Wed, 26 Jan 2022 18:34:25 +0000
Message-Id: <20220126183429.1840447-6-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

Now that set_page_count() is no longer called from outside and is about
to be removed, init_page_count() is the only function left that
unconditionally sets _refcount, and it is restricted to setting it to 1.

Align init_page_count() with the other page_ref_* functions by renaming
it to page_ref_init().
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
 arch/m68k/mm/motorola.c  |  2 +-
 include/linux/mm.h       |  2 +-
 include/linux/page_ref.h | 10 +++++++---
 mm/page_alloc.c          |  2 +-
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ecbe948f4c1a..dd3b77d03d5c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -133,7 +133,7 @@ void __init init_pointer_table(void *table, int type)
 
 	/* unreserve the page so it's possible to free that page */
 	__ClearPageReserved(PD_PAGE(dp));
-	init_page_count(PD_PAGE(dp));
+	page_ref_init(PD_PAGE(dp));
 
 	return;
 }

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 45bcd6f78141..cd8b9a592235 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2467,7 +2467,7 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
 static inline void free_reserved_page(struct page *page)
 {
 	ClearPageReserved(page);
-	init_page_count(page);
+	page_ref_init(page);
 	__free_page(page);
 	adjust_managed_page_count(page, 1);
 }

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 03e21ce2f1bd..1af12a0d7ba1 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -107,10 +107,14 @@ static inline void folio_set_count(struct folio *folio, int v)
 }
 
 /*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
+ * Setup the page refcount to one before being freed into the page allocator.
+ * The memory might not be initialized and therefore there cannot be any
+ * assumptions about the current value of page->_refcount. This call should be
+ * done during boot when memory is being initialized, during memory hotplug
+ * when new memory is added, or when previously reserved memory is unreserved,
+ * as this is the first time the kernel takes control of the given memory.
  */
-static inline void init_page_count(struct page *page)
+static inline void page_ref_init(struct page *page)
 {
 	set_page_count(page, 1);
 }

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0fa100152a2a..cbe444d74e8a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1570,7 +1570,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	page_ref_init(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
From patchwork Wed Jan 26 18:34:26 2022
X-Patchwork-Submitter: Pasha Tatashin <pasha.tatashin@soleen.com>
X-Patchwork-Id: 12725590
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 6/9] mm: remove set_page_count()
Date: Wed, 26 Jan 2022 18:34:26 +0000
Message-Id: <20220126183429.1840447-7-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

set_page_count() is dangerous because it resets _refcount to an
arbitrary value. Instead, we now initialize _refcount to 1 only once,
and the rest of the time we use add/dec/cmpxchg, so that there is a
continuous, verifiable track of the counter.

Remove set_page_count() and add new tracing hooks to page_ref_init().
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 27 ++++++++-----------
 include/trace/events/page_ref.h | 46 ++++++++++++++++++++++++++++-----
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 1af12a0d7ba1..d7316881626c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -7,7 +7,7 @@
 #include <linux/page-flags.h>
 #include <linux/tracepoint-defs.h>
 
-DECLARE_TRACEPOINT(page_ref_set);
+DECLARE_TRACEPOINT(page_ref_init);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
@@ -26,7 +26,7 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
  */
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
-extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_init(struct page *page);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
@@ -38,7 +38,7 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 
 #define page_ref_tracepoint_active(t) false
 
-static inline void __page_ref_set(struct page *page, int v)
+static inline void __page_ref_init(struct page *page)
 {
 }
 static inline void __page_ref_mod(struct page *page, int v)
@@ -94,18 +94,6 @@ static inline int page_count(const struct page *page)
 	return folio_ref_count(page_folio(page));
 }
 
-static inline void set_page_count(struct page *page, int v)
-{
-	atomic_set(&page->_refcount, v);
-	if (page_ref_tracepoint_active(page_ref_set))
-		__page_ref_set(page, v);
-}
-
-static inline void folio_set_count(struct folio *folio, int v)
-{
-	set_page_count(&folio->page, v);
-}
-
 /*
  * Setup the page refcount to one before being freed into the page allocator.
  * The memory might not be initialized and therefore there cannot be any
@@ -116,7 +104,14 @@ static inline void folio_set_count(struct folio *folio, int v)
  */
 static inline void page_ref_init(struct page *page)
 {
-	set_page_count(page, 1);
+	atomic_set(&page->_refcount, 1);
+	if (page_ref_tracepoint_active(page_ref_init))
+		__page_ref_init(page);
+}
+
+static inline void folio_ref_init(struct folio *folio)
+{
+	page_ref_init(&folio->page);
 }
 
 static inline int page_ref_add_return(struct page *page, int nr)

diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..87551bb1df9e 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -10,6 +10,45 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>
 
+DECLARE_EVENT_CLASS(page_ref_init_template,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d",
+		__entry->pfn,
+		show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		__entry->count,
+		__entry->mapcount, __entry->mapping, __entry->mt)
+);
+
+DEFINE_EVENT(page_ref_init_template, page_ref_init,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page)
+);
+
 DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_PROTO(struct page *page, int v),
@@ -44,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_set,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
 
 	TP_PROTO(struct page *page, int v),

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index f3b2c9d3ece2..e32149734122 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -5,12 +5,12 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/page_ref.h>
 
-void __page_ref_set(struct page *page, int v)
+void __page_ref_init(struct page *page)
 {
-	trace_page_ref_set(page, v);
+	trace_page_ref_init(page);
 }
-EXPORT_SYMBOL(__page_ref_set);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_set);
+EXPORT_SYMBOL(__page_ref_init);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
 void __page_ref_mod(struct page *page, int v)
 {
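
The invariant this patch establishes, reduced to a userspace sketch
(hypothetical names; the toy trace buffer stands in for the real
tracepoints): _refcount is stored exactly once, by page_ref_init(), and
every later change is an atomic delta, so a tracer that sees the initial
value plus each reported new value gets a gap-free history of the counter.

#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int refcount;
static int trace_log[16], trace_len;    /* toy tracepoint buffer */

static void ref_init(atomic_int *ref)   /* the one remaining plain store */
{
        atomic_store(ref, 1);
        trace_log[trace_len++] = 1;     /* page_ref_init event */
}

static int ref_mod(atomic_int *ref, int nr)     /* every other change */
{
        int new_val = atomic_fetch_add(ref, nr) + nr;

        trace_log[trace_len++] = new_val;       /* mod_and_return event */
        return new_val;
}

int main(void)
{
        ref_init(&refcount);
        ref_mod(&refcount, 2);
        ref_mod(&refcount, -1);

        /* consecutive entries differ exactly by the applied deltas; no
         * entry can be an arbitrary overwrite as with set_page_count()
         */
        for (int i = 0; i < trace_len; i++)
                printf("refcount -> %d\n", trace_log[i]);       /* 1, 3, 2 */
        assert(atomic_load(&refcount) == 2);
        return 0;
}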
From patchwork Wed Jan 26 18:34:27 2022
X-Patchwork-Submitter: Pasha Tatashin <pasha.tatashin@soleen.com>
X-Patchwork-Id: 12725591
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 7/9] mm: simplify page_ref_* functions
Date: Wed, 26 Jan 2022 18:34:27 +0000
Message-Id: <20220126183429.1840447-8-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

Now that we are using atomic_fetch_* variants to add/sub/inc/dec the
page _refcount, it makes sense to combine the page_ref_* return and
non-return functions. Also remove the extra trace points for the
non-return variants. This improves traceability by always recording the
new _refcount value after the modification has occurred.
Signed-off-by: Pasha Tatashin --- include/linux/page_ref.h | 102 +++++++++----------------------- include/trace/events/page_ref.h | 18 +----- mm/debug_page_ref.c | 14 ----- 3 files changed, 31 insertions(+), 103 deletions(-) diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h index d7316881626c..243fc60ae6c8 100644 --- a/include/linux/page_ref.h +++ b/include/linux/page_ref.h @@ -8,8 +8,6 @@ #include DECLARE_TRACEPOINT(page_ref_init); -DECLARE_TRACEPOINT(page_ref_mod); -DECLARE_TRACEPOINT(page_ref_mod_and_test); DECLARE_TRACEPOINT(page_ref_mod_and_return); DECLARE_TRACEPOINT(page_ref_mod_unless); DECLARE_TRACEPOINT(page_ref_freeze); @@ -27,8 +25,6 @@ DECLARE_TRACEPOINT(page_ref_unfreeze); #define page_ref_tracepoint_active(t) tracepoint_enabled(t) extern void __page_ref_init(struct page *page); -extern void __page_ref_mod(struct page *page, int v); -extern void __page_ref_mod_and_test(struct page *page, int v, int ret); extern void __page_ref_mod_and_return(struct page *page, int v, int ret); extern void __page_ref_mod_unless(struct page *page, int v, int u); extern void __page_ref_freeze(struct page *page, int v, int ret); @@ -41,12 +37,6 @@ extern void __page_ref_unfreeze(struct page *page, int v); static inline void __page_ref_init(struct page *page) { } -static inline void __page_ref_mod(struct page *page, int v) -{ -} -static inline void __page_ref_mod_and_test(struct page *page, int v, int ret) -{ -} static inline void __page_ref_mod_and_return(struct page *page, int v, int ret) { } @@ -127,12 +117,7 @@ static inline int page_ref_add_return(struct page *page, int nr) static inline void page_ref_add(struct page *page, int nr) { - int old_val = atomic_fetch_add(nr, &page->_refcount); - int new_val = old_val + nr; - - VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod)) - __page_ref_mod(page, nr); + page_ref_add_return(page, nr); } static inline void folio_ref_add(struct folio *folio, int nr) @@ -140,30 +125,25 @@ static inline void folio_ref_add(struct folio *folio, int nr) page_ref_add(&folio->page, nr); } -static inline void page_ref_sub(struct page *page, int nr) +static inline int page_ref_sub_return(struct page *page, int nr) { int old_val = atomic_fetch_sub(nr, &page->_refcount); int new_val = old_val - nr; VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod)) - __page_ref_mod(page, -nr); + if (page_ref_tracepoint_active(page_ref_mod_and_return)) + __page_ref_mod_and_return(page, -nr, new_val); + return new_val; } -static inline void folio_ref_sub(struct folio *folio, int nr) +static inline void page_ref_sub(struct page *page, int nr) { - page_ref_sub(&folio->page, nr); + page_ref_sub_return(page, nr); } -static inline int page_ref_sub_return(struct page *page, int nr) +static inline void folio_ref_sub(struct folio *folio, int nr) { - int old_val = atomic_fetch_sub(nr, &page->_refcount); - int new_val = old_val - nr; - - VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_return)) - __page_ref_mod_and_return(page, -nr, new_val); - return new_val; + page_ref_sub(&folio->page, nr); } static inline int folio_ref_sub_return(struct folio *folio, int nr) @@ -171,14 +151,20 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr) return page_ref_sub_return(&folio->page, nr); } -static inline void page_ref_inc(struct page *page) +static inline int 
page_ref_inc_return(struct page *page) { int old_val = atomic_fetch_inc(&page->_refcount); int new_val = old_val + 1; VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod)) - __page_ref_mod(page, 1); + if (page_ref_tracepoint_active(page_ref_mod_and_return)) + __page_ref_mod_and_return(page, 1, new_val); + return new_val; +} + +static inline void page_ref_inc(struct page *page) +{ + page_ref_inc_return(page); } static inline void folio_ref_inc(struct folio *folio) @@ -186,14 +172,20 @@ static inline void folio_ref_inc(struct folio *folio) page_ref_inc(&folio->page); } -static inline void page_ref_dec(struct page *page) +static inline int page_ref_dec_return(struct page *page) { int old_val = atomic_fetch_dec(&page->_refcount); int new_val = old_val - 1; VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod)) - __page_ref_mod(page, -1); + if (page_ref_tracepoint_active(page_ref_mod_and_return)) + __page_ref_mod_and_return(page, -1, new_val); + return new_val; +} + +static inline void page_ref_dec(struct page *page) +{ + page_ref_dec_return(page); } static inline void folio_ref_dec(struct folio *folio) @@ -203,14 +195,7 @@ static inline void folio_ref_dec(struct folio *folio) static inline int page_ref_sub_and_test(struct page *page, int nr) { - int old_val = atomic_fetch_sub(nr, &page->_refcount); - int new_val = old_val - nr; - int ret = new_val == 0; - - VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_test)) - __page_ref_mod_and_test(page, -nr, ret); - return ret; + return page_ref_sub_return(page, nr) == 0; } static inline int folio_ref_sub_and_test(struct folio *folio, int nr) @@ -218,17 +203,6 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr) return page_ref_sub_and_test(&folio->page, nr); } -static inline int page_ref_inc_return(struct page *page) -{ - int old_val = atomic_fetch_inc(&page->_refcount); - int new_val = old_val + 1; - - VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_return)) - __page_ref_mod_and_return(page, 1, new_val); - return new_val; -} - static inline int folio_ref_inc_return(struct folio *folio) { return page_ref_inc_return(&folio->page); @@ -236,14 +210,7 @@ static inline int folio_ref_inc_return(struct folio *folio) static inline int page_ref_dec_and_test(struct page *page) { - int old_val = atomic_fetch_dec(&page->_refcount); - int new_val = old_val - 1; - int ret = new_val == 0; - - VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_test)) - __page_ref_mod_and_test(page, -1, ret); - return ret; + return page_ref_dec_return(page) == 0; } static inline int folio_ref_dec_and_test(struct folio *folio) @@ -251,17 +218,6 @@ static inline int folio_ref_dec_and_test(struct folio *folio) return page_ref_dec_and_test(&folio->page); } -static inline int page_ref_dec_return(struct page *page) -{ - int old_val = atomic_fetch_dec(&page->_refcount); - int new_val = old_val - 1; - - VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page); - if (page_ref_tracepoint_active(page_ref_mod_and_return)) - __page_ref_mod_and_return(page, -1, new_val); - return new_val; -} - static inline int folio_ref_dec_return(struct folio *folio) { return page_ref_dec_return(&folio->page); diff --git a/include/trace/events/page_ref.h 
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 87551bb1df9e..35cd795aa7c6 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -49,7 +49,7 @@ DEFINE_EVENT(page_ref_init_template, page_ref_init,
 	TP_ARGS(page)
 );
 
-DECLARE_EVENT_CLASS(page_ref_mod_template,
+DECLARE_EVENT_CLASS(page_ref_unfreeze_template,
 
 	TP_PROTO(struct page *page, int v),
 
@@ -83,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		  __entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
@@ -126,13 +119,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		  __entry->val, __entry->ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,
-
-	TP_PROTO(struct page *page, int v, int ret),
-
-	TP_ARGS(page, v, ret)
-);
-
 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
@@ -154,7 +140,7 @@ DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,
 	TP_ARGS(page, v, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_unfreeze,
+DEFINE_EVENT(page_ref_unfreeze_template, page_ref_unfreeze,
 
 	TP_PROTO(struct page *page, int v),
 
diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index e32149734122..1de9d93cca25 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -12,20 +12,6 @@ void __page_ref_init(struct page *page)
 EXPORT_SYMBOL(__page_ref_init);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
-void __page_ref_mod(struct page *page, int v)
-{
-	trace_page_ref_mod(page, v);
-}
-EXPORT_SYMBOL(__page_ref_mod);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod);
-
-void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-	trace_page_ref_mod_and_test(page, v, ret);
-}
-EXPORT_SYMBOL(__page_ref_mod_and_test);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_test);
-
 void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 	trace_page_ref_mod_and_return(page, v, ret);
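Taken together, this consolidation means every modification path funnels through a *_return helper that checks the returned old value, so a double put is reported at the decrement that underflows rather than long after the damage is done. A hedged userspace illustration of the decrement path (demo_ref_dec_return is an invented name; C11 has no atomic_fetch_dec, so atomic_fetch_sub stands in):

/* Userspace sketch of underflow detection on the decrement path. */
#include <stdatomic.h>
#include <stdio.h>

static int demo_ref_dec_return(atomic_int *refs)
{
	int old_val = atomic_fetch_sub(refs, 1);
	int new_val = old_val - 1;

	/* Mirrors the check above: a decrement must never make the
	 * unsigned view of the counter grow, which is what 0 -> -1 does. */
	if ((unsigned int)new_val > (unsigned int)old_val)
		fprintf(stderr, "refcount underflow: %d -> %d\n",
			old_val, new_val);
	return new_val;
}

int main(void)
{
	atomic_int refs = 1;

	demo_ref_dec_return(&refs);	/* fine: 1 -> 0 */
	demo_ref_dec_return(&refs);	/* double put: reported immediately */
	return 0;
}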
From patchwork Wed Jan 26 18:34:28 2022
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 8/9] mm: do not use atomic_set_release in page_ref_unfreeze()
Date: Wed, 26 Jan 2022 18:34:28 +0000
Message-Id: <20220126183429.1840447-9-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

In page_ref_unfreeze() we set the new _refcount value after verifying that the old value was indeed 0.
	VM_BUG_ON_PAGE(page_count(page) != 0, page);
	< the _refcount may change here >
	atomic_set_release(&page->_refcount, count);

To avoid the small gap where _refcount may change, verify the value of _refcount at the time of the set operation: use atomic_xchg_release(), and check at set time that the old value was 0.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 243fc60ae6c8..9efabeff4e06 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -322,10 +322,9 @@ static inline int folio_ref_freeze(struct folio *folio, int count)
 
 static inline void page_ref_unfreeze(struct page *page, int count)
 {
-	VM_BUG_ON_PAGE(page_count(page) != 0, page);
-	VM_BUG_ON(count == 0);
+	int old_val = atomic_xchg_release(&page->_refcount, count);
 
-	atomic_set_release(&page->_refcount, count);
+	VM_BUG_ON_PAGE(count == 0 || old_val != 0, page);
 	if (page_ref_tracepoint_active(page_ref_unfreeze))
 		__page_ref_unfreeze(page, count);
 }
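To see why the exchange closes the gap described above, compare the two shapes in plain C11 atomics; the function names are invented for illustration and the assertions stand in for VM_BUG_ON_PAGE():

#include <assert.h>
#include <stdatomic.h>

/* Old shape: check, then store. Another thread can change the counter
 * between the two steps, and the store silently overwrites it. */
static void unfreeze_check_then_set(atomic_int *refs, int count)
{
	assert(atomic_load(refs) == 0);
	/* < the counter may change here > */
	atomic_store_explicit(refs, count, memory_order_release);
}

/* New shape: swap and verify in a single atomic step. Whatever value
 * was present at the moment of the set is returned, so nothing can
 * slip into the gap unnoticed. */
static void unfreeze_xchg(atomic_int *refs, int count)
{
	int old_val = atomic_exchange_explicit(refs, count,
					       memory_order_release);

	assert(count != 0 && old_val == 0);
}

int main(void)
{
	atomic_int refs = 0;

	unfreeze_xchg(&refs, 1);	/* atomically 0 -> 1, verified */
	atomic_store(&refs, 0);		/* refreeze for the demo */
	unfreeze_check_then_set(&refs, 1);
	return 0;
}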
From patchwork Wed Jan 26 18:34:29 2022
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v3 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze().
Date: Wed, 26 Jan 2022 18:34:29 +0000
Message-Id: <20220126183429.1840447-10-pasha.tatashin@soleen.com>
In-Reply-To: <20220126183429.1840447-1-pasha.tatashin@soleen.com>

page_ref_freeze() and page_ref_unfreeze() are designed to be used as a pair. They protect critical sections where a struct page can be modified.

page_ref_unfreeze() is implemented with a _release() atomic operation, but page_ref_freeze() is not, as it is assumed that cmpxchg provides a full barrier. Instead, use the matching atomic_cmpxchg_acquire() so that the memory ordering is followed explicitly.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 9efabeff4e06..45be731d8919 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -308,7 +308,8 @@ static inline bool folio_try_get_rcu(struct folio *folio)
 
 static inline int page_ref_freeze(struct page *page, int count)
 {
-	int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+	int old_val = atomic_cmpxchg_acquire(&page->_refcount, count, 0);
+	int ret = likely(old_val == count);
 
 	if (page_ref_tracepoint_active(page_ref_freeze))
 		__page_ref_freeze(page, count, ret);
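As a rough C11 analogue of the pairing this patch makes explicit (demo_freeze and demo_unfreeze are illustrative names, not the kernel helpers): a successful compare-exchange with acquire ordering keeps accesses in the frozen critical section from being reordered before it, and the exchange with release ordering publishes those accesses before the restored count becomes visible:

#include <stdatomic.h>
#include <stdbool.h>

static bool demo_freeze(atomic_int *refs, int count)
{
	int expected = count;

	/* Acquire on success pairs with the release in demo_unfreeze():
	 * nothing from the critical section may be hoisted above this. */
	return atomic_compare_exchange_strong_explicit(refs, &expected, 0,
						       memory_order_acquire,
						       memory_order_relaxed);
}

static void demo_unfreeze(atomic_int *refs, int count)
{
	/* Release: modifications made while frozen are visible to anyone
	 * who subsequently observes the restored count. */
	(void)atomic_exchange_explicit(refs, count, memory_order_release);
}

int main(void)
{
	atomic_int refs = 1;

	if (demo_freeze(&refs, 1)) {
		/* ...modify the protected structure here... */
		demo_unfreeze(&refs, 1);
	}
	return 0;
}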