From patchwork Tue Dec 21 15:01:32 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v2 1/9] mm: add overflow and underflow checks for page->_refcount
Date: Tue, 21 Dec 2021 15:01:32 +0000
Message-Id: <20211221150140.988298-2-pasha.tatashin@soleen.com>
In-Reply-To: <20211221150140.988298-1-pasha.tatashin@soleen.com>
References: <20211221150140.988298-1-pasha.tatashin@soleen.com>

The problems with page->_refcount are hard to debug, because usually by
the time they are detected the damage has occurred long ago. Yet, the
problems caused by an invalid page refcount can be catastrophic and lead
to memory corruption.

Reduce the scope in which _refcount problems can go unnoticed by adding
underflow and overflow checks to the functions that modify _refcount.
Use the atomic_fetch_* functions to obtain the old value of _refcount
and use it to check for overflow/underflow.
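As an illustration of the pattern (an editor's sketch, not part of the
patch), the same fetch-then-check logic can be modeled in portable C11,
with a plain atomic_int standing in for page->_refcount and assert()
standing in for VM_BUG_ON_PAGE(); the unsigned comparison of the old
and new values is what catches the counter wrapping in either direction:

#include <assert.h>
#include <stdatomic.h>

static atomic_int refcount;

/* Mirrors the patched page_ref_add(): fetch the old value and treat an
 * unsigned wrap of the counter as a fatal bug. */
static void ref_add_checked(int nr)
{
        int old_val = atomic_fetch_add(&refcount, nr);
        int new_val = old_val + nr;

        assert((unsigned int)new_val >= (unsigned int)old_val);
}

/* Mirrors the patched page_ref_sub(): an underflow makes the new value,
 * viewed as unsigned, larger than the old one. */
static void ref_sub_checked(int nr)
{
        int old_val = atomic_fetch_sub(&refcount, nr);
        int new_val = old_val - nr;

        assert((unsigned int)new_val <= (unsigned int)old_val);
}

int main(void)
{
        ref_add_checked(1);
        ref_sub_checked(1);
        /* another ref_sub_checked(1) here would abort: 0 - 1 wraps */
        return 0;
}

The check adds no extra atomic work: atomic_fetch_add() already returns
the prior value, so detecting the wrap is a plain integer comparison.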
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 59 +++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 16 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 2e677e6ad09f..fe4864f7f69c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -117,7 +117,10 @@ static inline void init_page_count(struct page *page)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	atomic_add(nr, &page->_refcount);
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, nr);
 }
@@ -129,7 +132,10 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 
 static inline void page_ref_sub(struct page *page, int nr)
 {
-	atomic_sub(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -nr);
 }
@@ -141,11 +147,13 @@ static inline void folio_ref_sub(struct folio *folio, int nr)
 
 static inline int page_ref_sub_return(struct page *page, int nr)
 {
-	int ret = atomic_sub_return(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -155,7 +163,10 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 
 static inline void page_ref_inc(struct page *page)
 {
-	atomic_inc(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, 1);
 }
@@ -167,7 +178,10 @@ static inline void folio_ref_inc(struct folio *folio)
 
 static inline void page_ref_dec(struct page *page)
 {
-	atomic_dec(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod))
 		__page_ref_mod(page, -1);
 }
@@ -179,8 +193,11 @@ static inline void folio_ref_dec(struct folio *folio)
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int ret = atomic_sub_and_test(nr, &page->_refcount);
+	int old_val = atomic_fetch_sub(nr, &page->_refcount);
+	int new_val = old_val - nr;
+	int ret = new_val == 0;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -nr, ret);
 	return ret;
@@ -193,11 +210,13 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
 
 static inline int page_ref_inc_return(struct page *page)
 {
-	int ret = atomic_inc_return(&page->_refcount);
+	int old_val = atomic_fetch_inc(&page->_refcount);
+	int new_val = old_val + 1;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_inc_return(struct folio *folio)
@@ -207,8 +226,11 @@ static inline int folio_ref_inc_return(struct folio *folio)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int ret = atomic_dec_and_test(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
+	int ret = new_val == 0;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_test))
 		__page_ref_mod_and_test(page, -1, ret);
 	return ret;
@@ -221,11 +243,13 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
 
 static inline int page_ref_dec_return(struct page *page)
 {
-	int ret = atomic_dec_return(&page->_refcount);
+	int old_val = atomic_fetch_dec(&page->_refcount);
+	int new_val = old_val - 1;
 
+	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, ret);
-	return ret;
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
 }
 
 static inline int folio_ref_dec_return(struct folio *folio)
@@ -235,8 +259,11 @@ static inline int folio_ref_dec_return(struct folio *folio)
 
 static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 {
-	bool ret = atomic_add_unless(&page->_refcount, nr, u);
+	int old_val = atomic_fetch_add_unless(&page->_refcount, nr, u);
+	int new_val = old_val + nr;
+	int ret = old_val != u;
 
+	VM_BUG_ON_PAGE(ret && (unsigned int)new_val < (unsigned int)old_val, page);
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
 		__page_ref_mod_unless(page, nr, ret);
 	return ret;

From patchwork Tue Dec 21 15:01:33 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v2 2/9] mm: Avoid using set_page_count() in set_page_refcounted()
Date: Tue, 21 Dec 2021 15:01:33 +0000
Message-Id: <20211221150140.988298-3-pasha.tatashin@soleen.com>
In-Reply-To: <20211221150140.988298-1-pasha.tatashin@soleen.com>
References: <20211221150140.988298-1-pasha.tatashin@soleen.com>

set_page_refcounted() converts a non-refcounted page, one with
(page->_refcount == 0), into a refcounted page by setting _refcount to
1. The current approach uses the following logic:

	VM_BUG_ON_PAGE(page_ref_count(page), page);
	set_page_count(page, 1);

However, if _refcount changes from 0 to 1 between the VM_BUG_ON_PAGE()
and the set_page_count() we can break _refcount, which can cause other
problems such as memory corruption. Instead, use a safer method:
increment _refcount first and verify that at increment time it was
indeed 1:

	refcnt = page_ref_inc_return(page);
	VM_BUG_ON_PAGE(refcnt != 1, page);

Use page_ref_inc_return() to avoid unconditionally overwriting the
_refcount value with set_page_count(), and check the return value.
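An editor's sketch (portable C11, not part of the patch) of why the
check-then-set logic is racy while increment-then-verify is not, again
with assert() in place of VM_BUG_ON_PAGE():

#include <assert.h>
#include <stdatomic.h>

static atomic_int refcount;

/* Models the old code: the check and the store are two separate
 * operations, so another thread can raise the counter in between and
 * the store silently overwrites its reference. */
static void set_refcounted_racy(void)
{
        assert(atomic_load(&refcount) == 0);
        /* window: a concurrent increment here is lost */
        atomic_store(&refcount, 1);
}

/* Models the new code: the read-modify-write is a single atomic step,
 * so a concurrent modification always shows up in the result. */
static void set_refcounted_safe(void)
{
        int refcnt = atomic_fetch_add(&refcount, 1) + 1;

        assert(refcnt == 1);
}

int main(void)
{
        set_refcounted_safe();          /* 0 -> 1, verified atomically */
        (void)set_refcounted_racy;      /* shown only for contrast */
        return 0;
}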
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 mm/internal.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index deb9bda18e59..4d45ef2ffea6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -162,9 +162,11 @@ static inline bool page_evictable(struct page *page)
  */
 static inline void set_page_refcounted(struct page *page)
 {
+	int refcnt;
+
 	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(page_ref_count(page), page);
-	set_page_count(page, 1);
+	refcnt = page_ref_inc_return(page);
+	VM_BUG_ON_PAGE(refcnt != 1, page);
 }
 
 extern unsigned long highest_memmap_pfn;

From patchwork Tue Dec 21 15:01:34 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v2 3/9] mm: remove set_page_count() from page_frag_alloc_align
Date: Tue, 21 Dec 2021 15:01:34 +0000
Message-Id: <20211221150140.988298-4-pasha.tatashin@soleen.com>
In-Reply-To: <20211221150140.988298-1-pasha.tatashin@soleen.com>
References: <20211221150140.988298-1-pasha.tatashin@soleen.com>

set_page_count() unconditionally resets the value of _refcount, which is
dangerous because the reset is not programmatically verified; instead we
rely on comments like: "OK, page count is 0, we can safely set it".

Add a new refcount function, page_ref_add_return(), which returns the
new refcount value after the addition. Use the return value to verify
that _refcount was indeed the expected one.
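An editor's sketch of the idea in portable C11 (not part of the patch;
FRAG_BIAS is an arbitrary stand-in for the kernel's
PAGE_FRAG_CACHE_MAX_SIZE + 1): the caller's belief that the counter is
currently zero becomes a verified condition instead of a comment:

#include <assert.h>
#include <stdatomic.h>

#define FRAG_BIAS 128   /* stand-in for PAGE_FRAG_CACHE_MAX_SIZE + 1 */

static atomic_int refcount;

/* Models page_ref_add_return(): add and report the new value, with the
 * usual wrap check. */
static int ref_add_return(int nr)
{
        int old_val = atomic_fetch_add(&refcount, nr);
        int new_val = old_val + nr;

        assert((unsigned int)new_val >= (unsigned int)old_val);
        return new_val;
}

int main(void)
{
        /* Instead of a blind "set to FRAG_BIAS" justified by a comment,
         * add FRAG_BIAS and verify that the counter really was zero. */
        int refcnt = ref_add_return(FRAG_BIAS);

        assert(refcnt == FRAG_BIAS);
        return 0;
}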
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 11 +++++++++++
 mm/page_alloc.c          |  6 ++++--
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index fe4864f7f69c..03e21ce2f1bd 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -115,6 +115,17 @@ static inline void init_page_count(struct page *page)
 	set_page_count(page, 1);
 }
 
+static inline int page_ref_add_return(struct page *page, int nr)
+{
+	int old_val = atomic_fetch_add(nr, &page->_refcount);
+	int new_val = old_val + nr;
+
+	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, nr, new_val);
+	return new_val;
+}
+
 static inline void page_ref_add(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_add(nr, &page->_refcount);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index edfd6c81af82..b5554767b9de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5523,6 +5523,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 	unsigned int size = PAGE_SIZE;
 	struct page *page;
 	int offset;
+	int refcnt;
 
 	if (unlikely(!nc->va)) {
 refill:
@@ -5561,8 +5562,9 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 		/* if size can vary use size else just use PAGE_SIZE */
 		size = nc->size;
 #endif
-		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		/* page count is 0, set it to PAGE_FRAG_CACHE_MAX_SIZE + 1 */
+		refcnt = page_ref_add_return(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+		VM_BUG_ON_PAGE(refcnt != PAGE_FRAG_CACHE_MAX_SIZE + 1, page);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;

From patchwork Tue Dec 21 15:01:36 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v2 5/9] mm: rename init_page_count() -> page_ref_init()
Date: Tue, 21 Dec 2021 15:01:36 +0000
Message-Id: <20211221150140.988298-6-pasha.tatashin@soleen.com>
In-Reply-To: <20211221150140.988298-1-pasha.tatashin@soleen.com>
References: <20211221150140.988298-1-pasha.tatashin@soleen.com>

Now that set_page_count() is not called from outside anymore and is
about to be removed, init_page_count() is the only remaining function
that unconditionally sets _refcount, and it is restricted to setting it
to 1. Make init_page_count() consistent with the other page_ref_*
functions by renaming it.
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
---
 arch/m68k/mm/motorola.c  |  2 +-
 include/linux/mm.h       |  2 +-
 include/linux/page_ref.h | 10 +++++++---
 mm/page_alloc.c          |  2 +-
 4 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ecbe948f4c1a..dd3b77d03d5c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -133,7 +133,7 @@ void __init init_pointer_table(void *table, int type)
 
 	/* unreserve the page so it's possible to free that page */
 	__ClearPageReserved(PD_PAGE(dp));
-	init_page_count(PD_PAGE(dp));
+	page_ref_init(PD_PAGE(dp));
 
 	return;
 }

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d211a06784d5..fae3b6ef66a5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2451,7 +2451,7 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
 static inline void free_reserved_page(struct page *page)
 {
 	ClearPageReserved(page);
-	init_page_count(page);
+	page_ref_init(page);
 	__free_page(page);
 	adjust_managed_page_count(page, 1);
 }

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 03e21ce2f1bd..1af12a0d7ba1 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -107,10 +107,14 @@ static inline void folio_set_count(struct folio *folio, int v)
 }
 
 /*
- * Setup the page count before being freed into the page allocator for
- * the first time (boot or memory hotplug)
+ * Setup the page refcount to one before being freed into the page allocator.
+ * The memory might not be initialized and therefore there cannot be any
+ * assumptions about the current value of page->_refcount. This call should be
+ * done during boot when memory is being initialized, during memory hotplug
+ * when new memory is added, or when previously reserved memory is unreserved:
+ * these are the first times the kernel takes control of the given memory.
  */
-static inline void init_page_count(struct page *page)
+static inline void page_ref_init(struct page *page)
 {
 	set_page_count(page, 1);
 }

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 13d989d62012..000c057a2d24 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1569,7 +1569,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 {
 	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
-	init_page_count(page);
+	page_ref_init(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);

From patchwork Tue Dec 21 15:01:37 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v2 6/9] mm: remove set_page_count()
Date: Tue, 21 Dec 2021 15:01:37 +0000
Message-Id: <20211221150140.988298-7-pasha.tatashin@soleen.com>
In-Reply-To: <20211221150140.988298-1-pasha.tatashin@soleen.com>
References: <20211221150140.988298-1-pasha.tatashin@soleen.com>

set_page_count() is dangerous because it resets _refcount to an
arbitrary value. Instead, we now initialize _refcount to 1 only once,
and the rest of the time we use add/dec/cmpxchg so that the counter is
tracked continuously. Remove set_page_count() and add new tracing hooks
to page_ref_init().
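An editor's sketch of the resulting API shape in portable C11 (not part
of the patch; the function pointer is a simplified stand-in for the
kernel's tracepoint machinery): one initializer that unconditionally
stores 1, with every later change going through a checked
read-modify-write:

#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_int refcount;

/* Simplified stand-in for the page_ref_init tracepoint. */
static void (*ref_init_hook)(int new_val);

/* Models page_ref_init(): the only remaining unconditional write,
 * and it always stores 1. */
static void ref_init(void)
{
        atomic_store(&refcount, 1);
        if (ref_init_hook)      /* stands in for page_ref_tracepoint_active() */
                ref_init_hook(1);
}

/* Every later modification is a checked read-modify-write. */
static void ref_inc(void)
{
        int old_val = atomic_fetch_add(&refcount, 1);

        assert((unsigned int)(old_val + 1) >= (unsigned int)old_val);
}

int main(void)
{
        ref_init();
        ref_inc();
        return 0;
}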
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 27 ++++++++-----------
 include/trace/events/page_ref.h | 46 ++++++++++++++++++++++++++++-----
 mm/debug_page_ref.c             |  8 +++---
 3 files changed, 54 insertions(+), 27 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 1af12a0d7ba1..d7316881626c 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -7,7 +7,7 @@
 #include <linux/page-flags.h>
 #include <linux/tracepoint-defs.h>
 
-DECLARE_TRACEPOINT(page_ref_set);
+DECLARE_TRACEPOINT(page_ref_init);
 DECLARE_TRACEPOINT(page_ref_mod);
 DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
@@ -26,7 +26,7 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
  */
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
-extern void __page_ref_set(struct page *page, int v);
+extern void __page_ref_init(struct page *page);
 extern void __page_ref_mod(struct page *page, int v);
 extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
@@ -38,7 +38,7 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 
 #define page_ref_tracepoint_active(t) false
 
-static inline void __page_ref_set(struct page *page, int v)
+static inline void __page_ref_init(struct page *page)
 {
 }
 static inline void __page_ref_mod(struct page *page, int v)
@@ -94,18 +94,6 @@ static inline int page_count(const struct page *page)
 	return folio_ref_count(page_folio(page));
 }
 
-static inline void set_page_count(struct page *page, int v)
-{
-	atomic_set(&page->_refcount, v);
-	if (page_ref_tracepoint_active(page_ref_set))
-		__page_ref_set(page, v);
-}
-
-static inline void folio_set_count(struct folio *folio, int v)
-{
-	set_page_count(&folio->page, v);
-}
-
 /*
  * Setup the page refcount to one before being freed into the page allocator.
 * The memory might not be initialized and therefore there cannot be any
@@ -116,7 +104,14 @@ static inline void folio_set_count(struct folio *folio, int v)
  */
 static inline void page_ref_init(struct page *page)
 {
-	set_page_count(page, 1);
+	atomic_set(&page->_refcount, 1);
+	if (page_ref_tracepoint_active(page_ref_init))
+		__page_ref_init(page);
+}
+
+static inline void folio_ref_init(struct folio *folio)
+{
+	page_ref_init(&folio->page);
 }
 
 static inline int page_ref_add_return(struct page *page, int nr)

diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..87551bb1df9e 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -10,6 +10,45 @@
 #include <linux/tracepoint.h>
 #include <trace/events/mmflags.h>
 
+DECLARE_EVENT_CLASS(page_ref_init_template,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, pfn)
+		__field(unsigned long, flags)
+		__field(int, count)
+		__field(int, mapcount)
+		__field(void *, mapping)
+		__field(int, mt)
+		__field(int, val)
+	),
+
+	TP_fast_assign(
+		__entry->pfn = page_to_pfn(page);
+		__entry->flags = page->flags;
+		__entry->count = page_ref_count(page);
+		__entry->mapcount = page_mapcount(page);
+		__entry->mapping = page->mapping;
+		__entry->mt = get_pageblock_migratetype(page);
+	),
+
+	TP_printk("pfn=0x%lx flags=%s count=%d mapcount=%d mapping=%p mt=%d",
+		  __entry->pfn,
+		  show_page_flags(__entry->flags & PAGEFLAGS_MASK),
+		  __entry->count,
+		  __entry->mapcount, __entry->mapping, __entry->mt)
+);
+
+DEFINE_EVENT(page_ref_init_template, page_ref_init,
+
+	TP_PROTO(struct page *page),
+
+	TP_ARGS(page)
+);
+
 DECLARE_EVENT_CLASS(page_ref_mod_template,
 
 	TP_PROTO(struct page *page, int v),
 
@@ -44,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		  __entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_set,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
 
 	TP_PROTO(struct page *page, int v),

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index f3b2c9d3ece2..e32149734122 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -5,12 +5,12 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/page_ref.h>
 
-void __page_ref_set(struct page *page, int v)
+void __page_ref_init(struct page *page)
 {
-	trace_page_ref_set(page, v);
+	trace_page_ref_init(page);
 }
-EXPORT_SYMBOL(__page_ref_set);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_set);
+EXPORT_SYMBOL(__page_ref_init);
+EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
 void __page_ref_mod(struct page *page, int v)
 {

From patchwork Tue Dec 21 15:01:38 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v2 7/9] mm: simplify page_ref_* functions
Date: Tue, 21 Dec 2021 15:01:38 +0000
Message-Id: <20211221150140.988298-8-pasha.tatashin@soleen.com>
In-Reply-To: <20211221150140.988298-1-pasha.tatashin@soleen.com>
References: <20211221150140.988298-1-pasha.tatashin@soleen.com>

Now that we are using atomic_fetch_* variants to add/sub/inc/dec the
page _refcount, it makes sense to combine the page_ref_* return and
non-return functions. Also remove some extra trace points for the
non-return variants. This improves traceability by always recording the
new _refcount value after the modification has occurred.
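An editor's sketch of the structure in portable C11 (not part of the
patch): a single checked implementation that returns the new value,
with the void and boolean variants as thin wrappers around it, so every
variant records the same information:

#include <assert.h>
#include <stdatomic.h>

static atomic_int refcount;

/* The single checked implementation, modeled on page_ref_dec_return(). */
static int ref_dec_return(void)
{
        int old_val = atomic_fetch_sub(&refcount, 1);
        int new_val = old_val - 1;

        assert((unsigned int)new_val <= (unsigned int)old_val);
        return new_val;
}

/* The non-return variant simply discards the value... */
static void ref_dec(void)
{
        ref_dec_return();
}

/* ...and the test variant compares it against zero. */
static int ref_dec_and_test(void)
{
        return ref_dec_return() == 0;
}

int main(void)
{
        atomic_store(&refcount, 2);
        ref_dec();
        assert(ref_dec_and_test());
        return 0;
}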
Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h        | 102 +++++++++-----------------------
 include/trace/events/page_ref.h |  18 +-----
 mm/debug_page_ref.c             |  14 -----
 3 files changed, 31 insertions(+), 103 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index d7316881626c..243fc60ae6c8 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -8,8 +8,6 @@
 #include <linux/tracepoint-defs.h>
 
 DECLARE_TRACEPOINT(page_ref_init);
-DECLARE_TRACEPOINT(page_ref_mod);
-DECLARE_TRACEPOINT(page_ref_mod_and_test);
 DECLARE_TRACEPOINT(page_ref_mod_and_return);
 DECLARE_TRACEPOINT(page_ref_mod_unless);
 DECLARE_TRACEPOINT(page_ref_freeze);
@@ -27,8 +25,6 @@ DECLARE_TRACEPOINT(page_ref_unfreeze);
 #define page_ref_tracepoint_active(t) tracepoint_enabled(t)
 
 extern void __page_ref_init(struct page *page);
-extern void __page_ref_mod(struct page *page, int v);
-extern void __page_ref_mod_and_test(struct page *page, int v, int ret);
 extern void __page_ref_mod_and_return(struct page *page, int v, int ret);
 extern void __page_ref_mod_unless(struct page *page, int v, int u);
 extern void __page_ref_freeze(struct page *page, int v, int ret);
@@ -41,12 +37,6 @@ extern void __page_ref_unfreeze(struct page *page, int v);
 
 static inline void __page_ref_init(struct page *page)
 {
 }
-static inline void __page_ref_mod(struct page *page, int v)
-{
-}
-static inline void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-}
 static inline void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 }
@@ -127,12 +117,7 @@ static inline int page_ref_add_return(struct page *page, int nr)
 
 static inline void page_ref_add(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_add(nr, &page->_refcount);
-	int new_val = old_val + nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, nr);
+	page_ref_add_return(page, nr);
 }
 
 static inline void folio_ref_add(struct folio *folio, int nr)
@@ -140,30 +125,25 @@ static inline void folio_ref_add(struct folio *folio, int nr)
 	page_ref_add(&folio->page, nr);
 }
 
-static inline void page_ref_sub(struct page *page, int nr)
+static inline int page_ref_sub_return(struct page *page, int nr)
 {
 	int old_val = atomic_fetch_sub(nr, &page->_refcount);
 	int new_val = old_val - nr;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -nr);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -nr, new_val);
+	return new_val;
 }
 
-static inline void folio_ref_sub(struct folio *folio, int nr)
+static inline void page_ref_sub(struct page *page, int nr)
 {
-	page_ref_sub(&folio->page, nr);
+	page_ref_sub_return(page, nr);
 }
 
-static inline int page_ref_sub_return(struct page *page, int nr)
+static inline void folio_ref_sub(struct folio *folio, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -nr, new_val);
-	return new_val;
+	page_ref_sub(&folio->page, nr);
 }
 
 static inline int folio_ref_sub_return(struct folio *folio, int nr)
@@ -171,14 +151,20 @@ static inline int folio_ref_sub_return(struct folio *folio, int nr)
 	return page_ref_sub_return(&folio->page, nr);
 }
 
-static inline void page_ref_inc(struct page *page)
+static inline int page_ref_inc_return(struct page *page)
 {
 	int old_val = atomic_fetch_inc(&page->_refcount);
 	int new_val = old_val + 1;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, 1);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, 1, new_val);
+	return new_val;
+}
+
+static inline void page_ref_inc(struct page *page)
+{
+	page_ref_inc_return(page);
 }
 
 static inline void folio_ref_inc(struct folio *folio)
@@ -186,14 +172,20 @@ static inline void folio_ref_inc(struct folio *folio)
 	page_ref_inc(&folio->page);
 }
 
-static inline void page_ref_dec(struct page *page)
+static inline int page_ref_dec_return(struct page *page)
 {
 	int old_val = atomic_fetch_dec(&page->_refcount);
 	int new_val = old_val - 1;
 
 	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod))
-		__page_ref_mod(page, -1);
+	if (page_ref_tracepoint_active(page_ref_mod_and_return))
+		__page_ref_mod_and_return(page, -1, new_val);
+	return new_val;
+}
+
+static inline void page_ref_dec(struct page *page)
+{
+	page_ref_dec_return(page);
 }
 
 static inline void folio_ref_dec(struct folio *folio)
@@ -203,14 +195,7 @@ static inline void folio_ref_dec(struct folio *folio)
 
 static inline int page_ref_sub_and_test(struct page *page, int nr)
 {
-	int old_val = atomic_fetch_sub(nr, &page->_refcount);
-	int new_val = old_val - nr;
-	int ret = new_val == 0;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -nr, ret);
-	return ret;
+	return page_ref_sub_return(page, nr) == 0;
 }
 
 static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
@@ -218,17 +203,6 @@ static inline int folio_ref_sub_and_test(struct folio *folio, int nr)
 	return page_ref_sub_and_test(&folio->page, nr);
 }
 
-static inline int page_ref_inc_return(struct page *page)
-{
-	int old_val = atomic_fetch_inc(&page->_refcount);
-	int new_val = old_val + 1;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val < (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, 1, new_val);
-	return new_val;
-}
-
 static inline int folio_ref_inc_return(struct folio *folio)
 {
 	return page_ref_inc_return(&folio->page);
@@ -236,14 +210,7 @@ static inline int folio_ref_inc_return(struct folio *folio)
 
 static inline int page_ref_dec_and_test(struct page *page)
 {
-	int old_val = atomic_fetch_dec(&page->_refcount);
-	int new_val = old_val - 1;
-	int ret = new_val == 0;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_test))
-		__page_ref_mod_and_test(page, -1, ret);
-	return ret;
+	return page_ref_dec_return(page) == 0;
 }
 
 static inline int folio_ref_dec_and_test(struct folio *folio)
@@ -251,17 +218,6 @@ static inline int folio_ref_dec_and_test(struct folio *folio)
 	return page_ref_dec_and_test(&folio->page);
 }
 
-static inline int page_ref_dec_return(struct page *page)
-{
-	int old_val = atomic_fetch_dec(&page->_refcount);
-	int new_val = old_val - 1;
-
-	VM_BUG_ON_PAGE((unsigned int)new_val > (unsigned int)old_val, page);
-	if (page_ref_tracepoint_active(page_ref_mod_and_return))
-		__page_ref_mod_and_return(page, -1, new_val);
-	return new_val;
-}
-
 static inline int folio_ref_dec_return(struct folio *folio)
 {
 	return page_ref_dec_return(&folio->page);
diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 87551bb1df9e..35cd795aa7c6 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -49,7 +49,7 @@ DEFINE_EVENT(page_ref_init_template, page_ref_init,
 	TP_ARGS(page)
 );
 
-DECLARE_EVENT_CLASS(page_ref_mod_template,
+DECLARE_EVENT_CLASS(page_ref_unfreeze_template,
 
 	TP_PROTO(struct page *page, int v),
 
@@ -83,13 +83,6 @@ DECLARE_EVENT_CLASS(page_ref_unfreeze_template,
 		  __entry->val)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_mod,
-
-	TP_PROTO(struct page *page, int v),
-
-	TP_ARGS(page, v)
-);
-
 DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
@@ -126,13 +119,6 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		  __entry->val, __entry->ret)
 );
 
-DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_test,
-
-	TP_PROTO(struct page *page, int v, int ret),
-
-	TP_ARGS(page, v, ret)
-);
-
 DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_mod_and_return,
 
 	TP_PROTO(struct page *page, int v, int ret),
 
@@ -154,7 +140,7 @@ DEFINE_EVENT(page_ref_mod_and_test_template, page_ref_freeze,
 	TP_ARGS(page, v, ret)
 );
 
-DEFINE_EVENT(page_ref_mod_template, page_ref_unfreeze,
+DEFINE_EVENT(page_ref_unfreeze_template, page_ref_unfreeze,
 
 	TP_PROTO(struct page *page, int v),

diff --git a/mm/debug_page_ref.c b/mm/debug_page_ref.c
index e32149734122..1de9d93cca25 100644
--- a/mm/debug_page_ref.c
+++ b/mm/debug_page_ref.c
@@ -12,20 +12,6 @@ void __page_ref_init(struct page *page)
 EXPORT_SYMBOL(__page_ref_init);
 EXPORT_TRACEPOINT_SYMBOL(page_ref_init);
 
-void __page_ref_mod(struct page *page, int v)
-{
-	trace_page_ref_mod(page, v);
-}
-EXPORT_SYMBOL(__page_ref_mod);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod);
-
-void __page_ref_mod_and_test(struct page *page, int v, int ret)
-{
-	trace_page_ref_mod_and_test(page, v, ret);
-}
-EXPORT_SYMBOL(__page_ref_mod_and_test);
-EXPORT_TRACEPOINT_SYMBOL(page_ref_mod_and_test);
-
 void __page_ref_mod_and_return(struct page *page, int v, int ret)
 {
 	trace_page_ref_mod_and_return(page, v, ret);

From patchwork Tue Dec 21 15:01:39 2021
From: Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: [PATCH v2 8/9] mm: do not use atomic_set_release in page_ref_unfreeze()
Date: Tue, 21 Dec 2021 15:01:39 +0000
Message-Id: <20211221150140.988298-9-pasha.tatashin@soleen.com>
In-Reply-To: <20211221150140.988298-1-pasha.tatashin@soleen.com>
References: <20211221150140.988298-1-pasha.tatashin@soleen.com>

In page_ref_unfreeze() we set the new _refcount value after verifying
that the old value was indeed 0:

	VM_BUG_ON_PAGE(page_count(page) != 0, page);
	< the _refcount may change here >
	atomic_set_release(&page->_refcount, count);

To avoid the small gap in which _refcount may change, verify the old
value at the time of the set operation: use atomic_xchg_release() and
check at set time that the exchanged value was 0.
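An editor's sketch of the difference in portable C11 (not part of the
patch): atomic_exchange_explicit() returns the value it replaced, so
the zero check and the store can no longer be separated by a concurrent
update:

#include <assert.h>
#include <stdatomic.h>

static atomic_int refcount;

/* Models the new page_ref_unfreeze(): publish the new count with
 * release ordering and verify, from the exchange's return value, that
 * the counter really was still frozen at 0. */
static void ref_unfreeze(int count)
{
        int old_val = atomic_exchange_explicit(&refcount, count,
                                               memory_order_release);

        assert(count != 0 && old_val == 0);
}

int main(void)
{
        /* refcount is 0 (frozen); hand out two references atomically */
        ref_unfreeze(2);
        return 0;
}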
VM_BUG_ON_PAGE(page_count(page) != 0, page);
< the _refcount may change here>
atomic_set_release(&page->_refcount, count);

To avoid the small window in which _refcount may change, verify the value of _refcount at the time of the set operation: use atomic_xchg_release(), and check at set time that the old value was 0.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 243fc60ae6c8..9efabeff4e06 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -322,10 +322,9 @@ static inline int folio_ref_freeze(struct folio *folio, int count)
 
 static inline void page_ref_unfreeze(struct page *page, int count)
 {
-	VM_BUG_ON_PAGE(page_count(page) != 0, page);
-	VM_BUG_ON(count == 0);
+	int old_val = atomic_xchg_release(&page->_refcount, count);
 
-	atomic_set_release(&page->_refcount, count);
+	VM_BUG_ON_PAGE(count == 0 || old_val != 0, page);
 	if (page_ref_tracepoint_active(page_ref_unfreeze))
 		__page_ref_unfreeze(page, count);
 }
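To see the race in isolation, here is a small userspace C11 sketch (illustrative names, not the kernel code). The check-then-set variant leaves a window between the load and the store; the exchange variant reads the old value and installs the new one in a single atomic step, so a concurrent modification cannot slip through unnoticed:

#include <assert.h>
#include <stdatomic.h>

static atomic_int refcount;	/* stands in for page->_refcount */

/* Racy variant: refcount may change between the check and the store,
 * and the store then silently overwrites it. */
static void unfreeze_check_then_set(int count)
{
	assert(atomic_load(&refcount) == 0);
	/* <- another thread may change refcount here -> */
	atomic_store_explicit(&refcount, count, memory_order_release);
}

/* Patched variant: the swap returns the old value atomically, so the
 * "was it really 0?" check covers the instant of the set itself. */
static void unfreeze_xchg(int count)
{
	int old = atomic_exchange_explicit(&refcount, count,
					   memory_order_release);
	assert(count != 0 && old == 0);
}

int main(void)
{
	atomic_init(&refcount, 0);	/* frozen: refcount == 0 */
	unfreeze_check_then_set(1);	/* racy under concurrency */
	atomic_store(&refcount, 0);	/* freeze again */
	unfreeze_xchg(1);		/* old value checked atomically */
	return 0;
}

memory_order_release on the exchange plays the role of atomic_xchg_release(): stores made while the page was frozen must become visible before the new nonzero refcount does.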
From patchwork Tue Dec 21 15:01:40 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12689819
From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-m68k@lists.linux-m68k.org, anshuman.khandual@arm.com, willy@infradead.org, akpm@linux-foundation.org, william.kucharski@oracle.com, mike.kravetz@oracle.com, vbabka@suse.cz, geert@linux-m68k.org, schmitzmic@gmail.com, rostedt@goodmis.org, mingo@redhat.com, hannes@cmpxchg.org, guro@fb.com, songmuchun@bytedance.com, weixugc@google.com, gthelen@google.com, rientjes@google.com, pjt@google.com
Subject: [PATCH v2 9/9] mm: use atomic_cmpxchg_acquire in page_ref_freeze().
Date: Tue, 21 Dec 2021 15:01:40 +0000
Message-Id: <20211221150140.988298-10-pasha.tatashin@soleen.com>
In-Reply-To: <20211221150140.988298-1-pasha.tatashin@soleen.com>
References: <20211221150140.988298-1-pasha.tatashin@soleen.com>

page_ref_freeze() and page_ref_unfreeze() are designed to be used as a pair: they protect critical sections where struct page can be modified.

page_ref_unfreeze() uses a _release() atomic operation, but page_ref_freeze() does not, on the assumption that the plain cmpxchg provides a full barrier. Instead, use the matching atomic_cmpxchg_acquire() so that the memory model is followed explicitly.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 include/linux/page_ref.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 9efabeff4e06..45be731d8919 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -308,7 +308,8 @@ static inline bool folio_try_get_rcu(struct folio *folio)
 
 static inline int page_ref_freeze(struct page *page, int count)
 {
-	int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+	int old_val = atomic_cmpxchg_acquire(&page->_refcount, count, 0);
+	int ret = likely(old_val == count);
 
 	if (page_ref_tracepoint_active(page_ref_freeze))
 		__page_ref_freeze(page, count, ret);