From patchwork Fri May  1 21:05:18 2020
X-Patchwork-Submitter: Dave Hansen <dave.hansen@linux.intel.com>
X-Patchwork-Id: 11523275
Subject: [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check
To: linux-kernel@vger.kernel.org
Cc: Dave Hansen <dave.hansen@linux.intel.com>, npiggin@gmail.com,
    akpm@linux-foundation.org, willy@infradead.org,
    yang.shi@linux.alibaba.com, linux-mm@kvack.org
From: Dave Hansen <dave.hansen@linux.intel.com>
Date: Fri, 01 May 2020 14:05:18 -0700
References: <20200501210516.DFAFF456@viggo.jf.intel.com>
In-Reply-To: <20200501210516.DFAFF456@viggo.jf.intel.com>
Message-Id: <20200501210518.DA161B7E@viggo.jf.intel.com>

From: Dave Hansen <dave.hansen@linux.intel.com>

This is not a bug fix.  It was found by inspection, but I believe that
it is confusing as it stands.
First, page_ref_freeze() is implemented internally with:

	atomic_cmpxchg(&page->_refcount, expected, 0) == expected

The "cmp" part of cmpxchg is making sure that _refcount==expected,
which means that there is an implicit check here, equivalent to:

	page_count(page) == expected_count

This appears to have originated in e286781 ("mm: speculative page
references"), which is pretty ancient.  This check is also somewhat
dangerous to have here because it might lead someone to think that
page_ref_freeze() *doesn't* do its own page_count() checking.

Remove the unnecessary check.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---

 b/mm/migrate.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff -puN mm/migrate.c~remove_extra_page_count_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_page_count_check	2020-05-01 14:00:42.331525924 -0700
+++ b/mm/migrate.c	2020-05-01 14:00:42.336525924 -0700
@@ -425,11 +425,12 @@ int migrate_page_move_mapping(struct add
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
-	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+	if (xas_load(&xas) != page) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
+	/* Freezing will fail if page_count()!=expected_count */
 	if (!page_ref_freeze(page, expected_count)) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;