From patchwork Fri May 1 21:05:18 2020
X-Patchwork-Submitter: Dave Hansen <dave.hansen@linux.intel.com>
X-Patchwork-Id: 11523275
Subject: [RFC][PATCH 1/2] mm/migrate: remove extra page_count() check
From: Dave Hansen <dave.hansen@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Dave Hansen <dave.hansen@linux.intel.com>, npiggin@gmail.com,
 akpm@linux-foundation.org, willy@infradead.org,
 yang.shi@linux.alibaba.com, linux-mm@kvack.org
Date: Fri, 01 May 2020 14:05:18 -0700
Message-Id: <20200501210518.DA161B7E@viggo.jf.intel.com>
In-Reply-To: <20200501210516.DFAFF456@viggo.jf.intel.com>
References: <20200501210516.DFAFF456@viggo.jf.intel.com>

From: Dave Hansen <dave.hansen@linux.intel.com>

This is not a bug fix.  It was found by inspection, but I believe
the check is confusing as it stands.

First, page_ref_freeze() is implemented internally with:

	atomic_cmpxchg(&page->_refcount, expected, 0) == expected

The "cmp" part of cmpxchg makes sure that _refcount==expected,
which means there is an implicit check here, equivalent to:

	page_count(page) == expected_count
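
For reference, here is a minimal sketch of the helper as it appears in
include/linux/page_ref.h around this era (quoted from memory, so worth
double-checking against the exact tree):

	/* Freeze the refcount to zero iff it is currently 'count'. */
	static inline int page_ref_freeze(struct page *page, int count)
	{
		return likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
	}

The cmpxchg can only succeed, and the refcount can only be frozen to
zero, if _refcount was exactly 'count' at the moment of the exchange.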
This check appears to have originated in commit e286781 ("mm:
speculative page references"), which is pretty ancient.  The explicit
check is also somewhat dangerous to have here because it might lead
someone to think that page_ref_freeze() *doesn't* do its own
page_count() checking.

Remove the unnecessary check.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---

 b/mm/migrate.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff -puN mm/migrate.c~remove_extra_page_count_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_page_count_check	2020-05-01 14:00:42.331525924 -0700
+++ b/mm/migrate.c	2020-05-01 14:00:42.336525924 -0700
@@ -425,11 +425,12 @@ int migrate_page_move_mapping(struct add
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
-	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+	if (xas_load(&xas) != page) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
+	/* Freezing will fail if page_count() != expected_count */
 	if (!page_ref_freeze(page, expected_count)) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;

From patchwork Fri May 1 21:05:20 2020
X-Patchwork-Submitter: Dave Hansen <dave.hansen@linux.intel.com>
X-Patchwork-Id: 11523277
Subject: [RFC][PATCH 2/2] mm/migrate: annotate possible unnecessary
 xas_load()
From: Dave Hansen <dave.hansen@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: Dave Hansen <dave.hansen@linux.intel.com>, npiggin@gmail.com,
 akpm@linux-foundation.org, willy@infradead.org,
 yang.shi@linux.alibaba.com, linux-mm@kvack.org
Date: Fri, 01 May 2020 14:05:20 -0700
Message-Id: <20200501210520.6B29706C@viggo.jf.intel.com>
In-Reply-To: <20200501210516.DFAFF456@viggo.jf.intel.com>
References: <20200501210516.DFAFF456@viggo.jf.intel.com>

From: Dave Hansen <dave.hansen@linux.intel.com>

The xas_load() in question also originated in commit e286781 ("mm:
speculative page references"), as a radix_tree_deref_slot(), the only
one in the tree at the time.  I'm thoroughly confused why it is
needed, though.  A page's slot in the page cache should be stabilized
by the lock_page() held across this code.

So, first of all, add a VM_WARN_ON_ONCE() to make it totally clear
that the page is locked.

But, even if the page was truncated, we would normally detect that
with a:

	page_mapping(page) != mapping

check.  The xas_load() would therefore seem to imply that we are
looking for some kind of state change that can happen to the xarray
slot for a page without changing page->mapping.  I'm at a loss for
what that might be.  Stick a WARN_ON_ONCE() in there to see if we
ever actually hit this.
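
For comparison, the usual lock-and-recheck idiom for catching
truncation looks roughly like this (a hand-written sketch of the
pattern, not copied from any single call site):

	lock_page(page);
	/*
	 * Truncation clears (or changes) page->mapping under the page
	 * lock, so a stale page is caught by this comparison.
	 */
	if (page_mapping(page) != mapping) {
		/* The page was truncated while it was unlocked. */
		unlock_page(page);
		return -EAGAIN;
	}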
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---

 b/mm/migrate.c |    8 ++++++++
 1 file changed, 8 insertions(+)

diff -puN mm/migrate.c~remove_extra_xas_load_check mm/migrate.c
--- a/mm/migrate.c~remove_extra_xas_load_check	2020-05-01 14:00:43.377525921 -0700
+++ b/mm/migrate.c	2020-05-01 14:00:43.381525921 -0700
@@ -407,6 +407,8 @@ int migrate_page_move_mapping(struct add
 	int dirty;
 	int expected_count = expected_page_refs(mapping, page) + extra_count;
 
+	VM_WARN_ON_ONCE(!PageLocked(page));
+
 	if (!mapping) {
 		/* Anonymous page without mapping */
 		if (page_count(page) != expected_count)
@@ -425,7 +427,13 @@ int migrate_page_move_mapping(struct add
 	newzone = page_zone(newpage);
 
 	xas_lock_irq(&xas);
+	/*
+	 * 'mapping' was established under the page lock, which
+	 * prevents the xarray slot for 'page' from being changed.
+	 * Thus, xas_load() failure here is unexpected.
+	 */
 	if (xas_load(&xas) != page) {
+		WARN_ON_ONCE(1);
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
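
For context that the hunks above do not show: near the top of
migrate_page_move_mapping(), the xarray cursor is set up roughly like
this (paraphrased from v5.7-era mm/migrate.c, not part of the patch):

	XA_STATE(xas, &mapping->i_pages, page_index(page));

so the xas_load(&xas) above re-looks-up the page's slot in
mapping->i_pages after xas_lock_irq(&xas) is taken; that lookup is
what the new comment and WARN_ON_ONCE() annotate.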