From patchwork Wed Jul 15 13:50:11 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "kirill.shutemov@linux.intel.com"
X-Patchwork-Id: 11665373
From: "Kirill A. Shutemov"
To: Andrew Morton
Cc: Linus Torvalds, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A.
Shutemov", Naresh Kamboju, Joel Fernandes, William Kucharski
Subject: [PATCHv2] mm: Fix warning in move_normal_pmd()
Date: Wed, 15 Jul 2020 16:50:11 +0300
Message-Id: <20200715135011.42743-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0

mremap(2) does not allow the source and destination regions to overlap, but
shift_arg_pages() calls move_page_tables() directly, and in that case the
source and destination often do overlap. This confuses move_normal_pmd():

  WARNING: CPU: 3 PID: 27091 at mm/mremap.c:211 move_page_tables+0x6ef/0x720

move_normal_pmd() expects the destination PMD to be empty, but when the
ranges overlap nobody removes the PTE page tables on the source side:
move_ptes() only clears PTE entries and leaves the tables behind. Once the
source PMD becomes a destination and the alignment and size are right, we
trip the warning.

The warning is harmless: the kernel correctly falls back to moving the
entries one at a time. The fix is to avoid move_normal_pmd() when the
source and destination ranges overlap.

Signed-off-by: Kirill A.
Shutemov Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions") Link: https://lore.kernel.org/lkml/20200713025354.GB3644504@google.com/ Reported-and-tested-by: Naresh Kamboju Reviewed-by: Joel Fernandes (Google) Cc: William Kucharski --- mm/mremap.c | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/mm/mremap.c b/mm/mremap.c index 5dd572d57ca9..340a96a29cbb 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -245,6 +245,26 @@ unsigned long move_page_tables(struct vm_area_struct *vma, unsigned long extent, next, old_end; struct mmu_notifier_range range; pmd_t *old_pmd, *new_pmd; + bool overlaps; + + /* + * shift_arg_pages() can call move_page_tables() on overlapping ranges. + * In this case we cannot use move_normal_pmd() because destination pmd + * might be established page table: move_ptes() doesn't free page + * table. + */ + if (old_addr > new_addr) { + overlaps = old_addr - new_addr < len; + } else { + overlaps = new_addr - old_addr < len; + + /* + * We are interating over ranges forward. It means we cannot + * handle overlapping ranges with new_addr > old_addr without + * risking data corruption. Don't do this. + */ + WARN_ON(overlaps); + } old_end = old_addr + len; flush_cache_range(vma, old_addr, old_end); @@ -282,7 +302,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma, split_huge_pmd(vma, old_pmd, old_addr); if (pmd_trans_unstable(old_pmd)) continue; - } else if (extent == PMD_SIZE) { + } else if (!overlaps && extent == PMD_SIZE) { #ifdef CONFIG_HAVE_MOVE_PMD /* * If the extent is PMD-sized, try to speed the move by