From patchwork Fri Jan 21 07:55:11 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12719427
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Muchun Song
Subject: [PATCH 1/5] mm: rmap: fix cache flush on THP pages
Date: Fri, 21 Jan 2022 15:55:11 +0800
Message-Id: <20220121075515.79311-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache, so
it covers only the head page of a THP, not the whole compound page.
Replace it with flush_cache_range() to fix this issue.  No problems have
been observed from this so far, perhaps because architectures with
virtually indexed caches are rare.

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song
---
 mm/rmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index b0fd9dc19eba..65670cb805d6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -974,7 +974,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
                         if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
                                 continue;
 
-                        flush_cache_page(vma, address, page_to_pfn(page));
+                        flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
                         entry = pmdp_invalidate(vma, address, pmd);
                         entry = pmd_wrprotect(entry);
                         entry = pmd_mkclean(entry);
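
For illustration only (a sketch under assumptions, not part of the posted
patch): on an architecture with virtually indexed caches, every subpage of a
PMD-mapped THP has to be flushed.  The hypothetical helper below, which
assumes CONFIG_TRANSPARENT_HUGEPAGE so that HPAGE_PMD_NR/HPAGE_PMD_SIZE are
defined, spells out what the single flush_cache_range() call in the hunk
above collapses to, and what one flush_cache_page() call cannot do.

#include <linux/mm.h>
#include <linux/huge_mm.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical helper: flush the user mapping of a PMD-mapped THP by hand.
 * flush_cache_page() alone only covers PAGE_SIZE at @address, i.e. the
 * head page; the loop covers all HPAGE_PMD_NR subpages, which is what
 * flush_cache_range(vma, address, address + HPAGE_PMD_SIZE) does in one go.
 */
static void flush_pmd_mapped_thp(struct vm_area_struct *vma,
                                 unsigned long address, struct page *head)
{
        unsigned int i;

        for (i = 0; i < HPAGE_PMD_NR; i++)
                flush_cache_page(vma, address + i * PAGE_SIZE,
                                 page_to_pfn(head) + i);
}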
From patchwork Fri Jan 21 07:55:12 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12719428
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Muchun Song
Subject: [PATCH 2/5] dax: fix cache flush on PMD-mapped pages
Date: Fri, 21 Jan 2022 15:55:12 +0800
Message-Id: <20220121075515.79311-2-songmuchun@bytedance.com>
In-Reply-To: <20220121075515.79311-1-songmuchun@bytedance.com>
References: <20220121075515.79311-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache, so
it covers only the head page of a PMD-mapped huge page, not the whole
mapping.  Replace it with flush_cache_range() to fix this issue.

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
---
 fs/dax.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 88be1c02a151..2955ec65eb65 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -857,7 +857,7 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
                         if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
                                 goto unlock_pmd;
 
-                        flush_cache_page(vma, address, pfn);
+                        flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
                         pmd = pmdp_invalidate(vma, address, pmdp);
                         pmd = pmd_wrprotect(pmd);
                         pmd = pmd_mkclean(pmd);
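
For illustration only (an assumption-laden sketch, not code from this
patch): in the PMD branch of dax_entry_mkclean() the user virtual address is
the PMD-aligned start of the mapping, so the flush has to span the whole
HPAGE_PMD_SIZE mapping rather than a single page.  A hypothetical helper
mirroring the fixed hunk:

#include <linux/mm.h>
#include <linux/huge_mm.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical helper (assumes CONFIG_FS_DAX_PMD): flush the user mapping
 * of a PMD-mapped DAX entry.  @address is assumed to be the PMD-aligned
 * start of the mapping in @vma, as in dax_entry_mkclean().
 */
static void dax_flush_pmd_mapping(struct vm_area_struct *vma,
                                  unsigned long address)
{
        WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
        flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
}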
From patchwork Fri Jan 21 07:55:13 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12719429
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Muchun Song
Subject: [PATCH 3/5] mm: page_vma_mapped: support checking if a pfn is mapped into a vma
Date: Fri, 21 Jan 2022 15:55:13 +0800
Message-Id: <20220121075515.79311-3-songmuchun@bytedance.com>
In-Reply-To: <20220121075515.79311-1-songmuchun@bytedance.com>
References: <20220121075515.79311-1-songmuchun@bytedance.com>

page_vma_mapped_walk() is supposed to check if a page is mapped into a vma.
However, not all page frames (e.g. PFN_DEV) have an associated struct page.
Anyone who wants to check whether a pfn (without a struct page) is mapped
into a vma would have to duplicate much of this function.  So add support
for checking if a pfn is mapped into a vma.  The next patch will use this
new feature in the DAX code.

Signed-off-by: Muchun Song
---
 include/linux/rmap.h | 13 +++++++++--
 mm/internal.h        | 25 +++++++++++++-------
 mm/page_vma_mapped.c | 65 +++++++++++++++++++++++++++++++++-------------------
 3 files changed, 70 insertions(+), 33 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 221c3c6438a7..7628474732e7 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -204,9 +204,18 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 #define PVMW_SYNC              (1 << 0)
 /* Look for migarion entries rather than present PTEs */
 #define PVMW_MIGRATION         (1 << 1)
+/* Walk the page table by checking the pfn instead of a struct page */
+#define PVMW_PFN_WALK          (1 << 2)
 
 struct page_vma_mapped_walk {
-        struct page *page;
+        union {
+                struct page *page;
+                struct {
+                        unsigned long pfn;
+                        unsigned int nr;
+                        pgoff_t index;
+                };
+        };
         struct vm_area_struct *vma;
         unsigned long address;
         pmd_t *pmd;
@@ -218,7 +227,7 @@ struct page_vma_mapped_walk {
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
         /* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
-        if (pvmw->pte && !PageHuge(pvmw->page))
+        if (pvmw->pte && ((pvmw->flags & PVMW_PFN_WALK) || !PageHuge(pvmw->page)))
                 pte_unmap(pvmw->pte);
         if (pvmw->ptl)
                 spin_unlock(pvmw->ptl);
diff --git a/mm/internal.h b/mm/internal.h
index deb9bda18e59..d6e3e8e1be2d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -478,25 +478,34 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
- * Then at what user virtual address will none of the page be found in vma?
- * Assumes that vma_address() already returned a good starting address.
- * If page is a compound head, the entire compound page is considered.
+ * Return the end of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address_end(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address_end(pgoff_t pgoff, unsigned long nr_pages,
+                      struct vm_area_struct *vma)
 {
-        pgoff_t pgoff;
         unsigned long address;
 
-        VM_BUG_ON_PAGE(PageKsm(page), page);    /* KSM page->index unusable */
-        pgoff = page_to_pgoff(page) + compound_nr(page);
-        address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+        address = vma->vm_start + ((pgoff + nr_pages - vma->vm_pgoff) << PAGE_SHIFT);
         /* Check for address beyond vma (or wrapped through 0?) */
         if (address < vma->vm_start || address > vma->vm_end)
                 address = vma->vm_end;
         return address;
 }
 
+/*
+ * Then at what user virtual address will none of the page be found in vma?
+ * Assumes that vma_address() already returned a good starting address.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address_end(struct page *page, struct vm_area_struct *vma)
+{
+        VM_BUG_ON_PAGE(PageKsm(page), page);    /* KSM page->index unusable */
+        return vma_pgoff_address_end(page_to_pgoff(page), compound_nr(page), vma);
+}
+
 static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
                                                     struct file *fpin)
 {
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index f7b331081791..c8819770d457 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -53,10 +53,16 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
         return true;
 }
 
-static inline bool pfn_is_match(struct page *page, unsigned long pfn)
+static inline bool pfn_is_match(struct page_vma_mapped_walk *pvmw, unsigned long pfn)
 {
-        unsigned long page_pfn = page_to_pfn(page);
+        struct page *page;
+        unsigned long page_pfn;
+
+        if (pvmw->flags & PVMW_PFN_WALK)
+                return pfn >= pvmw->pfn && pfn - pvmw->pfn < pvmw->nr;
+
+        page = pvmw->page;
+        page_pfn = page_to_pfn(page);
 
         /* normal page and hugetlbfs page */
         if (!PageTransCompound(page) || PageHuge(page))
                 return page_pfn == pfn;
@@ -116,7 +122,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
                 pfn = pte_pfn(*pvmw->pte);
         }
 
-        return pfn_is_match(pvmw->page, pfn);
+        return pfn_is_match(pvmw, pfn);
 }
 
 static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
@@ -127,24 +133,24 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
 }
 
 /**
- * page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at
- * @pvmw->address
- * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
- * must be set. pmd, pte and ptl must be NULL.
+ * page_vma_mapped_walk - check if @pvmw->page or @pvmw->pfn is mapped in
+ * @pvmw->vma at @pvmw->address
+ * @pvmw: pointer to struct page_vma_mapped_walk. page (or pfn and nr and
+ * index), vma, address and flags must be set. pmd, pte and ptl must be NULL.
  *
- * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
- * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
- * adjusted if needed (for PTE-mapped THPs).
+ * Returns true if the page or pfn is mapped in the vma. @pvmw->pmd and
+ * @pvmw->pte point to relevant page table entries. @pvmw->ptl is locked.
+ * @pvmw->address is adjusted if needed (for PTE-mapped THPs).
  *
  * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
- * (usually THP). For PTE-mapped THP, you should run page_vma_mapped_walk() in
- * a loop to find all PTEs that map the THP.
+ * (usually THP or Huge DEVMAP). For PMD-mapped page, you should run
+ * page_vma_mapped_walk() in a loop to find all PTEs that map the huge page.
  *
  * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
 * regardless of which page table level the page is mapped at. @pvmw->pmd is
 * NULL.
 *
- * Returns false if there are no more page table entries for the page in
+ * Returns false if there are no more page table entries for the page or pfn in
 * the vma. @pvmw->ptl is unlocked and @pvmw->pte is unmapped.
 *
 * If you need to stop the walk before page_vma_mapped_walk() returned false,
@@ -153,18 +159,27 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 {
         struct mm_struct *mm = pvmw->vma->vm_mm;
-        struct page *page = pvmw->page;
+        struct page *page;
         unsigned long end;
+        unsigned long pfn;
         pgd_t *pgd;
         p4d_t *p4d;
         pud_t *pud;
         pmd_t pmde;
 
+        if (pvmw->flags & PVMW_PFN_WALK) {
+                page = NULL;
+                pfn = pvmw->pfn;
+        } else {
+                page = pvmw->page;
+                pfn = page_to_pfn(page);
+        }
+
         /* The only possible pmd mapping has been handled on last iteration */
         if (pvmw->pmd && !pvmw->pte)
                 return not_found(pvmw);
 
-        if (unlikely(PageHuge(page))) {
+        if (unlikely(page && PageHuge(page))) {
                 /* The only possible mapping was handled on last iteration */
                 if (pvmw->pte)
                         return not_found(pvmw);
@@ -187,9 +202,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
          * any PageKsm page: whose page->index misleads vma_address()
          * and vma_address_end() to disaster.
          */
-        end = PageTransCompound(page) ?
-                vma_address_end(page, pvmw->vma) :
-                pvmw->address + PAGE_SIZE;
+        if (page)
+                end = PageTransCompound(page) ?
+                        vma_address_end(page, pvmw->vma) :
+                        pvmw->address + PAGE_SIZE;
+        else
+                end = vma_pgoff_address_end(pvmw->index, pvmw->nr, pvmw->vma);
+
         if (pvmw->pte)
                 goto next_pte;
 restart:
@@ -218,13 +237,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
          */
         pmde = READ_ONCE(*pvmw->pmd);
 
-        if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+        if (pmd_leaf(pmde) || is_pmd_migration_entry(pmde)) {
                 pvmw->ptl = pmd_lock(mm, pvmw->pmd);
                 pmde = *pvmw->pmd;
-                if (likely(pmd_trans_huge(pmde))) {
+                if (likely(pmd_leaf(pmde))) {
                         if (pvmw->flags & PVMW_MIGRATION)
                                 return not_found(pvmw);
-                        if (pmd_page(pmde) != page)
+                        if (pmd_pfn(pmde) != pfn)
                                 return not_found(pvmw);
                         return true;
                 }
@@ -236,7 +255,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
                                 return not_found(pvmw);
                         entry = pmd_to_swp_entry(pmde);
                         if (!is_migration_entry(entry) ||
-                            pfn_swap_entry_to_page(entry) != page)
+                            page_to_pfn(pfn_swap_entry_to_page(entry)) != pfn)
                                 return not_found(pvmw);
                         return true;
                 }
@@ -249,7 +268,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
                  * cannot return prematurely, while zap_huge_pmd() has
                  * cleared *pmd but not decremented compound_mapcount().
                  */
-                if ((pvmw->flags & PVMW_SYNC) &&
+                if ((pvmw->flags & PVMW_SYNC) && page &&
                     PageTransCompound(page)) {
                         spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
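
For illustration only (a sketch under assumptions, not code from this
patch): with PVMW_PFN_WALK, a caller that has only a bare pfn range and no
struct page can reuse the walker as below; pfn_mkclean_range() in the next
patch is the real in-tree user.

#include <linux/rmap.h>

/*
 * Hypothetical example: report whether any of @nr page frames starting at
 * @pfn (file offset @pgoff) is currently mapped into @vma at @address.
 * Uses the PVMW_PFN_WALK mode added by this patch, so no struct page is
 * required.  Mirrors the page_mapped_in_vma() pattern: a successful walk
 * returns with the page-table lock held, so it must be dropped via
 * page_vma_mapped_walk_done().
 */
static bool pfn_range_mapped_in_vma(unsigned long pfn, unsigned int nr,
                                    pgoff_t pgoff, struct vm_area_struct *vma,
                                    unsigned long address)
{
        struct page_vma_mapped_walk pvmw = {
                .pfn = pfn,
                .nr = nr,
                .index = pgoff,
                .vma = vma,
                .address = address,
                .flags = PVMW_PFN_WALK,
        };

        if (page_vma_mapped_walk(&pvmw)) {
                page_vma_mapped_walk_done(&pvmw);
                return true;
        }
        return false;
}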
From patchwork Fri Jan 21 07:55:14 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12719430
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Muchun Song
Subject: [PATCH 4/5] dax: fix missing writeprotect the pte entry
Date: Fri, 21 Jan 2022 15:55:14 +0800
Message-Id: <20220121075515.79311-4-songmuchun@bytedance.com>
In-Reply-To: <20220121075515.79311-1-songmuchun@bytedance.com>
References: <20220121075515.79311-1-songmuchun@bytedance.com>

Currently dax_mapping_entry_mkclean() fails to clean and write-protect the
pte entry within a DAX PMD entry during an *sync operation.  This can
result in data loss in the following sequence:

1) Process A mmap-writes to a DAX PMD, dirtying the PMD radix tree entry
   and making the pmd entry dirty and writeable.
2) Process B mmaps the same file with @offset (e.g. 4K) and @length
   (e.g. 4K) and writes to it, dirtying the PMD radix tree entry (already
   done in 1)) and making the pte entry dirty and writeable.
3) fsync, flushing out the PMD data and cleaning the radix tree entry.  We
   currently fail to mark the pte entry as clean and write-protected
   because the vma of process B is not covered by dax_entry_mkclean().
4) Process B writes to the pte.  This does not cause any page fault since
   the pte entry is dirty and writeable.  The radix tree entry remains
   clean.
5) fsync, which fails to flush the dirty PMD data because the radix tree
   entry was clean.
6) crash - dirty data that should have been fsync'd as part of 5) could
   still have been in the processor cache, and is lost.

Reuse some of the infrastructure of page_mkclean_one() so that DAX can
handle this similar case, fixing the issue.

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
 fs/dax.c             | 78 +++++-----------------------------------------
 include/linux/rmap.h |  9 ++++++
 mm/internal.h        | 27 ++++++++++++------
 mm/rmap.c            | 69 ++++++++++++++++++++++++++++++++++------------
 4 files changed, 85 insertions(+), 98 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 2955ec65eb65..7d4e3e68b861 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -25,6 +25,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -801,86 +802,21 @@ static void *dax_insert_entry(struct xa_state *xas,
         return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-        unsigned long address;
-
-        address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-        VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-        return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-                unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+                              unsigned long npfn, pgoff_t pgoff_start)
 {
         struct vm_area_struct *vma;
-        pte_t pte, *ptep = NULL;
-        pmd_t *pmdp = NULL;
-        spinlock_t *ptl;
+        pgoff_t pgoff_end = pgoff_start + npfn - 1;
 
         i_mmap_lock_read(mapping);
-        vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-                struct mmu_notifier_range range;
-                unsigned long address;
-
+        vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff_start, pgoff_end) {
                 cond_resched();
 
                 if (!(vma->vm_flags & VM_SHARED))
                         continue;
 
-                address = pgoff_address(index, vma);
-
-                /*
-                 * follow_invalidate_pte() will use the range to call
-                 * mmu_notifier_invalidate_range_start() on our behalf before
-                 * taking any lock.
-                 */
-                if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-                                          &pmdp, &ptl))
-                        continue;
-
-                /*
-                 * No need to call mmu_notifier_invalidate_range() as we are
-                 * downgrading page table protection not changing it to point
-                 * to a new page.
-                 *
-                 * See Documentation/vm/mmu_notifier.rst
-                 */
-                if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-                        pmd_t pmd;
-
-                        if (pfn != pmd_pfn(*pmdp))
-                                goto unlock_pmd;
-                        if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-                                goto unlock_pmd;
-
-                        flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
-                        pmd = pmdp_invalidate(vma, address, pmdp);
-                        pmd = pmd_wrprotect(pmd);
-                        pmd = pmd_mkclean(pmd);
-                        set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-                        spin_unlock(ptl);
-                } else {
-                        if (pfn != pte_pfn(*ptep))
-                                goto unlock_pte;
-                        if (!pte_dirty(*ptep) && !pte_write(*ptep))
-                                goto unlock_pte;
-
-                        flush_cache_page(vma, address, pfn);
-                        pte = ptep_clear_flush(vma, address, ptep);
-                        pte = pte_wrprotect(pte);
-                        pte = pte_mkclean(pte);
-                        set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-                        pte_unmap_unlock(ptep, ptl);
-                }
-
-                mmu_notifier_invalidate_range_end(&range);
+                pfn_mkclean_range(pfn, npfn, pgoff_start, vma);
         }
         i_mmap_unlock_read(mapping);
 }
@@ -948,7 +884,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
         count = 1UL << dax_entry_order(entry);
         index = xas->xa_index & ~(count - 1);
 
-        dax_entry_mkclean(mapping, index, pfn);
+        dax_entry_mkclean(mapping, pfn, count, index);
         dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
         /*
          * After we have flushed the cache, we can clear the dirty tag. There
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 7628474732e7..db41b7392e02 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -236,6 +236,15 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
 
 /*
+ * Cleans the PTEs of shared mappings.
+ * (and since clean PTEs should also be readonly, write protects them too)
+ *
+ * returns the number of cleaned PTEs.
+ */
+int pfn_mkclean_range(unsigned long pfn, int npfn, pgoff_t pgoff_start,
+                      struct vm_area_struct *vma);
+
+/*
  * Used by swapoff to help locate where page is expected in vma.
  */
 unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
diff --git a/mm/internal.h b/mm/internal.h
index d6e3e8e1be2d..6acf3a45feaf 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -449,26 +449,22 @@ extern void clear_page_mlock(struct page *page);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+                  struct vm_area_struct *vma)
 {
-        pgoff_t pgoff;
         unsigned long address;
 
-        VM_BUG_ON_PAGE(PageKsm(page), page);    /* KSM page->index unusable */
-        pgoff = page_to_pgoff(page);
         if (pgoff >= vma->vm_pgoff) {
                 address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
                 /* Check for address beyond vma (or wrapped through 0?) */
                 if (address < vma->vm_start || address >= vma->vm_end)
                         address = -EFAULT;
-        } else if (PageHead(page) &&
-                   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+        } else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
                 /* Test above avoids possibility of wrap to 0 on 32-bit */
                 address = vma->vm_start;
         } else {
@@ -477,6 +473,19 @@ vma_address(struct page *page, struct vm_area_struct *vma)
         return address;
 }
 
+/*
+ * At what user virtual address is page expected in vma?
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+        VM_BUG_ON_PAGE(PageKsm(page), page);    /* KSM page->index unusable */
+        return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
 /*
  * Return the end of user virtual address at the specific offset within
  * a vma.
diff --git a/mm/rmap.c b/mm/rmap.c
index 65670cb805d6..ee37cff13143 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -928,34 +928,33 @@ int page_referenced(struct page *page,
         return pra.referenced;
 }
 
-static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
-                            unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-        struct page_vma_mapped_walk pvmw = {
-                .page = page,
-                .vma = vma,
-                .address = address,
-                .flags = PVMW_SYNC,
-        };
+        int cleaned = 0;
+        struct vm_area_struct *vma = pvmw->vma;
         struct mmu_notifier_range range;
-        int *cleaned = arg;
+        unsigned long end;
+
+        if (pvmw->flags & PVMW_PFN_WALK)
+                end = vma_pgoff_address_end(pvmw->index, pvmw->nr, vma);
+        else
+                end = vma_address_end(pvmw->page, vma);
 
         /*
          * We have to assume the worse case ie pmd for invalidation. Note that
          * the page can not be free from this function.
          */
-        mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
-                                0, vma, vma->vm_mm, address,
-                                vma_address_end(page, vma));
+        mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0, vma,
+                                vma->vm_mm, pvmw->address, end);
         mmu_notifier_invalidate_range_start(&range);
 
-        while (page_vma_mapped_walk(&pvmw)) {
+        while (page_vma_mapped_walk(pvmw)) {
                 int ret = 0;
+                unsigned long address = pvmw->address;
 
-                address = pvmw.address;
-                if (pvmw.pte) {
+                if (pvmw->pte) {
                         pte_t entry;
-                        pte_t *pte = pvmw.pte;
+                        pte_t *pte = pvmw->pte;
 
                         if (!pte_dirty(*pte) && !pte_write(*pte))
                                 continue;
@@ -968,7 +967,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
                         ret = 1;
                 } else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-                        pmd_t *pmd = pvmw.pmd;
+                        pmd_t *pmd = pvmw->pmd;
                         pmd_t entry;
 
                         if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -994,11 +993,45 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
                  * See Documentation/vm/mmu_notifier.rst
                  */
                 if (ret)
-                        (*cleaned)++;
+                        cleaned++;
         }
 
         mmu_notifier_invalidate_range_end(&range);
 
+        return cleaned;
+}
+
+int pfn_mkclean_range(unsigned long pfn, int npfn, pgoff_t pgoff_start,
+                      struct vm_area_struct *vma)
+{
+        unsigned long address = vma_pgoff_address(pgoff_start, npfn, vma);
+        struct page_vma_mapped_walk pvmw = {
+                .pfn = pfn,
+                .nr = npfn,
+                .index = pgoff_start,
+                .vma = vma,
+                .address = address,
+                .flags = PVMW_SYNC | PVMW_PFN_WALK,
+        };
+
+        VM_BUG_ON_VMA(address == -EFAULT, vma);
+
+        return page_vma_mkclean_one(&pvmw);
+}
+
+static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
+                             unsigned long address, void *arg)
+{
+        struct page_vma_mapped_walk pvmw = {
+                .page = page,
+                .vma = vma,
+                .address = address,
+                .flags = PVMW_SYNC,
+        };
+        int *cleaned = arg;
+
+        *cleaned += page_vma_mkclean_one(&pvmw);
+
         return true;
 }
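
For illustration only (not part of the series): the user-space sketch below
replays the access pattern described in the commit message within a single
process, using two MAP_SHARED mappings to play the roles of process A and
process B; the trigger condition is simply two distinct vmas covering the
same PMD-sized DAX entry.  The file path is hypothetical, and whether the
filesystem actually installs a PMD mapping depends on DAX support and
extent alignment, so this is a sketch of the scenario, not a guaranteed
reproducer.

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMD_SIZE_BYTES  (2UL << 20)     /* 2 MiB, the usual PMD size on x86-64 */
#define SUBPAGE         4096UL

int main(void)
{
        const char *path = "/mnt/dax/testfile";         /* hypothetical DAX mount */
        int fd = open(path, O_CREAT | O_RDWR, 0600);

        if (fd < 0 || ftruncate(fd, PMD_SIZE_BYTES))
                return EXIT_FAILURE;

        /* "Process A": PMD-sized shared mapping of the whole extent. */
        char *a = mmap(NULL, PMD_SIZE_BYTES, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        /* "Process B": 4 KiB shared mapping at file offset 4 KiB. */
        char *b = mmap(NULL, SUBPAGE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, SUBPAGE);

        if (a == MAP_FAILED || b == MAP_FAILED)
                return EXIT_FAILURE;

        memset(a, 0xaa, PMD_SIZE_BYTES);        /* 1) dirty the PMD entry */
        b[0] = 0x55;                            /* 2) dirty B's pte       */
        fsync(fd);                              /* 3) clean the DAX entry */

        b[1] = 0x55;    /* 4) no fault if B's pte was left dirty/writeable  */
        fsync(fd);      /* 5) may skip the flush -> 6) data lost on a crash */

        munmap(a, PMD_SIZE_BYTES);
        munmap(b, SUBPAGE);
        close(fd);
        return EXIT_SUCCESS;
}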
From patchwork Fri Jan 21 07:55:15 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12719431
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Muchun Song
Subject: [PATCH 5/5] mm: remove range parameter from follow_invalidate_pte()
Date: Fri, 21 Jan 2022 15:55:15 +0800
Message-Id: <20220121075515.79311-5-songmuchun@bytedance.com>
In-Reply-To: <20220121075515.79311-1-songmuchun@bytedance.com>
References: <20220121075515.79311-1-songmuchun@bytedance.com>

The only user (DAX) of the range parameter of follow_invalidate_pte() is
gone, so it is safe to remove the range parameter and make the function
static to simplify the code.

Signed-off-by: Muchun Song
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d211a06784d5..7895b17f6847 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1814,9 +1814,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
                 unsigned long end, unsigned long floor, unsigned long ceiling);
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-                          struct mmu_notifier_range *range, pte_t **ptepp,
-                          pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
                pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 514a81cdd1ae..e8ce066be5f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4869,9 +4869,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-                          struct mmu_notifier_range *range, pte_t **ptepp,
-                          pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+                                 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
         pgd_t *pgd;
         p4d_t *p4d;
@@ -4898,31 +4897,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
                 if (!pmdpp)
                         goto out;
 
-                if (range) {
-                        mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-                                                NULL, mm, address & PMD_MASK,
-                                                (address & PMD_MASK) + PMD_SIZE);
-                        mmu_notifier_invalidate_range_start(range);
-                }
                 *ptlp = pmd_lock(mm, pmd);
                 if (pmd_huge(*pmd)) {
                         *pmdpp = pmd;
                         return 0;
                 }
                 spin_unlock(*ptlp);
-                if (range)
-                        mmu_notifier_invalidate_range_end(range);
         }
 
         if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
                 goto out;
 
-        if (range) {
-                mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-                                        address & PAGE_MASK,
-                                        (address & PAGE_MASK) + PAGE_SIZE);
-                mmu_notifier_invalidate_range_start(range);
-        }
         ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
         if (!pte_present(*ptep))
                 goto unlock;
@@ -4930,8 +4915,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
         return 0;
 unlock:
         pte_unmap_unlock(ptep, *ptlp);
-        if (range)
-                mmu_notifier_invalidate_range_end(range);
 out:
         return -EINVAL;
 }
@@ -4960,7 +4943,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
                pte_t **ptepp, spinlock_t **ptlp)
 {
-        return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+        return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);
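
For illustration only (a hedged sketch, not from this patch): external
users keep calling follow_pte(), whose contract is unchanged: it returns 0
on success with the PTE mapped and its page-table lock held.  A
hypothetical caller looks like this:

#include <linux/mm.h>

/*
 * Hypothetical example: read the pfn that @addr maps to in @mm, if any.
 * On success follow_pte() leaves *ptep mapped and *ptl held, so the
 * caller must drop both with pte_unmap_unlock().
 */
static int addr_to_pfn(struct mm_struct *mm, unsigned long addr,
                       unsigned long *pfn)
{
        pte_t *ptep;
        spinlock_t *ptl;
        int ret;

        ret = follow_pte(mm, addr, &ptep, &ptl);
        if (ret)
                return ret;

        *pfn = pte_pfn(*ptep);
        pte_unmap_unlock(ptep, ptl);
        return 0;
}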