From patchwork Wed Feb 2 14:33:02 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12733094
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v2 1/6] mm: rmap: fix cache flush on THP pages
Date: Wed, 2 Feb 2022 22:33:02 +0800
Message-Id: <20220202143307.96282-2-songmuchun@bytedance.com>
In-Reply-To: <20220202143307.96282-1-songmuchun@bytedance.com>
References: <20220202143307.96282-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE sized range from the cache,
so it does not cover the full pages of a THP, only its head page.
Replace it with flush_cache_range() to fix this issue. No problems have
been observed from this so far, probably because architectures with
virtually indexed caches are rare.

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
Reviewed-by: Jan Kara
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index b0fd9dc19eba..0ba12dc9fae3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -974,7 +974,8 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, page_to_pfn(page));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);
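[Editor's note, not part of the patch: a small fragment showing what the old
and new calls cover on an architecture with a virtually indexed cache. It
reuses the local variables of page_mkclean_one() for illustration only.]

	/* old code: only PAGE_SIZE bytes at 'address' (the head page) */
	flush_cache_page(vma, address, page_to_pfn(page));

	/* new code: the entire PMD-sized mapping of the THP */
	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);

	/* conceptually the same as flushing every subpage one by one */
	for (i = 0; i < HPAGE_PMD_NR; i++)
		flush_cache_page(vma, address + i * PAGE_SIZE,
				 page_to_pfn(page) + i);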
From patchwork Wed Feb 2 14:33:03 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12733095
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v2 2/6] dax: fix cache flush on PMD-mapped pages
Date: Wed, 2 Feb 2022 22:33:03 +0800
Message-Id: <20220202143307.96282-3-songmuchun@bytedance.com>
In-Reply-To: <20220202143307.96282-1-songmuchun@bytedance.com>
References: <20220202143307.96282-1-songmuchun@bytedance.com>
flush_cache_page() only removes a PAGE_SIZE sized range from the cache,
so it does not cover the full pages of a THP, only its head page.
Replace it with flush_cache_range() to fix this issue.

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
Reviewed-by: Jan Kara
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 88be1c02a151..e031e4b6c13c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -857,7 +857,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 				goto unlock_pmd;
 
-			flush_cache_page(vma, address, pfn);
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			pmd = pmdp_invalidate(vma, address, pmdp);
 			pmd = pmd_wrprotect(pmd);
 			pmd = pmd_mkclean(pmd);
From patchwork Wed Feb 2 14:33:04 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12733096
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma
Date: Wed, 2 Feb 2022 22:33:04 +0800
Message-Id: <20220202143307.96282-4-songmuchun@bytedance.com>
In-Reply-To: <20220202143307.96282-1-songmuchun@bytedance.com>
References: <20220202143307.96282-1-songmuchun@bytedance.com>

page_vma_mapped_walk() is supposed to check if a page is mapped into a vma.
However, not all page frames (e.g. PFN_DEV) have an associated struct page.
Anyone who wants to check whether a pfn without a struct page is mapped into
a vma would have to duplicate much of this function. So add support for
checking if a pfn is mapped into a vma. DAX will use this new feature in a
later patch in this series.
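[Editor's note: a minimal sketch of how a caller could drive the new
pfn-based walk. This is a hypothetical caller written for illustration; the
real user, pfn_mkclean_range(), is added later in the series.]

static void walk_pfn_range(struct vm_area_struct *vma, unsigned long pfn,
			   unsigned int nr, pgoff_t pgoff, unsigned long addr)
{
	struct page_vma_mapped_walk pvmw = {
		.pfn	 = pfn,		/* first pfn of the range */
		.nr	 = nr,		/* number of contiguous pfns */
		.index	 = pgoff,	/* file offset the range is mapped at */
		.vma	 = vma,
		.address = addr,
		.flags	 = PVMW_PFN_WALK,
	};

	while (page_vma_mapped_walk(&pvmw)) {
		if (pvmw.pte) {
			/* pvmw.pte points to a PTE mapping one of the pfns */
		} else {
			/* pvmw.pmd points to a PMD-level mapping */
		}
	}
}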
Signed-off-by: Muchun Song
---
 include/linux/rmap.h    | 14 ++++++++--
 include/linux/swapops.h | 13 +++++++---
 mm/internal.h           | 28 +++++++++++++-------
 mm/page_vma_mapped.c    | 68 +++++++++++++++++++++++++++++++------------------
 4 files changed, 83 insertions(+), 40 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 221c3c6438a7..78373935ad49 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -204,9 +204,18 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 #define PVMW_SYNC		(1 << 0)
 /* Look for migarion entries rather than present PTEs */
 #define PVMW_MIGRATION		(1 << 1)
+/* Walk the page table by checking the pfn instead of a struct page */
+#define PVMW_PFN_WALK		(1 << 2)
 
 struct page_vma_mapped_walk {
-	struct page *page;
+	union {
+		struct page *page;
+		struct {
+			unsigned long pfn;
+			unsigned int nr;
+			pgoff_t index;
+		};
+	};
 	struct vm_area_struct *vma;
 	unsigned long address;
 	pmd_t *pmd;
@@ -218,7 +227,8 @@ struct page_vma_mapped_walk {
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
 	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
-	if (pvmw->pte && !PageHuge(pvmw->page))
+	if (pvmw->pte && (pvmw->flags & PVMW_PFN_WALK ||
+			  !PageHuge(pvmw->page)))
 		pte_unmap(pvmw->pte);
 	if (pvmw->ptl)
 		spin_unlock(pvmw->ptl);
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index d356ab4047f7..d28bf65fd6a5 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -247,17 +247,22 @@ static inline int is_writable_migration_entry(swp_entry_t entry)
 
 #endif
 
-static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+static inline unsigned long pfn_swap_entry_to_pfn(swp_entry_t entry)
 {
-	struct page *p = pfn_to_page(swp_offset(entry));
+	unsigned long pfn = swp_offset(entry);
 
 	/*
	 * Any use of migration entries may only occur while the
	 * corresponding page is locked
	 */
-	BUG_ON(is_migration_entry(entry) && !PageLocked(p));
+	BUG_ON(is_migration_entry(entry) && !PageLocked(pfn_to_page(pfn)));
+
+	return pfn;
+}
 
-	return p;
+static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+{
+	return pfn_to_page(pfn_swap_entry_to_pfn(entry));
 }
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index deb9bda18e59..5458cd08df33 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -478,25 +478,35 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
- * Then at what user virtual address will none of the page be found in vma?
- * Assumes that vma_address() already returned a good starting address.
- * If page is a compound head, the entire compound page is considered.
+ * Return the end of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address_end(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address_end(pgoff_t pgoff, unsigned long nr_pages,
+		      struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
-	unsigned long address;
+	unsigned long address = vma->vm_start;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page) + compound_nr(page);
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+	address += (pgoff + nr_pages - vma->vm_pgoff) << PAGE_SHIFT;
 	/* Check for address beyond vma (or wrapped through 0?) */
 	if (address < vma->vm_start || address > vma->vm_end)
 		address = vma->vm_end;
 	return address;
 }
 
+/*
+ * Return the end of user virtual address of a page within a vma. Assumes that
+ * vma_address() already returned a good starting address. If page is a compound
+ * head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address_end(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address_end(page_to_pgoff(page), compound_nr(page),
+				     vma);
+}
+
 static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 						    struct file *fpin)
 {
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index f7b331081791..bd172268084f 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -53,10 +53,17 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
 	return true;
 }
 
-static inline bool pfn_is_match(struct page *page, unsigned long pfn)
+static inline bool pfn_is_match(struct page_vma_mapped_walk *pvmw,
+				unsigned long pfn)
 {
-	unsigned long page_pfn = page_to_pfn(page);
+	struct page *page;
+	unsigned long page_pfn;
 
+	if (pvmw->flags & PVMW_PFN_WALK)
+		return pfn >= pvmw->pfn && pfn - pvmw->pfn < pvmw->nr;
+
+	page = pvmw->page;
+	page_pfn = page_to_pfn(page);
 	/* normal page and hugetlbfs page */
 	if (!PageTransCompound(page) || PageHuge(page))
 		return page_pfn == pfn;
@@ -116,7 +123,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		pfn = pte_pfn(*pvmw->pte);
 	}
 
-	return pfn_is_match(pvmw->page, pfn);
+	return pfn_is_match(pvmw, pfn);
 }
 
 static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
@@ -127,24 +134,24 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
 }
 
 /**
- * page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at
- * @pvmw->address
- * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
- * must be set. pmd, pte and ptl must be NULL.
+ * page_vma_mapped_walk - check if @pvmw->page or @pvmw->pfn is mapped in
+ * @pvmw->vma at @pvmw->address
+ * @pvmw: pointer to struct page_vma_mapped_walk. page (or pfn and nr and
+ * index), vma, address and flags must be set. pmd, pte and ptl must be NULL.
  *
- * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
- * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
- * adjusted if needed (for PTE-mapped THPs).
+ * Returns true if the page or pfn is mapped in the vma. @pvmw->pmd and
+ * @pvmw->pte point to relevant page table entries. @pvmw->ptl is locked.
+ * @pvmw->address is adjusted if needed (for PTE-mapped THPs).
  *
  * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
- * (usually THP). For PTE-mapped THP, you should run page_vma_mapped_walk() in
- * a loop to find all PTEs that map the THP.
+ * (usually THP or Huge DEVMAP). For PMD-mapped page, you should run
+ * page_vma_mapped_walk() in a loop to find all PTEs that map the huge page.
  *
  * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
  * regardless of which page table level the page is mapped at. @pvmw->pmd is
 * NULL.
 *
- * Returns false if there are no more page table entries for the page in
+ * Returns false if there are no more page table entries for the page or pfn in
 * the vma. @pvmw->ptl is unlocked and @pvmw->pte is unmapped.
 *
 * If you need to stop the walk before page_vma_mapped_walk() returned false,
@@ -153,8 +160,9 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 {
 	struct mm_struct *mm = pvmw->vma->vm_mm;
-	struct page *page = pvmw->page;
+	struct page *page = NULL;
 	unsigned long end;
+	unsigned long pfn;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -164,7 +172,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (pvmw->pmd && !pvmw->pte)
 		return not_found(pvmw);
 
-	if (unlikely(PageHuge(page))) {
+	if (!(pvmw->flags & PVMW_PFN_WALK))
+		page = pvmw->page;
+	pfn = page ? page_to_pfn(page) : pvmw->pfn;
+
+	if (unlikely(page && PageHuge(page))) {
 		/* The only possible mapping was handled on last iteration */
 		if (pvmw->pte)
 			return not_found(pvmw);
@@ -187,9 +199,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
	 * any PageKsm page: whose page->index misleads vma_address()
	 * and vma_address_end() to disaster.
	 */
-	end = PageTransCompound(page) ?
-		vma_address_end(page, pvmw->vma) :
-		pvmw->address + PAGE_SIZE;
+	if (page)
+		end = PageTransCompound(page) ?
+			vma_address_end(page, pvmw->vma) :
+			pvmw->address + PAGE_SIZE;
+	else
+		end = vma_pgoff_address_end(pvmw->index, pvmw->nr, pvmw->vma);
+
 	if (pvmw->pte)
 		goto next_pte;
 restart:
@@ -217,14 +233,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
		 * subsequent update.
		 */
 		pmde = READ_ONCE(*pvmw->pmd);
-
-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+		if (pmd_leaf(pmde) || is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
-			if (likely(pmd_trans_huge(pmde))) {
+			if (likely(pmd_leaf(pmde))) {
 				if (pvmw->flags & PVMW_MIGRATION)
 					return not_found(pvmw);
-				if (pmd_page(pmde) != page)
+				if (pmd_pfn(pmde) != pfn)
 					return not_found(pvmw);
 				return true;
 			}
@@ -236,20 +252,22 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				return not_found(pvmw);
 			entry = pmd_to_swp_entry(pmde);
 			if (!is_migration_entry(entry) ||
-			    pfn_swap_entry_to_page(entry) != page)
+			    pfn_swap_entry_to_pfn(entry) != pfn)
 				return not_found(pvmw);
 			return true;
 		}
 		/* THP pmd was split under us: handle on pte level */
 		spin_unlock(pvmw->ptl);
 		pvmw->ptl = NULL;
-	} else if (!pmd_present(pmde)) {
+	} else
+#endif
+	if (!pmd_present(pmde)) {
 		/*
		 * If PVMW_SYNC, take and drop THP pmd lock so that we
		 * cannot return prematurely, while zap_huge_pmd() has
		 * cleared *pmd but not decremented compound_mapcount().
		 */
-		if ((pvmw->flags & PVMW_SYNC) &&
+		if ((pvmw->flags & PVMW_SYNC) && page &&
 		    PageTransCompound(page)) {
 			spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
From patchwork Wed Feb 2 14:33:05 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12733097
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v2 4/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
Date: Wed, 2 Feb 2022 22:33:05 +0800
Message-Id: <20220202143307.96282-5-songmuchun@bytedance.com>
In-Reply-To: <20220202143307.96282-1-songmuchun@bytedance.com>
References: <20220202143307.96282-1-songmuchun@bytedance.com>

page_mkclean_one() is supposed to be used with a pfn that has an associated
struct page, but not all pfns (e.g. DAX) have one. Introduce a new function,
pfn_mkclean_range(), to clean the PTEs (including PMDs) mapped with a range
of pfns that have no struct page associated with them. This helper will be
used by DAX in the next patch to make pfns clean.

Signed-off-by: Muchun Song
---
 include/linux/rmap.h |  3 ++
 mm/internal.h        | 26 ++++++++++------
 mm/rmap.c            | 84 +++++++++++++++++++++++++++++++++++++++++-----------
 3 files changed, 86 insertions(+), 27 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 78373935ad49..668a1e81b442 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -241,6 +241,9 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
  */
 unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
 
+int pfn_mkclean_range(unsigned long pfn, int npfn, pgoff_t pgoff,
+		      struct vm_area_struct *vma);
+
 /*
  * Cleans the PTEs of shared mappings.
  * (and since clean PTEs should also be readonly, write protects them too)
diff --git a/mm/internal.h b/mm/internal.h
index 5458cd08df33..dc71256e568f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -449,26 +449,22 @@ extern void clear_page_mlock(struct page *page);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+		  struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page);
 	if (pgoff >= vma->vm_pgoff) {
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		/* Check for address beyond vma (or wrapped through 0?) */
 		if (address < vma->vm_start || address >= vma->vm_end)
 			address = -EFAULT;
-	} else if (PageHead(page) &&
-		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
 		/* Test above avoids possibility of wrap to 0 on 32-bit */
 		address = vma->vm_start;
 	} else {
@@ -478,6 +474,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Return the end of user virtual address at the specific offset within
  * a vma.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 0ba12dc9fae3..8f1860dc22bc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -928,34 +928,33 @@ int page_referenced(struct page *page,
 	return pra.referenced;
 }
 
-static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
-			    unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-	struct page_vma_mapped_walk pvmw = {
-		.page = page,
-		.vma = vma,
-		.address = address,
-		.flags = PVMW_SYNC,
-	};
+	int cleaned = 0;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct mmu_notifier_range range;
-	int *cleaned = arg;
+	unsigned long end;
+
+	if (pvmw->flags & PVMW_PFN_WALK)
+		end = vma_pgoff_address_end(pvmw->index, pvmw->nr, vma);
+	else
+		end = vma_address_end(pvmw->page, vma);
 
 	/*
	 * We have to assume the worse case ie pmd for invalidation. Note that
	 * the page can not be free from this function.
	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
-				0, vma, vma->vm_mm, address,
-				vma_address_end(page, vma));
+	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0, vma,
+				vma->vm_mm, pvmw->address, end);
 	mmu_notifier_invalidate_range_start(&range);
 
-	while (page_vma_mapped_walk(&pvmw)) {
+	while (page_vma_mapped_walk(pvmw)) {
 		int ret = 0;
+		unsigned long address = pvmw->address;
 
-		address = pvmw.address;
-		if (pvmw.pte) {
+		if (pvmw->pte) {
 			pte_t entry;
-			pte_t *pte = pvmw.pte;
+			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
@@ -968,7 +967,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			pmd_t *pmd = pvmw.pmd;
+			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;
 
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -995,11 +994,27 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
		 * See Documentation/vm/mmu_notifier.rst
		 */
 		if (ret)
-			(*cleaned)++;
+			cleaned++;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
 
+	return cleaned;
+}
+
+static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
+			     unsigned long address, void *arg)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.page = page,
+		.vma = vma,
+		.address = address,
+		.flags = PVMW_SYNC,
+	};
+	int *cleaned = arg;
+
+	*cleaned += page_vma_mkclean_one(&pvmw);
+
 	return true;
 }
@@ -1037,6 +1052,39 @@ int folio_mkclean(struct folio *folio)
 EXPORT_SYMBOL_GPL(folio_mkclean);
 
 /**
+ * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
+ *                     [@pfn, @pfn + @npfn) at the specific offset (@pgoff)
+ *                     within the @vma of shared mappings. And since clean PTEs
+ *                     should also be readonly, write protects them too.
+ * @pfn: start pfn.
+ * @npfn: number of physically contiguous pfns starting with @pfn.
+ * @pgoff: page offset that the @pfn mapped with.
+ * @vma: vma that @pfn mapped within.
+ *
+ * Returns the number of cleaned PTEs (including PMDs).
+ */
+int pfn_mkclean_range(unsigned long pfn, int npfn, pgoff_t pgoff,
+		      struct vm_area_struct *vma)
+{
+	unsigned long address = vma_pgoff_address(pgoff, npfn, vma);
+	struct page_vma_mapped_walk pvmw = {
+		.pfn = pfn,
+		.nr = npfn,
+		.index = pgoff,
+		.vma = vma,
+		.address = address,
+		.flags = PVMW_SYNC | PVMW_PFN_WALK,
+	};
+
+	if (invalid_mkclean_vma(vma, NULL))
+		return 0;
+
+	VM_BUG_ON_VMA(address == -EFAULT, vma);
+
+	return page_vma_mkclean_one(&pvmw);
+}
+
+/**
  * page_move_anon_rmap - move a page to our anon_vma
  * @page: the page to move to our anon_vma
  * @vma: the vma the page belongs to
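[Editor's note: a minimal sketch of how a filesystem could call the new
helper for a pfn range backed by device memory without struct pages. This is
a hypothetical caller for illustration; the real user, dax_entry_mkclean(),
is added in the next patch and looks essentially like this.]

static void mkclean_file_range(struct address_space *mapping,
			       unsigned long pfn, int npfn, pgoff_t pgoff)
{
	struct vm_area_struct *vma;
	pgoff_t end = pgoff + npfn - 1;

	/* write-protect and clean every mapping of the range in every vma */
	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, end)
		pfn_mkclean_range(pfn, npfn, pgoff, vma);
	i_mmap_unlock_read(mapping);
}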
From patchwork Wed Feb 2 14:33:06 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12733098
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v2 5/6] dax: fix missing writeprotect the pte entry
Date: Wed, 2 Feb 2022 22:33:06 +0800
Message-Id: <20220202143307.96282-6-songmuchun@bytedance.com>
In-Reply-To: <20220202143307.96282-1-songmuchun@bytedance.com>
References: <20220202143307.96282-1-songmuchun@bytedance.com>

Currently dax_mapping_entry_mkclean() fails to clean and write protect the
pte entry within a DAX PMD entry during an *sync operation. This can result
in data loss in the following sequence:

1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
   making the pmd entry dirty and writeable.
2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K) write to
   the same file, dirtying PMD radix tree entry (already done in 1)) and
   making the pte entry dirty and writeable.
3) fsync, flushing out PMD data and cleaning the radix tree entry. We
   currently fail to mark the pte entry as clean and write protected since
   the vma of process B is not covered in dax_entry_mkclean().
4) process B writes to the pte. These don't cause any page faults since the
   pte entry is dirty and writeable. The radix tree entry remains clean.
5) fsync, which fails to flush the dirty PMD data because the radix tree
   entry was clean.
6) crash - dirty data that should have been fsync'd as part of 5) could
   still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns to fix this issue.
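[Editor's note: a rough user-space sketch of the sequence above, for
illustration only. It assumes a file on a DAX-mounted filesystem at the
hypothetical path /mnt/dax/f, large and aligned enough that the first
mapping is PMD-mapped; the two mappings in one process stand in for
process A and process B.]

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/dax/f", O_RDWR);

	/* 1) "process A": PMD-mapped write, PMD entry dirty and writeable */
	char *a = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	memset(a, 1, 2UL << 20);

	/* 2) "process B": 4K mapping at offset 4K of the same file */
	char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
		       fd, 4096);
	b[0] = 2;	/* pte under b is now dirty and writeable */

	fsync(fd);	/* 3) flushes PMD data, cleans the radix tree entry */
	b[1] = 3;	/* 4) no fault: b's pte was never write-protected */
	fsync(fd);	/* 5) nothing flushed, the entry is already clean */
	/* 6) a crash here can lose b's second write from the CPU cache */
	return 0;
}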
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index e031e4b6c13c..b64ac02d55d7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include <linux/rmap.h>
 #include
 
 #define CREATE_TRACE_POINTS
@@ -801,87 +802,17 @@ static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-		unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+			      unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;
 
 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -949,7 +880,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
	 * After we have flushed the cache, we can clear the dirty tag. There
From patchwork Wed Feb 2 14:33:07 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12733099
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v2 6/6] mm: remove range parameter from follow_invalidate_pte()
Date: Wed, 2 Feb 2022 22:33:07 +0800
Message-Id: <20220202143307.96282-7-songmuchun@bytedance.com>
In-Reply-To: <20220202143307.96282-1-songmuchun@bytedance.com>
References: <20220202143307.96282-1-songmuchun@bytedance.com>

The only user (DAX) of the range parameter of follow_invalidate_pte() is
gone, so it is safe to remove the range parameter and make the function
static to simplify the code.

Signed-off-by: Muchun Song
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d211a06784d5..7895b17f6847 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1814,9 +1814,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 514a81cdd1ae..e8ce066be5f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4869,9 +4869,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+				 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4898,31 +4897,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 		if (!pmdpp)
 			goto out;
 
-		if (range) {
-			mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-						NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-			mmu_notifier_invalidate_range_start(range);
-		}
 		*ptlp = pmd_lock(mm, pmd);
 		if (pmd_huge(*pmd)) {
 			*pmdpp = pmd;
 			return 0;
 		}
 		spin_unlock(*ptlp);
-		if (range)
-			mmu_notifier_invalidate_range_end(range);
 	}
 
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
-	if (range) {
-		mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-					address & PAGE_MASK,
-					(address & PAGE_MASK) + PAGE_SIZE);
-		mmu_notifier_invalidate_range_start(range);
-	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
 		goto unlock;
@@ -4930,8 +4915,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (range)
-		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
@@ -4960,7 +4943,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp)
 {
-	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+	return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);