From patchwork Mon Feb 28 06:35:31 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762410
From: Muchun Song <songmuchun@bytedance.com>
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 1/6] mm: rmap: fix cache flush on THP pages
Date: Mon, 28 Feb 2022 14:35:31 +0800
Message-Id: <20220228063536.24911-2-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so for a THP it covers only the head page, not all of its subpages.
Replace it with flush_cache_range() to fix this issue. No problems have
been observed in practice so far, probably because few architectures
have virtually indexed caches.
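For the PMD-mapped case, the fix widens the flush to cover every subpage.
A minimal editorial sketch of the two equivalent ways to cover a whole
THP, assuming a PMD-aligned address and CONFIG_TRANSPARENT_HUGEPAGE (the
helper names are hypothetical, not part of the patch):

    /*
     * flush_cache_page() covers a single PAGE_SIZE range, so the buggy
     * code flushed only the head page. Either of these covers the THP.
     */
    static void flush_thp_per_page(struct vm_area_struct *vma,
                                   unsigned long address, unsigned long pfn)
    {
            unsigned long i;

            for (i = 0; i < HPAGE_PMD_NR; i++)
                    flush_cache_page(vma, address + i * PAGE_SIZE, pfn + i);
    }

    static void flush_thp_range(struct vm_area_struct *vma,
                                unsigned long address)
    {
            flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
    }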
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..723682ddb9e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, folio_pfn(folio));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);

From patchwork Mon Feb 28 06:35:32 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762411
From: Muchun Song <songmuchun@bytedance.com>
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 2/6] dax: fix cache flush on PMD-mapped pages
Date: Mon, 28 Feb 2022 14:35:32 +0800
Message-Id: <20220228063536.24911-3-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so for a PMD-mapped page it covers only the head page, not all of its
subpages. Replace it with flush_cache_range() to fix this issue.

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 				goto unlock_pmd;
 
-			flush_cache_page(vma, address, pfn);
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			pmd = pmdp_invalidate(vma, address, pmdp);
 			pmd = pmd_wrprotect(pmd);
 			pmd = pmd_mkclean(pmd);

From patchwork Mon Feb 28 06:35:33 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762412
From: Muchun Song <songmuchun@bytedance.com>
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
Date: Mon, 28 Feb 2022 14:35:33 +0800
Message-Id: <20220228063536.24911-4-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>
page_mkclean_one() is supposed to be used with a pfn that has an
associated struct page, but not all pfns (e.g. DAX) have one. Introduce
a new function, pfn_mkclean_range(), to clean the PTEs (including PMDs)
mapped with a range of pfns that have no struct page associated with
them. This helper will be used by the DAX device in the next patch to
make pfns clean.
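For illustration, a sketch of how a caller that has no struct pages can
drive the new helper over every shared mapping of a file range. The body
mirrors the DAX caller added in patch 5/6 of this series; the wrapper
name is hypothetical:

    /* Write-protect and clean all shared mappings of [start, start + npfn). */
    static void mkclean_all_mappings(struct address_space *mapping,
                                     unsigned long pfn, unsigned long npfn,
                                     pgoff_t start)
    {
            struct vm_area_struct *vma;
            pgoff_t end = start + npfn - 1;

            i_mmap_lock_read(mapping);
            vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
                    /* Non-shared vmas are rejected inside pfn_mkclean_range(). */
                    pfn_mkclean_range(pfn, npfn, start, vma);
                    cond_resched();
            }
            i_mmap_unlock_read(mapping);
    }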
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/rmap.h |  3 +++
 mm/internal.h        | 26 +++++++++++++--------
 mm/rmap.c            | 65 +++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 74 insertions(+), 20 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b58ddb8b2220..a6ec0d3e40c1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -263,6 +263,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  */
 int folio_mkclean(struct folio *);
 
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma);
+
 void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index f45292dc4ef5..ff873944749f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -516,26 +516,22 @@ void mlock_page_drain(int cpu);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+		  struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page);
 	if (pgoff >= vma->vm_pgoff) {
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		/* Check for address beyond vma (or wrapped through 0?) */
 		if (address < vma->vm_start || address >= vma->vm_end)
 			address = -EFAULT;
-	} else if (PageHead(page) &&
-		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
 		/* Test above avoids possibility of wrap to 0 on 32-bit */
 		address = vma->vm_start;
 	} else {
@@ -545,6 +541,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 723682ddb9e8..ad5cf0e45a73 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -929,12 +929,12 @@ int folio_referenced(struct folio *folio, int is_locked,
 	return pra.referenced;
 }
 
-static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
-			     unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int cleaned = 0;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct mmu_notifier_range range;
-	int *cleaned = arg;
+	unsigned long address = pvmw->address;
 
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
@@ -942,16 +942,16 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, vma, vma->vm_mm, address,
-				vma_address_end(&pvmw));
+				vma_address_end(pvmw));
 	mmu_notifier_invalidate_range_start(&range);
 
-	while (page_vma_mapped_walk(&pvmw)) {
+	while (page_vma_mapped_walk(pvmw)) {
 		int ret = 0;
 
-		address = pvmw.address;
-		if (pvmw.pte) {
+		address = pvmw->address;
+		if (pvmw->pte) {
 			pte_t entry;
-			pte_t *pte = pvmw.pte;
+			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
@@ -964,7 +964,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			pmd_t *pmd = pvmw.pmd;
+			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;
 
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -991,11 +991,22 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 		 * See Documentation/vm/mmu_notifier.rst
 		 */
 		if (ret)
-			(*cleaned)++;
+			cleaned++;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
 
+	return cleaned;
+}
+
+static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
+			     unsigned long address, void *arg)
+{
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int *cleaned = arg;
+
+	*cleaned += page_vma_mkclean_one(&pvmw);
+
 	return true;
 }
 
@@ -1033,6 +1044,38 @@ int folio_mkclean(struct folio *folio)
 EXPORT_SYMBOL_GPL(folio_mkclean);
 
 /**
+ * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
+ *                     [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff)
+ *                     within the @vma of shared mappings. And since clean PTEs
+ *                     should also be readonly, write protects them too.
+ * @pfn: start pfn.
+ * @nr_pages: number of physically contiguous pages starting with @pfn.
+ * @pgoff: page offset that the @pfn mapped with.
+ * @vma: vma that @pfn mapped within.
+ *
+ * Returns the number of cleaned PTEs (including PMDs).
+ */
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.pfn		= pfn,
+		.nr_pages	= nr_pages,
+		.pgoff		= pgoff,
+		.vma		= vma,
+		.flags		= PVMW_SYNC,
+	};
+
+	if (invalid_mkclean_vma(vma, NULL))
+		return 0;
+
+	pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
+	VM_BUG_ON_VMA(pvmw.address == -EFAULT, vma);
+
+	return page_vma_mkclean_one(&pvmw);
+}
+
+/**
  * page_move_anon_rmap - move a page to our anon_vma
  * @page:	the page to move to our anon_vma
  * @vma:	the vma the page belongs to

From patchwork Mon Feb 28 06:35:34 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762413
From: Muchun Song <songmuchun@bytedance.com>
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 4/6] mm: pvmw: add support for walking devmap pages
Date: Mon, 28 Feb 2022 14:35:34 +0800
Message-Id: <20220228063536.24911-5-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>
Devmap pages can not currently use page_vma_mapped_walk() to check
whether a huge devmap page is mapped into a vma, because
pmd_trans_huge() does not match huge devmap PMDs. Use pmd_leaf()
instead, which matches any huge PMD, so that DAX can use the walker in
the next patch.
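Background for the predicate swap, paraphrased from
arch/x86/include/asm/pgtable.h as it looked around this series (an
editorial illustration, not part of the patch): a devmap PMD carries
_PAGE_DEVMAP, which pmd_trans_huge() deliberately rejects, while
pmd_leaf() only asks whether the entry is a huge (PSE) mapping and so
also matches devmap PMDs.

    /* Roughly the x86 definitions; other architectures differ. */
    static inline int pmd_trans_huge(pmd_t pmd)
    {
            /* False for devmap PMDs: _PAGE_DEVMAP must not be set. */
            return (pmd_val(pmd) & (_PAGE_PSE | _PAGE_DEVMAP)) == _PAGE_PSE;
    }

    static inline int pmd_large(pmd_t pmd)      /* backs pmd_leaf() */
    {
            /* True for any huge PMD, devmap included. */
            return pmd_flags(pmd) & _PAGE_PSE;
    }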
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reported-by: kernel test robot <lkp@intel.com>
---
 mm/page_vma_mapped.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 1187f9c1ec5b..3f337e4e7f5f 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -210,10 +210,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 */
 		pmde = READ_ONCE(*pvmw->pmd);
 
-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+		if (pmd_leaf(pmde) || is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
-			if (likely(pmd_trans_huge(pmde))) {
+			if (likely(pmd_leaf(pmde))) {
 				if (pvmw->flags & PVMW_MIGRATION)
 					return not_found(pvmw);
 				if (!check_pmd(pmd_pfn(pmde), pvmw))

From patchwork Mon Feb 28 06:35:35 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762414
From: Muchun Song <songmuchun@bytedance.com>
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 5/6] dax: fix missing writeprotect the pte entry
Date: Mon, 28 Feb 2022 14:35:35 +0800
Message-Id: <20220228063536.24911-6-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

Currently dax_mapping_entry_mkclean() fails to clean and write-protect
the pte entry within a DAX PMD entry during an *sync operation. This
can result in data loss in the following sequence:

1) Process A mmap-writes to a DAX PMD, dirtying the PMD radix tree entry
   and making the pmd entry dirty and writeable.
2) Process B mmaps with @offset (e.g. 4K) and @length (e.g. 4K) and
   writes to the same file, dirtying the PMD radix tree entry (already
   done in 1)) and making the pte entry dirty and writeable.
3) fsync, flushing out the PMD data and cleaning the radix tree entry.
   We currently fail to mark the pte entry as clean and write-protected
   since the vma of process B is not covered in dax_entry_mkclean().
4) Process B writes to the pte. This causes no page fault since the pte
   entry is dirty and writeable. The radix tree entry remains clean.
5) fsync, which fails to flush the dirty PMD data because the radix tree
   entry was clean.
6) crash - dirty data that should have been fsync'd as part of 5) could
   still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns to fix this issue.
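A hedged user-space sketch of the losing sequence above (illustrative
only: error handling is omitted, the DAX mount path is hypothetical, and
the two mappings stand in for the two processes):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t pmd_size = 2UL << 20;            /* 2M PMD, x86 assumption */
            int fd = open("/mnt/dax/file", O_RDWR); /* hypothetical path */

            /* 1) "process A": PMD-sized mapping, the store dirties the pmd */
            char *a = mmap(NULL, pmd_size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            a[0] = 1;

            /* 2) "process B": 4K mapping of the same range, store dirties a pte */
            char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            b[0] = 2;

            fsync(fd);      /* 3) cleans the PMD but misses b's pte */
            b[1] = 3;       /* 4) no fault: the pte is still dirty + writeable */
            fsync(fd);      /* 5) radix tree entry is clean, nothing is flushed */
            return 0;       /* 6) a crash now can lose the write in 4) */
    }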
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a16769f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -24,6 +24,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@ static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-		unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+			      unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;
 
 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -937,7 +868,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
From patchwork Mon Feb 28 06:35:36 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762415
From: Muchun Song <songmuchun@bytedance.com>
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com,
    Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v3 6/6] mm: remove range parameter from follow_invalidate_pte()
Date: Mon, 28 Feb 2022 14:35:36 +0800
Message-Id: <20220228063536.24911-7-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make the
function static to simplify the code.
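After this change, follow_pte() is the only lookup interface left
exported here. A minimal sketch of its usual calling pattern, based on
the in-tree usage (the wrapper name is hypothetical):

    /* Look up the pfn backing a user address; the pte lock is held only
     * while the pte is examined. */
    static int pfn_of_address(struct mm_struct *mm, unsigned long address,
                              unsigned long *pfn)
    {
            pte_t *ptep;
            spinlock_t *ptl;

            if (follow_pte(mm, address, &ptep, &ptl))
                    return -EINVAL;
            *pfn = pte_pfn(*ptep);
            pte_unmap_unlock(ptep, ptl);
            return 0;
    }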
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..be7ec4c37ebe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1871,9 +1871,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index cc6968dc8e4e..278ab6d62b54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4964,9 +4964,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+				 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4993,31 +4992,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 		if (!pmdpp)
 			goto out;
 
-		if (range) {
-			mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-						NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-			mmu_notifier_invalidate_range_start(range);
-		}
 		*ptlp = pmd_lock(mm, pmd);
 		if (pmd_huge(*pmd)) {
 			*pmdpp = pmd;
 			return 0;
 		}
 		spin_unlock(*ptlp);
-		if (range)
-			mmu_notifier_invalidate_range_end(range);
 	}
 
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
-	if (range) {
-		mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-					address & PAGE_MASK,
-					(address & PAGE_MASK) + PAGE_SIZE);
-		mmu_notifier_invalidate_range_start(range);
-	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
 		goto unlock;
@@ -5025,8 +5010,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (range)
-		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
@@ -5055,7 +5038,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp)
 {
-	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+	return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);