From patchwork Mon Feb 28 06:35:31 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762417
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v3 1/6] mm: rmap: fix cache flush on THP pages
Date: Mon, 28 Feb 2022 14:35:31 +0800
Message-Id: <20220228063536.24911-2-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so on a THP it covers the head page but none of the tail pages. Replace
it with flush_cache_range() to fix this issue. No problems have been
observed from this in practice, perhaps because few architectures have
virtually indexed caches.

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..723682ddb9e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;

-			flush_cache_page(vma, address, folio_pfn(folio));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);
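[The cache-flush semantics behind this fix, for readers less familiar with
the flush_cache_* API: flush_cache_page() flushes a single PAGE_SIZE
window, so cleaning a PMD-mapped THP that way would take one call per
subpage. A minimal sketch of the equivalence (not part of the patch;
HPAGE_PMD_NR and HPAGE_PMD_SIZE are the kernel's standard THP constants):

	/*
	 * Cleaning a PMD-mapped THP must flush every subpage, not just
	 * the head page. Done subpage-at-a-time, that would be:
	 */
	for (i = 0; i < HPAGE_PMD_NR; i++)
		flush_cache_page(vma, address + i * PAGE_SIZE, pfn + i);

	/* flush_cache_range() covers the same PMD-sized span in one call: */
	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);

Both calls are no-ops on architectures with physically indexed caches,
which is why the bug could go unnoticed.]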
From patchwork Mon Feb 28 06:35:32 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762418
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v3 2/6] dax: fix cache flush on PMD-mapped pages
Date: Mon, 28 Feb 2022 14:35:32 +0800
Message-Id: <20220228063536.24911-3-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so on a PMD-mapped huge page it covers the head page but none of the
tail pages. Replace it with flush_cache_range() to fix this issue.
Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 				goto unlock_pmd;

-			flush_cache_page(vma, address, pfn);
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			pmd = pmdp_invalidate(vma, address, pmdp);
 			pmd = pmd_wrprotect(pmd);
 			pmd = pmd_mkclean(pmd);
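[For reference, both fixes rely on the THP geometry constants from
include/linux/huge_mm.h; as a reminder, they are defined roughly as
follows in kernels of this era (quoted from memory, modulo formatting):

	#define HPAGE_PMD_SHIFT	PMD_SHIFT
	#define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
	#define HPAGE_PMD_ORDER	(HPAGE_PMD_SHIFT - PAGE_SHIFT)
	#define HPAGE_PMD_NR	(1 << HPAGE_PMD_ORDER)

so [address, address + HPAGE_PMD_SIZE) spans the entire PMD-mapped page.]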
From patchwork Mon Feb 28 06:35:33 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762419
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v3 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
Date: Mon, 28 Feb 2022 14:35:33 +0800
Message-Id: <20220228063536.24911-4-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

page_mkclean_one() is supposed to be used with a pfn that has an
associated struct page, but not all pfns (e.g. DAX) have one. Introduce
a new function, pfn_mkclean_range(), to clean the PTEs (including PMDs)
mapped with a range of pfns that have no struct page associated with
them. This helper will be used by the DAX code in a later patch to make
pfns clean.

Signed-off-by: Muchun Song
---
 include/linux/rmap.h |  3 +++
 mm/internal.h        | 26 +++++++++++++--------
 mm/rmap.c            | 65 +++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 74 insertions(+), 20 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b58ddb8b2220..a6ec0d3e40c1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -263,6 +263,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  */
 int folio_mkclean(struct folio *);

+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma);
+
 void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);

 /*
diff --git a/mm/internal.h b/mm/internal.h
index f45292dc4ef5..ff873944749f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -516,26 +516,22 @@ void mlock_page_drain(int cpu);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);

 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+		  struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
 	unsigned long address;

-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page);
 	if (pgoff >= vma->vm_pgoff) {
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		/* Check for address beyond vma (or wrapped through 0?) */
 		if (address < vma->vm_start || address >= vma->vm_end)
 			address = -EFAULT;
-	} else if (PageHead(page) &&
-		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
 		/* Test above avoids possibility of wrap to 0 on 32-bit */
 		address = vma->vm_start;
 	} else {
@@ -545,6 +541,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }

 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 723682ddb9e8..ad5cf0e45a73 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -929,12 +929,12 @@ int folio_referenced(struct folio *folio, int is_locked,
 	return pra.referenced;
 }

-static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
-			     unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int cleaned = 0;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct mmu_notifier_range range;
-	int *cleaned = arg;
+	unsigned long address = pvmw->address;

 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
@@ -942,16 +942,16 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0, vma,
 				vma->vm_mm, address,
-				vma_address_end(&pvmw));
+				vma_address_end(pvmw));
 	mmu_notifier_invalidate_range_start(&range);

-	while (page_vma_mapped_walk(&pvmw)) {
+	while (page_vma_mapped_walk(pvmw)) {
 		int ret = 0;

-		address = pvmw.address;
-		if (pvmw.pte) {
+		address = pvmw->address;
+		if (pvmw->pte) {
 			pte_t entry;
-			pte_t *pte = pvmw.pte;
+			pte_t *pte = pvmw->pte;

 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
@@ -964,7 +964,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			pmd_t *pmd = pvmw.pmd;
+			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;

 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -991,11 +991,22 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 		 * See Documentation/vm/mmu_notifier.rst
 		 */
 		if (ret)
-			(*cleaned)++;
+			cleaned++;
 	}

 	mmu_notifier_invalidate_range_end(&range);

+	return cleaned;
+}
+
+static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
+			     unsigned long address, void *arg)
+{
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int *cleaned = arg;
+
+	*cleaned += page_vma_mkclean_one(&pvmw);
+
 	return true;
 }

@@ -1033,6 +1044,38 @@ int folio_mkclean(struct folio *folio)
 EXPORT_SYMBOL_GPL(folio_mkclean);

 /**
+ * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
+ *                     [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff)
+ *                     within the @vma of shared mappings. And since clean PTEs
+ *                     should also be readonly, write protects them too.
+ * @pfn: start pfn.
+ * @nr_pages: number of physically contiguous pages starting with @pfn.
+ * @pgoff: page offset that the @pfn mapped with.
+ * @vma: vma that @pfn mapped within.
+ *
+ * Returns the number of cleaned PTEs (including PMDs).
+ */
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.pfn		= pfn,
+		.nr_pages	= nr_pages,
+		.pgoff		= pgoff,
+		.vma		= vma,
+		.flags		= PVMW_SYNC,
+	};
+
+	if (invalid_mkclean_vma(vma, NULL))
+		return 0;
+
+	pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
+	VM_BUG_ON_VMA(pvmw.address == -EFAULT, vma);
+
+	return page_vma_mkclean_one(&pvmw);
+}
+
+/**
  * page_move_anon_rmap - move a page to our anon_vma
  * @page: the page to move to our anon_vma
  * @vma: the vma the page belongs to
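[A usage sketch for the new helper, modeled on the call site that the DAX
patch later in this series introduces; the names mapping, pfn, npfn, and
start are illustrative. The caller walks every VMA that maps the file
range and cleans the matching page table entries:

	struct vm_area_struct *vma;
	pgoff_t end = start + npfn - 1;

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
		/* clean and write-protect PTEs/PMDs mapping [pfn, pfn + npfn) */
		pfn_mkclean_range(pfn, npfn, start, vma);
		cond_resched();
	}
	i_mmap_unlock_read(mapping);

pfn_mkclean_range() returns the number of entries cleaned and bails out
early (returning 0) for VMAs where mkclean does not apply, via
invalid_mkclean_vma(), so the caller need not filter on VM_SHARED.]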
From patchwork Mon Feb 28 06:35:34 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762420
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v3 4/6] mm: pvmw: add support for walking devmap pages
Date: Mon, 28 Feb 2022 14:35:34 +0800
Message-Id: <20220228063536.24911-5-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

page_vma_mapped_walk() cannot currently be used to check whether a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in a later patch.

Signed-off-by: Muchun Song
Reported-by: kernel test robot
---
 mm/page_vma_mapped.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 1187f9c1ec5b..3f337e4e7f5f 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -210,10 +210,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 */
 		pmde = READ_ONCE(*pvmw->pmd);

-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+		if (pmd_leaf(pmde) || is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
-			if (likely(pmd_trans_huge(pmde))) {
+			if (likely(pmd_leaf(pmde))) {
 				if (pvmw->flags & PVMW_MIGRATION)
 					return not_found(pvmw);
 				if (!check_pmd(pmd_pfn(pmde), pvmw))
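[Why pmd_leaf() makes the walk see devmap pages: on some architectures
pmd_trans_huge() deliberately excludes device-mapped huge pages, while
pmd_leaf() accepts any huge (leaf) PMD. On x86, for example, the two
predicates are roughly (paraphrased from the arch headers):

	/* true only for a THP; a devmap (DAX) huge page is excluded */
	static inline int pmd_trans_huge(pmd_t pmd)
	{
		return (pmd_val(pmd) & (_PAGE_PSE | _PAGE_DEVMAP)) == _PAGE_PSE;
	}

	/* true for any leaf (huge) PMD, devmap included */
	static inline int pmd_leaf(pmd_t pmd)
	{
		return pmd_flags(pmd) & _PAGE_PSE;
	}

so replacing pmd_trans_huge() with pmd_leaf() lets page_vma_mapped_walk()
match PMD-mapped devmap pages as well.]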
From patchwork Mon Feb 28 06:35:35 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762421
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v3 5/6] dax: fix missing writeprotect the pte entry
Date: Mon, 28 Feb 2022 14:35:35 +0800
Message-Id: <20220228063536.24911-6-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

Currently dax_mapping_entry_mkclean() fails to clean and write protect
the pte entry within a DAX PMD entry during an *sync operation. This
can result in data loss in the following sequence:

1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
   making the pmd entry dirty and writeable.
2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
   write to the same file, dirtying PMD radix tree entry (already
   done in 1)) and making the pte entry dirty and writeable.
3) fsync, flushing out PMD data and cleaning the radix tree entry. We
   currently fail to mark the pte entry as clean and write protected
   since the vma of process B is not covered in dax_entry_mkclean().
4) process B writes to the pte. These don't cause any page faults since
   the pte entry is dirty and writeable. The radix tree entry remains
   clean.
5) fsync, which fails to flush the dirty PMD data because the radix tree
   entry was clean.
6) crash - dirty data that should have been fsync'd as part of 5) could
   still have been in the processor cache, and is lost.
Use pfn_mkclean_range() to clean the pfns, which fixes this issue.

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a16769f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -24,6 +24,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>

 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@ static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }

-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-		unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+		unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;

 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -937,7 +868,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);

-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
From patchwork Mon Feb 28 06:35:36 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12762422
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v3 6/6] mm: remove range parameter from follow_invalidate_pte()
Date: Mon, 28 Feb 2022 14:35:36 +0800
Message-Id: <20220228063536.24911-7-songmuchun@bytedance.com>
In-Reply-To: <20220228063536.24911-1-songmuchun@bytedance.com>
References: <20220228063536.24911-1-songmuchun@bytedance.com>

The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make the
function static to simplify the code.

Signed-off-by: Muchun Song
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..be7ec4c37ebe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1871,9 +1871,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int copy_page_range(struct vm_area_struct *dst_vma,
 		    struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index cc6968dc8e4e..278ab6d62b54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4964,9 +4964,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */

-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+				 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4993,31 +4992,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 		if (!pmdpp)
 			goto out;

-		if (range) {
-			mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-						NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-			mmu_notifier_invalidate_range_start(range);
-		}
 		*ptlp = pmd_lock(mm, pmd);
 		if (pmd_huge(*pmd)) {
 			*pmdpp = pmd;
 			return 0;
 		}
 		spin_unlock(*ptlp);
-		if (range)
-			mmu_notifier_invalidate_range_end(range);
 	}

 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;

-	if (range) {
-		mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-					address & PAGE_MASK,
-					(address & PAGE_MASK) + PAGE_SIZE);
-		mmu_notifier_invalidate_range_start(range);
-	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
 		goto unlock;
@@ -5025,8 +5010,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (range)
-		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
@@ -5055,7 +5038,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp)
 {
-	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+	return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);
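[With the range handling gone, follow_pte() keeps its existing contract:
on success it returns 0 with *ptepp pointing at a mapped pte held under
its page table lock, which the caller must drop. A minimal, hypothetical
caller sketch:

	pte_t *ptep;
	spinlock_t *ptl;

	if (!follow_pte(mm, address, &ptep, &ptl)) {
		pte_t pte = *ptep;	/* examine the entry under its lock */
		/* ... e.g. pte_pfn(pte), pte_write(pte) ... */
		pte_unmap_unlock(ptep, ptl);
	}

Any mmu_notifier invalidation around pte changes is now entirely the
caller's responsibility, which the pfn_mkclean_range() /
page_vma_mkclean_one() path introduced earlier in this series already
handles for DAX.]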