From patchwork Wed Mar 2 08:27:13 2022
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com,
    shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com,
    xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com,
    zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v4 1/6] mm: rmap: fix cache flush on THP pages
Date: Wed, 2 Mar 2022 16:27:13 +0800
Message-Id: <20220302082718.32268-2-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>
References: <20220302082718.32268-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so it does not cover all the subpages of a THP, only the head page.
Replace it with flush_cache_range() to fix this issue. At least, no
problems have been observed from this so far, perhaps because few
architectures have virtually indexed caches.
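For illustration only (a sketch, not part of the patch): on an
architecture with virtually indexed caches, the new range flush is
roughly equivalent to flushing every subpage of the PMD-mapped THP,
whereas the old call performed only the first iteration of this loop:

	/* Sketch: per-subpage equivalent of the flush_cache_range() call;
	 * the old flush_cache_page() call covered only i == 0. */
	unsigned long pfn = folio_pfn(folio);
	unsigned long i;

	for (i = 0; i < HPAGE_PMD_NR; i++)
		flush_cache_page(vma, address + i * PAGE_SIZE, pfn + i);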
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
Reviewed-by: Dan Williams
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..723682ddb9e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, folio_pfn(folio));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);

From patchwork Wed Mar 2 08:27:14 2022
From: Muchun Song
Subject: [PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages
Date: Wed, 2 Mar 2022 16:27:14 +0800
Message-Id: <20220302082718.32268-3-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so it does not cover all the subpages of a PMD-mapped page, only the
head page. Replace it with flush_cache_range() to fix this issue.

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
Reviewed-by: Dan Williams
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 				goto unlock_pmd;
 
-			flush_cache_page(vma, address, pfn);
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			pmd = pmdp_invalidate(vma, address, pmdp);
 			pmd = pmd_wrprotect(pmd);
 			pmd = pmd_mkclean(pmd);

From patchwork Wed Mar 2 08:27:15 2022
From: Muchun Song
Subject: [PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
Date: Wed, 2 Mar 2022 16:27:15 +0800
Message-Id: <20220302082718.32268-4-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>

page_mkclean_one() is supposed to be used with a pfn that has an
associated struct page, but not all pfns (e.g. DAX) have one. Introduce
a new function, pfn_mkclean_range(), to clean the PTEs (including PMDs)
mapped to a range of pfns that have no struct page associated with
them. This helper will be used by the DAX device in the next patch to
make pfns clean.
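For illustration, a sketch of the intended calling convention
(mkclean_file_pfns() is a hypothetical name; the shape mirrors the DAX
caller added later in this series):

	/* Write-protect all shared mappings of npfn physically contiguous
	 * pfns of a file, starting at page offset 'start'. */
	static void mkclean_file_pfns(struct address_space *mapping,
				      unsigned long pfn, unsigned long npfn,
				      pgoff_t start)
	{
		struct vm_area_struct *vma;
		pgoff_t end = start + npfn - 1;

		i_mmap_lock_read(mapping);
		vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end)
			pfn_mkclean_range(pfn, npfn, start, vma);
		i_mmap_unlock_read(mapping);
	}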
Signed-off-by: Muchun Song
---
 include/linux/rmap.h |  3 +++
 mm/internal.h        | 26 +++++++++++++--------
 mm/rmap.c            | 65 +++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 74 insertions(+), 20 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b58ddb8b2220..a6ec0d3e40c1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -263,6 +263,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  */
 int folio_mkclean(struct folio *);
 
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma);
+
 void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index f45292dc4ef5..ff873944749f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -516,26 +516,22 @@ void mlock_page_drain(int cpu);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+		  struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page);
 	if (pgoff >= vma->vm_pgoff) {
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		/* Check for address beyond vma (or wrapped through 0?) */
 		if (address < vma->vm_start || address >= vma->vm_end)
 			address = -EFAULT;
-	} else if (PageHead(page) &&
-		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
 		/* Test above avoids possibility of wrap to 0 on 32-bit */
 		address = vma->vm_start;
 	} else {
@@ -545,6 +541,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 723682ddb9e8..ad5cf0e45a73 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -929,12 +929,12 @@ int folio_referenced(struct folio *folio, int is_locked,
 	return pra.referenced;
 }
 
-static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
-			     unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int cleaned = 0;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct mmu_notifier_range range;
-	int *cleaned = arg;
+	unsigned long address = pvmw->address;
 
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
@@ -942,16 +942,16 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0,
 				vma, vma->vm_mm, address,
-				vma_address_end(&pvmw));
+				vma_address_end(pvmw));
 	mmu_notifier_invalidate_range_start(&range);
 
-	while (page_vma_mapped_walk(&pvmw)) {
+	while (page_vma_mapped_walk(pvmw)) {
 		int ret = 0;
 
-		address = pvmw.address;
-		if (pvmw.pte) {
+		address = pvmw->address;
+		if (pvmw->pte) {
 			pte_t entry;
-			pte_t *pte = pvmw.pte;
+			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
@@ -964,7 +964,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			pmd_t *pmd = pvmw.pmd;
+			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;
 
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -991,11 +991,22 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 		 * See Documentation/vm/mmu_notifier.rst
 		 */
 		if (ret)
-			(*cleaned)++;
+			cleaned++;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
 
+	return cleaned;
+}
+
+static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
+			     unsigned long address, void *arg)
+{
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int *cleaned = arg;
+
+	*cleaned += page_vma_mkclean_one(&pvmw);
+
 	return true;
 }
 
@@ -1033,6 +1044,38 @@ int folio_mkclean(struct folio *folio)
 EXPORT_SYMBOL_GPL(folio_mkclean);
 
 /**
+ * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
+ *                     [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff)
+ *                     within the @vma of shared mappings. And since clean PTEs
+ *                     should also be readonly, write protects them too.
+ * @pfn: start pfn.
+ * @nr_pages: number of physically contiguous pages starting with @pfn.
+ * @pgoff: page offset that the @pfn mapped with.
+ * @vma: vma that @pfn is mapped within.
+ *
+ * Returns the number of cleaned PTEs (including PMDs).
+ */
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.pfn		= pfn,
+		.nr_pages	= nr_pages,
+		.pgoff		= pgoff,
+		.vma		= vma,
+		.flags		= PVMW_SYNC,
+	};
+
+	if (invalid_mkclean_vma(vma, NULL))
+		return 0;
+
+	pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
+	VM_BUG_ON_VMA(pvmw.address == -EFAULT, vma);
+
+	return page_vma_mkclean_one(&pvmw);
+}
+
+/**
  * page_move_anon_rmap - move a page to our anon_vma
  * @page: the page to move to our anon_vma
  * @vma: the vma the page belongs to

From patchwork Wed Mar 2 08:27:16 2022
From: Muchun Song
Subject: [PATCH v4 4/6] mm: pvmw: add support for walking devmap pages
Date: Wed, 2 Mar 2022 16:27:16 +0800
Message-Id: <20220302082718.32268-5-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>

page_vma_mapped_walk() cannot currently be used to check whether a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in the next patch.
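A hedged aside on why both predicates are needed: a PMD-sized DAX
(devmap) mapping is reported by pmd_devmap() but, at least on x86,
deliberately not by pmd_trans_huge() (the _PAGE_DEVMAP bit excludes
it), so the walk previously never took the PMD-locking path for such
mappings. The combined check behaves like this hypothetical helper
(not in the patch):

	/* Treat THP, devmap, and migration-entry PMDs as "huge" so the
	 * walk takes the pmd_lock() path for all of them. */
	static inline bool pvmw_pmd_may_be_huge(pmd_t pmde)
	{
		return pmd_trans_huge(pmde) || pmd_devmap(pmde) ||
		       is_pmd_migration_entry(pmde);
	}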
Signed-off-by: Muchun Song
---
 mm/page_vma_mapped.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 1187f9c1ec5b..f9ffa84adf4d 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -210,10 +210,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 */
 		pmde = READ_ONCE(*pvmw->pmd);
 
-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+		if (pmd_trans_huge(pmde) || pmd_devmap(pmde) ||
+		    is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
-			if (likely(pmd_trans_huge(pmde))) {
+			if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) {
 				if (pvmw->flags & PVMW_MIGRATION)
 					return not_found(pvmw);
 				if (!check_pmd(pmd_pfn(pmde), pvmw))

From patchwork Wed Mar 2 08:27:17 2022
From: Muchun Song
Subject: [PATCH v4 5/6] dax: fix missing writeprotect of the pte entry
Date: Wed, 2 Mar 2022 16:27:17 +0800
Message-Id: <20220302082718.32268-6-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>

Currently dax_mapping_entry_mkclean() fails to clean and write protect
the pte entry within a DAX PMD entry during an *sync operation. This
can result in data loss in the following sequence:

1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
   making the pmd entry dirty and writeable.
2) process B mmaps the same file at @offset (e.g. 4K) with @length
   (e.g. 4K) and writes to it, dirtying the PMD radix tree entry
   (already done in 1)) and making the pte entry dirty and writeable.
3) fsync, flushing out PMD data and cleaning the radix tree entry. We
   currently fail to mark the pte entry as clean and write protected
   since the vma of process B is not covered in dax_entry_mkclean().
4) process B writes to the pte. This does not cause any page fault
   since the pte entry is dirty and writeable. The radix tree entry
   remains clean.
5) fsync, which fails to flush the dirty PMD data because the radix
   tree entry was clean.
6) crash - dirty data that should have been fsync'd as part of 5)
   could still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns to fix this issue.
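For concreteness, a sketch of the losing sequence as a single process
with two mappings (the changelog uses two processes, but the pte/PMD
split is the same; the file path, PMD size, and lack of error handling
are illustrative assumptions):

	#include <fcntl.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define PMD_SZ	(2UL << 20)	/* illustrative x86-64 PMD size */

	int main(void)
	{
		int fd = open("/mnt/dax/file", O_RDWR);	/* hypothetical DAX file */

		/* 1) PMD-sized write: dirties the pmd and the radix tree entry */
		char *a = mmap(NULL, PMD_SZ, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		memset(a, 1, PMD_SZ);

		/* 2) second 4K mapping of the same file: dirties a pte */
		char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		b[0] = 2;

		/* 3) cleans the radix tree entry; before this fix, b's pte
		 *    stays dirty and writeable */
		fsync(fd);

		/* 4) no page fault, so the radix tree entry stays clean */
		b[0] = 3;

		/* 5) sees a clean entry and skips the flush; a crash after
		 *    this point (step 6) loses the write in 4) */
		fsync(fd);
		return 0;
	}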
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a16769f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -24,6 +24,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@ static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-		unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+		unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;
 
 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -937,7 +868,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
From patchwork Wed Mar 2 08:27:18 2022
From: Muchun Song
Subject: [PATCH v4 6/6] mm: remove range parameter from follow_invalidate_pte()
Date: Wed, 2 Mar 2022 16:27:18 +0800
Message-Id: <20220302082718.32268-7-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>

The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make the
function static to simplify the code.
Signed-off-by: Muchun Song
Reviewed-by: Dan Williams
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..be7ec4c37ebe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1871,9 +1871,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int copy_page_range(struct vm_area_struct *dst_vma,
 		    struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index cc6968dc8e4e..278ab6d62b54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4964,9 +4964,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+				 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4993,31 +4992,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 		if (!pmdpp)
 			goto out;
 
-		if (range) {
-			mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-						NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-			mmu_notifier_invalidate_range_start(range);
-		}
 		*ptlp = pmd_lock(mm, pmd);
 		if (pmd_huge(*pmd)) {
 			*pmdpp = pmd;
 			return 0;
 		}
 		spin_unlock(*ptlp);
-		if (range)
-			mmu_notifier_invalidate_range_end(range);
 	}
 
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
-	if (range) {
-		mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-					address & PAGE_MASK,
-					(address & PAGE_MASK) + PAGE_SIZE);
-		mmu_notifier_invalidate_range_start(range);
-	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
 		goto unlock;
@@ -5025,8 +5010,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (range)
-		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
@@ -5055,7 +5038,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp)
 {
-	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+	return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);