From patchwork Fri Mar 18 07:45:24 2022
From: Muchun Song
Subject: [PATCH v5 1/6] mm: rmap: fix cache flush on THP pages
Date: Fri, 18 Mar 2022 15:45:24 +0800
Message-Id: <20220318074529.5261-2-songmuchun@bytedance.com>
In-Reply-To: <20220318074529.5261-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so on a THP it covers only the head page, not the full huge page.
Replace it with flush_cache_range() to fix this issue. So far no
problems have been observed from this, probably because architectures
with virtually indexed caches are rare.

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
Reviewed-by: Dan Williams
Reviewed-by: Christoph Hellwig
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..723682ddb9e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 		if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 			continue;
 
-		flush_cache_page(vma, address, folio_pfn(folio));
+		flush_cache_range(vma, address,
+				  address + HPAGE_PMD_SIZE);
 		entry = pmdp_invalidate(vma, address, pmd);
 		entry = pmd_wrprotect(entry);
 		entry = pmd_mkclean(entry);
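[As an illustration of the flushing pattern this patch switches to -- a
minimal sketch, not part of the series; the vma/address/pfn values are
assumed to come from a page table walk such as the one in
page_mkclean_one():]

	/*
	 * Sketch: flushing a PMD-mapped THP on a virtually indexed cache.
	 * flush_cache_page() covers a single PAGE_SIZE range, so it only
	 * flushes the head page; flush_cache_range() covers the whole
	 * PMD-sized mapping.
	 */
	static void flush_thp_sketch(struct vm_area_struct *vma,
				     unsigned long address, unsigned long pfn)
	{
		/* Insufficient for a THP: flushes one base page only. */
		flush_cache_page(vma, address, pfn);

		/* What the patch does instead: the full huge-page range. */
		flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
	}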
From patchwork Fri Mar 18 07:45:25 2022
From: Muchun Song
Subject: [PATCH v5 2/6] dax: fix cache flush on PMD-mapped pages
Date: Fri, 18 Mar 2022 15:45:25 +0800
Message-Id: <20220318074529.5261-3-songmuchun@bytedance.com>
In-Reply-To: <20220318074529.5261-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so on a PMD-mapped page it covers only the head page, not the full huge
page. Replace it with flush_cache_range() to fix this issue. This is
mostly a documentation fix with respect to properly documenting the
expected usage of cache flushing before modifying the pmd.
In practice, however, this is not a problem, because DAX is not
available on architectures with virtually indexed caches, per commit
d92576f1167c ("dax: does not work correctly with virtual aliasing
caches").

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
Reviewed-by: Dan Williams
Reviewed-by: Christoph Hellwig
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 		if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 			goto unlock_pmd;
 
-		flush_cache_page(vma, address, pfn);
+		flush_cache_range(vma, address,
+				  address + HPAGE_PMD_SIZE);
 		pmd = pmdp_invalidate(vma, address, pmdp);
 		pmd = pmd_wrprotect(pmd);
 		pmd = pmd_mkclean(pmd);
From patchwork Fri Mar 18 07:45:26 2022
From: Muchun Song
Subject: [PATCH v5 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
Date: Fri, 18 Mar 2022 15:45:26 +0800
Message-Id: <20220318074529.5261-4-songmuchun@bytedance.com>
In-Reply-To: <20220318074529.5261-1-songmuchun@bytedance.com>

page_mkclean_one() is supposed to be used with a pfn that has an
associated struct page, but not all pfns (e.g. DAX) have one.
Introduce a new function, pfn_mkclean_range(), to clean the PTEs
(including PMDs) mapped with a range of pfns that have no struct page
associated with them. This helper will be used by DAX in a later patch
to make pfns clean.

Signed-off-by: Muchun Song
---
 include/linux/rmap.h |  3 +++
 mm/internal.h        | 26 +++++++++++++--------
 mm/rmap.c            | 65 +++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 74 insertions(+), 20 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b58ddb8b2220..a6ec0d3e40c1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -263,6 +263,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  */
 int folio_mkclean(struct folio *);
 
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma);
+
 void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index f45292dc4ef5..ff873944749f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -516,26 +516,22 @@ void mlock_page_drain(int cpu);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+		  struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page);
 	if (pgoff >= vma->vm_pgoff) {
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		/* Check for address beyond vma (or wrapped through 0?) */
 		if (address < vma->vm_start || address >= vma->vm_end)
 			address = -EFAULT;
-	} else if (PageHead(page) &&
-		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
 		/* Test above avoids possibility of wrap to 0 on 32-bit */
 		address = vma->vm_start;
 	} else {
@@ -545,6 +541,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 723682ddb9e8..ad5cf0e45a73 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -929,12 +929,12 @@ int folio_referenced(struct folio *folio, int is_locked,
 	return pra.referenced;
 }
 
-static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
-			     unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int cleaned = 0;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct mmu_notifier_range range;
-	int *cleaned = arg;
+	unsigned long address = pvmw->address;
 
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
@@ -942,16 +942,16 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0, vma,
 				vma->vm_mm, address,
-				vma_address_end(&pvmw));
+				vma_address_end(pvmw));
 	mmu_notifier_invalidate_range_start(&range);
 
-	while (page_vma_mapped_walk(&pvmw)) {
+	while (page_vma_mapped_walk(pvmw)) {
 		int ret = 0;
 
-		address = pvmw.address;
-		if (pvmw.pte) {
+		address = pvmw->address;
+		if (pvmw->pte) {
 			pte_t entry;
-			pte_t *pte = pvmw.pte;
+			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
@@ -964,7 +964,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			pmd_t *pmd = pvmw.pmd;
+			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;
 
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -991,11 +991,22 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 		 * See Documentation/vm/mmu_notifier.rst
 		 */
 		if (ret)
-			(*cleaned)++;
+			cleaned++;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
 
+	return cleaned;
+}
+
+static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
+			     unsigned long address, void *arg)
+{
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int *cleaned = arg;
+
+	*cleaned += page_vma_mkclean_one(&pvmw);
+
 	return true;
 }
 
@@ -1033,6 +1044,38 @@ int folio_mkclean(struct folio *folio)
 EXPORT_SYMBOL_GPL(folio_mkclean);
 
 /**
+ * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
+ *                     [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff)
+ *                     within the @vma of shared mappings. And since clean PTEs
+ *                     should also be readonly, write protects them too.
+ * @pfn: start pfn.
+ * @nr_pages: number of physically contiguous pages starting with @pfn.
+ * @pgoff: page offset that the @pfn mapped with.
+ * @vma: vma that @pfn mapped within.
+ *
+ * Returns the number of cleaned PTEs (including PMDs).
+ */
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.pfn		= pfn,
+		.nr_pages	= nr_pages,
+		.pgoff		= pgoff,
+		.vma		= vma,
+		.flags		= PVMW_SYNC,
+	};
+
+	if (invalid_mkclean_vma(vma, NULL))
+		return 0;
+
+	pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
+	VM_BUG_ON_VMA(pvmw.address == -EFAULT, vma);
+
+	return page_vma_mkclean_one(&pvmw);
+}
+
+/**
  * page_move_anon_rmap - move a page to our anon_vma
  * @page: the page to move to our anon_vma
  * @vma: the vma the page belongs to
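[As a usage illustration -- a sketch, not from the series. The pfn,
nr_pages and pgoff values are hypothetical, and a real caller must hold
the mapping's i_mmap lock while iterating vmas, as the DAX patch later
in this series does:]

	/* Sketch: clean one physically contiguous pfn range in one vma. */
	static int mkclean_sketch(struct vm_area_struct *vma)
	{
		unsigned long pfn = 0x100000;		/* hypothetical start pfn */
		unsigned long nr_pages = HPAGE_PMD_NR;	/* e.g. one PMD-sized entry */
		pgoff_t pgoff = 0;			/* file offset the pfn is mapped at */

		/* Returns how many PTEs/PMDs were cleaned and write-protected. */
		return pfn_mkclean_range(pfn, nr_pages, pgoff, vma);
	}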
From patchwork Fri Mar 18 07:45:27 2022
From: Muchun Song
Subject: [PATCH v5 4/6] mm: pvmw: add support for walking devmap pages
Date: Fri, 18 Mar 2022 15:45:27 +0800
Message-Id: <20220318074529.5261-5-songmuchun@bytedance.com>
In-Reply-To: <20220318074529.5261-1-songmuchun@bytedance.com>

page_vma_mapped_walk() currently cannot be used to check whether a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in the next patch.

Signed-off-by: Muchun Song
---
 mm/page_vma_mapped.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 1187f9c1ec5b..b3bf802a6435 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -210,16 +210,9 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 */
 		pmde = READ_ONCE(*pvmw->pmd);
 
-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+		if (pmd_leaf(pmde) || is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
-			if (likely(pmd_trans_huge(pmde))) {
-				if (pvmw->flags & PVMW_MIGRATION)
-					return not_found(pvmw);
-				if (!check_pmd(pmd_pfn(pmde), pvmw))
-					return not_found(pvmw);
-				return true;
-			}
 			if (!pmd_present(pmde)) {
 				swp_entry_t entry;
 
@@ -232,6 +225,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 					return not_found(pvmw);
 				return true;
 			}
+			if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) {
+				if (pvmw->flags & PVMW_MIGRATION)
+					return not_found(pvmw);
+				if (!check_pmd(pmd_pfn(pmde), pvmw))
+					return not_found(pvmw);
+				return true;
+			}
 			/* THP pmd was split under us: handle on pte level */
 			spin_unlock(pvmw->ptl);
 			pvmw->ptl = NULL;
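[An aside on why the check is widened, assuming the usual x86 semantics:
pmd_trans_huge() is deliberately false for huge device-mapped (DAX)
entries, while pmd_leaf() is true for any present huge leaf entry, so a
walker that wants to handle both kinds of huge mapping needs roughly:]

	/*
	 * Sketch: classifying a pmd value that was re-read under the pmd
	 * lock. On x86, pmd_trans_huge() excludes _PAGE_DEVMAP entries,
	 * hence the explicit pmd_devmap() test.
	 */
	static bool pmd_maps_huge_page(pmd_t pmde)
	{
		return pmd_trans_huge(pmde) || pmd_devmap(pmde);
	}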
From patchwork Fri Mar 18 07:45:28 2022
From: Muchun Song
Subject: [PATCH v5 5/6] dax: fix missing writeprotect of the pte entry
Date: Fri, 18 Mar 2022 15:45:28 +0800
Message-Id: <20220318074529.5261-6-songmuchun@bytedance.com>
In-Reply-To: <20220318074529.5261-1-songmuchun@bytedance.com>

Currently dax_mapping_entry_mkclean() fails to clean and write protect
the pte entry within a DAX PMD entry during an *sync operation. This
can result in data loss in the following sequence:

1) process A mmap writes to a DAX PMD, dirtying the PMD radix tree
   entry and making the pmd entry dirty and writeable.
2) process B mmaps with the @offset (e.g. 4K) and @length (e.g. 4K)
   and writes to the same file, dirtying the PMD radix tree entry
   (already done in 1)) and making the pte entry dirty and writeable.
3) fsync, flushing out PMD data and cleaning the radix tree entry.
   We currently fail to mark the pte entry as clean and write protected
   since the vma of process B is not covered by dax_entry_mkclean().
4) process B writes to the pte. The writes do not cause any page faults
   since the pte entry is dirty and writeable. The radix tree entry
   remains clean.
5) fsync, which fails to flush the dirty PMD data because the radix
   tree entry was clean.
6) crash - dirty data that should have been fsync'd as part of 5) could
   still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns to fix this issue.

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
Reviewed-by: Christoph Hellwig
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a16769f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -24,6 +24,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@ static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-			      unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+			      unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;
 
 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -937,7 +868,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
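[To restate the resulting writeback flow -- an illustrative summary, not
additional code from the series: for each dirty Xarray entry,
dax_writeback_one() computes the pfn range the entry covers and then
write-protects every overlapping vma in one pass, whether the range is
PTE- or PMD-mapped:]

	/* Sketch of the post-patch flow for one dirty DAX entry. */
	static void writeback_one_sketch(struct address_space *mapping,
					 unsigned long pfn, unsigned long count,
					 pgoff_t index)
	{
		/* Clean all user mappings of [pfn, pfn + count). */
		dax_entry_mkclean(mapping, pfn, count, index);
		/*
		 * ...then the caller flushes the data and clears the dirty
		 * tag, exactly as dax_writeback_one() does.
		 */
	}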
From patchwork Fri Mar 18 07:45:29 2022
From: Muchun Song
Subject: [PATCH v5 6/6] mm: simplify follow_invalidate_pte()
Date: Fri, 18 Mar 2022 15:45:29 +0800
Message-Id: <20220318074529.5261-7-songmuchun@bytedance.com>
In-Reply-To: <20220318074529.5261-1-songmuchun@bytedance.com>

The only user (DAX) of the range and pmdpp parameters of
follow_invalidate_pte() is gone, so it is safe to remove them and
simplify the code. This partially reverts the following commits:

  097963959594 ("mm: add follow_pte_pmd()")
  a4d1a8852513 ("dax: update to new mmu_notifier semantic")

That leaves follow_invalidate_pte() with only one caller, so just fold
it into follow_pte() and remove it.
Signed-off-by: Muchun Song
Reviewed-by: Christoph Hellwig
---
 include/linux/mm.h |  3 --
 mm/memory.c        | 81 ++++++++++++++++--------------------------------------
 2 files changed, 23 insertions(+), 61 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..be7ec4c37ebe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1871,9 +1871,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index cc6968dc8e4e..84f7250e6cd1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4964,9 +4964,29 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp)
+/**
+ * follow_pte - look up PTE at a user virtual address
+ * @mm: the mm_struct of the target address space
+ * @address: user virtual address
+ * @ptepp: location to store found PTE
+ * @ptlp: location to store the lock for the PTE
+ *
+ * On a successful return, the pointer to the PTE is stored in @ptepp;
+ * the corresponding lock is taken and its location is stored in @ptlp.
+ * The contents of the PTE are only stable until @ptlp is released;
+ * any further use, if any, must be protected against invalidation
+ * with MMU notifiers.
+ *
+ * Only IO mappings and raw PFN mappings are allowed.  The mmap semaphore
+ * should be taken for read.
+ *
+ * KVM uses this function.  While it is arguably less bad than ``follow_pfn``,
+ * it is not a good general-purpose API.
+ *
+ * Return: zero on success, -ve otherwise.
+ */
+int follow_pte(struct mm_struct *mm, unsigned long address,
+	       pte_t **ptepp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4989,35 +5009,9 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	pmd = pmd_offset(pud, address);
 	VM_BUG_ON(pmd_trans_huge(*pmd));
 
-	if (pmd_huge(*pmd)) {
-		if (!pmdpp)
-			goto out;
-
-		if (range) {
-			mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-						NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-			mmu_notifier_invalidate_range_start(range);
-		}
-		*ptlp = pmd_lock(mm, pmd);
-		if (pmd_huge(*pmd)) {
-			*pmdpp = pmd;
-			return 0;
-		}
-		spin_unlock(*ptlp);
-		if (range)
-			mmu_notifier_invalidate_range_end(range);
-	}
-
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
-	if (range) {
-		mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-					address & PAGE_MASK,
-					(address & PAGE_MASK) + PAGE_SIZE);
-		mmu_notifier_invalidate_range_start(range);
-	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
 		goto unlock;
@@ -5025,38 +5019,9 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (range)
-		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
-
-/**
- * follow_pte - look up PTE at a user virtual address
- * @mm: the mm_struct of the target address space
- * @address: user virtual address
- * @ptepp: location to store found PTE
- * @ptlp: location to store the lock for the PTE
- *
- * On a successful return, the pointer to the PTE is stored in @ptepp;
- * the corresponding lock is taken and its location is stored in @ptlp.
- * The contents of the PTE are only stable until @ptlp is released;
- * any further use, if any, must be protected against invalidation
- * with MMU notifiers.
- *
- * Only IO mappings and raw PFN mappings are allowed.  The mmap semaphore
- * should be taken for read.
- *
- * KVM uses this function.  While it is arguably less bad than ``follow_pfn``,
- * it is not a good general-purpose API.
- *
- * Return: zero on success, -ve otherwise.
- */
-int follow_pte(struct mm_struct *mm, unsigned long address,
-	       pte_t **ptepp, spinlock_t **ptlp)
-{
-	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
-}
 EXPORT_SYMBOL_GPL(follow_pte);
 
 /**
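[For context on the consolidated API -- a sketch under the locking rules
stated in the kernel-doc above, not part of the patch: a caller of
follow_pte() must hold mmap_lock for read and may only rely on the PTE
while the returned lock is held:]

	/* Sketch: look up the pfn backing an IO/PFN mapping. */
	static int lookup_pfn_sketch(struct mm_struct *mm, unsigned long address,
				     unsigned long *pfn)
	{
		pte_t *ptep;
		spinlock_t *ptl;
		int ret;

		ret = follow_pte(mm, address, &ptep, &ptl);
		if (ret)
			return ret;

		*pfn = pte_pfn(*ptep);	/* stable only while ptl is held */
		pte_unmap_unlock(ptep, ptl);
		return 0;
	}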