From patchwork Wed Mar 2 08:27:13 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12765609
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com, shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com, xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v4 1/6] mm: rmap: fix cache flush on
 THP pages
Date: Wed, 2 Mar 2022 16:27:13 +0800
Message-Id: <20220302082718.32268-2-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>
References: <20220302082718.32268-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache.
However, a THP consists of multiple base pages, so flushing only the head
page leaves the rest of the THP uncovered. Replace it with
flush_cache_range() to fix this issue. At least, no problems have been
observed from this so far, perhaps because few architectures have
virtually indexed caches.
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
Reviewed-by: Dan Williams
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..723682ddb9e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, folio_pfn(folio));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);

From patchwork Wed Mar 2 08:27:14 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12765610
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com, shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com, xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages
Date: Wed, 2 Mar 2022 16:27:14 +0800
Message-Id: <20220302082718.32268-3-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>
References: <20220302082718.32268-1-songmuchun@bytedance.com>

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache.
However, a PMD-mapped THP consists of multiple base pages, so flushing only
the head page leaves the rest of the THP uncovered. Replace it with
flush_cache_range() to fix this issue.
Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
Reviewed-by: Dan Williams
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 		if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 			goto unlock_pmd;
 
-		flush_cache_page(vma, address, pfn);
+		flush_cache_range(vma, address,
+				  address + HPAGE_PMD_SIZE);
 		pmd = pmdp_invalidate(vma, address, pmdp);
 		pmd = pmd_wrprotect(pmd);
 		pmd = pmd_mkclean(pmd);

From patchwork Wed Mar 2 08:27:15 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12765611
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com, shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com, xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to cleans PTEs
Date: Wed, 2 Mar 2022 16:27:15 +0800
Message-Id: <20220302082718.32268-4-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>
References: <20220302082718.32268-1-songmuchun@bytedance.com>

page_mkclean_one() is supposed to be used with a pfn that has an associated
struct page, but not all pfns (e.g. DAX) have one. Introduce a new function,
pfn_mkclean_range(), which cleans the PTEs (including PMDs) mapped within a
range of pfns that have no struct page associated with them. This helper
will be used by the DAX device in the next patch to make pfns clean.
Signed-off-by: Muchun Song
---
 include/linux/rmap.h |  3 +++
 mm/internal.h        | 26 +++++++++++++--------
 mm/rmap.c            | 65 +++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 74 insertions(+), 20 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b58ddb8b2220..a6ec0d3e40c1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -263,6 +263,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  */
 int folio_mkclean(struct folio *);
 
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma);
+
 void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
 
diff --git a/mm/internal.h b/mm/internal.h
index f45292dc4ef5..ff873944749f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -516,26 +516,22 @@ void mlock_page_drain(int cpu);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+		  struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page);
 	if (pgoff >= vma->vm_pgoff) {
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		/* Check for address beyond vma (or wrapped through 0?) */
 		if (address < vma->vm_start || address >= vma->vm_end)
 			address = -EFAULT;
-	} else if (PageHead(page) &&
-		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
 		/* Test above avoids possibility of wrap to 0 on 32-bit */
 		address = vma->vm_start;
 	} else {
@@ -545,6 +541,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 723682ddb9e8..ad5cf0e45a73 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -929,12 +929,12 @@ int folio_referenced(struct folio *folio, int is_locked,
 	return pra.referenced;
 }
 
-static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
-			     unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int cleaned = 0;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct mmu_notifier_range range;
-	int *cleaned = arg;
+	unsigned long address = pvmw->address;
 
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
@@ -942,16 +942,16 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0, vma,
 				vma->vm_mm, address,
-				vma_address_end(&pvmw));
+				vma_address_end(pvmw));
 	mmu_notifier_invalidate_range_start(&range);
 
-	while (page_vma_mapped_walk(&pvmw)) {
+	while (page_vma_mapped_walk(pvmw)) {
 		int ret = 0;
 
-		address = pvmw.address;
-		if (pvmw.pte) {
+		address = pvmw->address;
+		if (pvmw->pte) {
 			pte_t entry;
-			pte_t *pte = pvmw.pte;
+			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
@@ -964,7 +964,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			pmd_t *pmd = pvmw.pmd;
+			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;
 
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -991,11 +991,22 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 		 * See Documentation/vm/mmu_notifier.rst
 		 */
 		if (ret)
-			(*cleaned)++;
+			cleaned++;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
 
+	return cleaned;
+}
+
+static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
+			     unsigned long address, void *arg)
+{
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int *cleaned = arg;
+
+	*cleaned += page_vma_mkclean_one(&pvmw);
+
 	return true;
 }
 
@@ -1033,6 +1044,38 @@ int folio_mkclean(struct folio *folio)
 EXPORT_SYMBOL_GPL(folio_mkclean);
 
 /**
+ * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
+ *                     [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff)
+ *                     within the @vma of shared mappings. And since clean PTEs
+ *                     should also be readonly, write protects them too.
+ * @pfn: start pfn.
+ * @nr_pages: number of physically contiguous pages starting with @pfn.
+ * @pgoff: page offset that the @pfn mapped with.
+ * @vma: vma that @pfn mapped within.
+ *
+ * Returns the number of cleaned PTEs (including PMDs).
+ */
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.pfn		= pfn,
+		.nr_pages	= nr_pages,
+		.pgoff		= pgoff,
+		.vma		= vma,
+		.flags		= PVMW_SYNC,
+	};
+
+	if (invalid_mkclean_vma(vma, NULL))
+		return 0;
+
+	pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
+	VM_BUG_ON_VMA(pvmw.address == -EFAULT, vma);
+
+	return page_vma_mkclean_one(&pvmw);
+}
+
+/**
 * page_move_anon_rmap - move a page to our anon_vma
 * @page: the page to move to our anon_vma
 * @vma: the vma the page belongs to

From patchwork Wed Mar 2 08:27:16 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12765612
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com, shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com, xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v4 4/6] mm: pvmw: add support for walking devmap pages
Date: Wed, 2 Mar 2022 16:27:16 +0800
Message-Id: <20220302082718.32268-5-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>
References: <20220302082718.32268-1-songmuchun@bytedance.com>

Currently, page_vma_mapped_walk() cannot be used to check whether a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in the next patch.
Signed-off-by: Muchun Song
---
 mm/page_vma_mapped.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 1187f9c1ec5b..f9ffa84adf4d 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -210,10 +210,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 */
 		pmde = READ_ONCE(*pvmw->pmd);
 
-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+		if (pmd_trans_huge(pmde) || pmd_devmap(pmde) ||
+		    is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
-			if (likely(pmd_trans_huge(pmde))) {
+			if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) {
 				if (pvmw->flags & PVMW_MIGRATION)
 					return not_found(pvmw);
 				if (!check_pmd(pmd_pfn(pmde), pvmw))

From patchwork Wed Mar 2 08:27:17 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12765613
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com, shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com, xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v4 5/6] dax: fix missing writeprotect the pte entry
Date: Wed, 2 Mar 2022 16:27:17 +0800
Message-Id: <20220302082718.32268-6-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>
References: <20220302082718.32268-1-songmuchun@bytedance.com>

Currently dax_mapping_entry_mkclean() fails to clean and write protect
the pte entry within a DAX PMD entry during an *sync operation.
This can result in data loss in the following sequence:

1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
   making the pmd entry dirty and writeable.
2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
   write to the same file, dirtying PMD radix tree entry (already
   done in 1)) and making the pte entry dirty and writeable.
3) fsync, flushing out PMD data and cleaning the radix tree entry. We
   currently fail to mark the pte entry as clean and write protected
   since the vma of process B is not covered in dax_entry_mkclean().
4) process B writes to the pte. This does not cause any page faults
   since the pte entry is dirty and writeable. The radix tree entry
   remains clean.
5) fsync, which fails to flush the dirty PMD data because the radix tree
   entry was clean.
6) crash - dirty data that should have been fsync'd as part of 5) could
   still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns to fix this issue.

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a16769f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -24,6 +24,7 @@
 #include
 #include
 #include
+#include
 #include

 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@ static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }

-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-			      unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+			      unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;

 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -937,7 +868,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);

-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag.  There

From patchwork Wed Mar 2 08:27:18 2022
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 12765614
From: Muchun Song
To: dan.j.williams@intel.com, willy@infradead.org, jack@suse.cz, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, apopple@nvidia.com, shy828301@gmail.com, rcampbell@nvidia.com, hughd@google.com, xiyuyang19@fudan.edu.cn, kirill.shutemov@linux.intel.com, zwisler@kernel.org, hch@infradead.org
Cc: linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song
Subject: [PATCH v4 6/6] mm: remove range parameter from follow_invalidate_pte()
Date: Wed, 2 Mar 2022 16:27:18 +0800
Message-Id: <20220302082718.32268-7-songmuchun@bytedance.com>
In-Reply-To: <20220302082718.32268-1-songmuchun@bytedance.com>
References: <20220302082718.32268-1-songmuchun@bytedance.com>

The only user (DAX) of the range parameter of follow_invalidate_pte() is
gone, so it is safe to remove the range parameter and make the function
static to simplify the code.
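As an aside, the shape this patch leaves behind — a file-local helper with optional out-parameters, plus a thin exported wrapper that passes NULL for the ones its callers no longer need — can be sketched in plain userspace C. This is only an illustrative analogy, not kernel code; the names lookup_entry() and lookup_pte_only() are hypothetical stand-ins for follow_invalidate_pte() and follow_pte().

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical analogy of the pattern kept by this patch: a static
 * helper with an optional out-parameter (huge_entry, which callers
 * may pass as NULL), mirroring how follow_pte() now calls the static
 * follow_invalidate_pte() with pmdpp == NULL.
 */
static int lookup_entry(const int *table, size_t len, size_t idx,
			const int **entry, const int **huge_entry)
{
	if (idx >= len)
		return -1;		/* analogous to returning -EINVAL */
	if (huge_entry)
		*huge_entry = NULL;	/* optional out-parameter */
	*entry = &table[idx];
	return 0;
}

/* Thin public wrapper: drops the out-parameter its callers never used. */
int lookup_pte_only(const int *table, size_t len, size_t idx,
		    const int **entry)
{
	return lookup_entry(table, len, idx, entry, NULL);
}
```

The wrapper keeps the exported surface minimal while the general helper stays private to the translation unit, which is exactly what making follow_invalidate_pte() static buys here.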
Signed-off-by: Muchun Song
Reviewed-by: Dan Williams
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..be7ec4c37ebe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1871,9 +1871,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int copy_page_range(struct vm_area_struct *dst_vma,
 		    struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index cc6968dc8e4e..278ab6d62b54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4964,9 +4964,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */

-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+				 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4993,31 +4992,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 		if (!pmdpp)
 			goto out;

-		if (range) {
-			mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-						NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-			mmu_notifier_invalidate_range_start(range);
-		}
 		*ptlp = pmd_lock(mm, pmd);
 		if (pmd_huge(*pmd)) {
 			*pmdpp = pmd;
 			return 0;
 		}
 		spin_unlock(*ptlp);
-		if (range)
-			mmu_notifier_invalidate_range_end(range);
 	}

 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;

-	if (range) {
-		mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-					address & PAGE_MASK,
-					(address & PAGE_MASK) + PAGE_SIZE);
-		mmu_notifier_invalidate_range_start(range);
-	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
 		goto unlock;
@@ -5025,8 +5010,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (range)
-		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
@@ -5055,7 +5038,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp)
 {
-	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+	return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
}
 EXPORT_SYMBOL_GPL(follow_pte);
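For readers following patch 5/6 above: the (pfn, count, index) triple that dax_writeback_one() now hands to dax_entry_mkclean() comes from simple power-of-two alignment arithmetic. The sketch below reproduces just that arithmetic in standalone C; the function names entry_pages() and entry_start() are illustrative, and the PMD comment assumes 4K base pages.

```c
#include <assert.h>

/*
 * Standalone sketch of the index arithmetic in dax_writeback_one():
 * an order-N entry covers 2^N pages, and its starting page offset is
 * xa_index rounded down to the entry boundary, so the cleaned range
 * is [index, index + count - 1].
 */
static unsigned long entry_pages(unsigned int order)
{
	return 1UL << order;		/* count = 1UL << dax_entry_order(entry) */
}

static unsigned long entry_start(unsigned long xa_index, unsigned int order)
{
	/* index = xas->xa_index & ~(count - 1) */
	return xa_index & ~(entry_pages(order) - 1);
}
```

Because count is a power of two, masking with ~(count - 1) clears the low bits of xa_index, which is why any index inside a PMD entry maps back to the same aligned start that the interval-tree walk in dax_entry_mkclean() uses.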