
[v12,2/7] mm: factor helpers for memory_failure_dev_pagemap

Message ID 20220410160904.3758789-3-ruansy.fnst@fujitsu.com (mailing list archive)
State New, archived
Series fsdax: introduce fs query to support reflink

Commit Message

Shiyang Ruan April 10, 2022, 4:08 p.m. UTC
The memory_failure_dev_pagemap() code is a bit complex before the RMAP
feature for fsdax is introduced.  So, factor out some helper functions to
simplify this code.

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
---
 mm/memory-failure.c | 157 ++++++++++++++++++++++++--------------------
 1 file changed, 87 insertions(+), 70 deletions(-)
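
For context, a condensed sketch of the call structure this factoring produces,
based on the diff at the bottom of this page (the trailing "return rc" is
assumed from the unchanged context not visible in the hunk):

        static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
        		struct dev_pagemap *pgmap)
        {
        	struct page *page = pfn_to_page(pfn);
        	int rc = -ENXIO;

        	if (flags & MF_COUNT_INCREASED)
        		put_page(page);

        	/* device metadata space is not recoverable */
        	if (!pgmap_pfn_valid(pgmap, pfn))
        		goto out;

        	/* the pgmap-specific unmap-and-kill work now lives in the helper */
        	rc = mf_generic_kill_procs(pfn, flags, pgmap);
        out:
        	/* drop pgmap ref acquired in caller */
        	put_dev_pagemap(pgmap);
        	return rc;
        }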

Comments

kernel test robot April 10, 2022, 7:48 p.m. UTC | #1
Hi Shiyang,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on hnaz-mm/master]
[also build test WARNING on next-20220408]
[cannot apply to xfs-linux/for-next linus/master linux/master v5.18-rc1]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting the patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Shiyang-Ruan/fsdax-introduce-fs-query-to-support-reflink/20220411-001048
base:   https://github.com/hnaz/linux-mm master
config: arm64-randconfig-r021-20220410 (https://download.01.org/0day-ci/archive/20220411/202204110348.fupyvJK7-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 256c6b0ba14e8a7ab6373b61b7193ea8c0a3651c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm64 cross compiling tool for clang build
        # apt-get install binutils-aarch64-linux-gnu
        # https://github.com/intel-lab-lkp/linux/commit/9ab00d3f6d4d9d3d2e4446480567af17c8726bd2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Shiyang-Ruan/fsdax-introduce-fs-query-to-support-reflink/20220411-001048
        git checkout 9ab00d3f6d4d9d3d2e4446480567af17c8726bd2
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=arm64 SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> mm/memory-failure.c:1533:6: warning: variable 'rc' set but not used [-Wunused-but-set-variable]
           int rc = 0;
               ^
   1 warning generated.


vim +/rc +1533 mm/memory-failure.c

  1526	
  1527	static int mf_generic_kill_procs(unsigned long long pfn, int flags,
  1528			struct dev_pagemap *pgmap)
  1529	{
  1530		struct page *page = pfn_to_page(pfn);
  1531		LIST_HEAD(to_kill);
  1532		dax_entry_t cookie;
> 1533		int rc = 0;
  1534	
  1535		/*
  1536		 * Pages instantiated by device-dax (not filesystem-dax)
  1537		 * may be compound pages.
  1538		 */
  1539		page = compound_head(page);
  1540	
  1541		/*
  1542		 * Prevent the inode from being freed while we are interrogating
  1543		 * the address_space, typically this would be handled by
  1544		 * lock_page(), but dax pages do not use the page lock. This
  1545		 * also prevents changes to the mapping of this pfn until
  1546		 * poison signaling is complete.
  1547		 */
  1548		cookie = dax_lock_page(page);
  1549		if (!cookie)
  1550			return -EBUSY;
  1551	
  1552		if (hwpoison_filter(page)) {
  1553			rc = -EOPNOTSUPP;
  1554			goto unlock;
  1555		}
  1556	
  1557		if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
  1558			/*
  1559			 * TODO: Handle HMM pages which may need coordination
  1560			 * with device-side memory.
  1561			 */
  1562			return -EBUSY;
  1563		}
  1564	
  1565		/*
  1566		 * Use this flag as an indication that the dax page has been
  1567		 * remapped UC to prevent speculative consumption of poison.
  1568		 */
  1569		SetPageHWPoison(page);
  1570	
  1571		/*
  1572		 * Unlike System-RAM there is no possibility to swap in a
  1573		 * different physical page at a given virtual address, so all
  1574		 * userspace consumption of ZONE_DEVICE memory necessitates
  1575		 * SIGBUS (i.e. MF_MUST_KILL)
  1576		 */
  1577		flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;
  1578		collect_procs(page, &to_kill, true);
  1579	
  1580		unmap_and_kill(&to_kill, pfn, page->mapping, page->index, flags);
  1581	unlock:
  1582		dax_unlock_page(page, cookie);
  1583		return 0;
  1584	}
  1585
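
The warning is easy to see in isolation: 'rc' is assigned (initialised to 0,
and set to -EOPNOTSUPP when hwpoison_filter() matches) but never read, because
the function only returns literal values. A hypothetical standalone snippet
showing the pattern both compilers flag under W=1 (-Wunused-but-set-variable);
the names here are illustrative, not kernel code:

        static int check(int poisoned)
        {
        	int rc = 0;		/* set here ...                         */

        	if (poisoned)
        		rc = -95;	/* ... and here (-EOPNOTSUPP is 95) ... */

        	return 0;		/* ... but never read back, hence the warning */
        }
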
kernel test robot April 10, 2022, 8:19 p.m. UTC | #2
Hi Shiyang,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on hnaz-mm/master]
[also build test WARNING on next-20220408]
[cannot apply to xfs-linux/for-next linus/master linux/master v5.18-rc1]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting the patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Shiyang-Ruan/fsdax-introduce-fs-query-to-support-reflink/20220411-001048
base:   https://github.com/hnaz/linux-mm master
config: x86_64-randconfig-a011 (https://download.01.org/0day-ci/archive/20220411/202204110420.O844CZYb-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.2.0-19) 11.2.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/9ab00d3f6d4d9d3d2e4446480567af17c8726bd2
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Shiyang-Ruan/fsdax-introduce-fs-query-to-support-reflink/20220411-001048
        git checkout 9ab00d3f6d4d9d3d2e4446480567af17c8726bd2
        # save the config file to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/memory-failure.c: In function 'mf_generic_kill_procs':
>> mm/memory-failure.c:1533:13: warning: variable 'rc' set but not used [-Wunused-but-set-variable]
    1533 |         int rc = 0;
         |             ^~


vim +/rc +1533 mm/memory-failure.c

  1526	
  1527	static int mf_generic_kill_procs(unsigned long long pfn, int flags,
  1528			struct dev_pagemap *pgmap)
  1529	{
  1530		struct page *page = pfn_to_page(pfn);
  1531		LIST_HEAD(to_kill);
  1532		dax_entry_t cookie;
> 1533		int rc = 0;
  1534	
  1535		/*
  1536		 * Pages instantiated by device-dax (not filesystem-dax)
  1537		 * may be compound pages.
  1538		 */
  1539		page = compound_head(page);
  1540	
  1541		/*
  1542		 * Prevent the inode from being freed while we are interrogating
  1543		 * the address_space, typically this would be handled by
  1544		 * lock_page(), but dax pages do not use the page lock. This
  1545		 * also prevents changes to the mapping of this pfn until
  1546		 * poison signaling is complete.
  1547		 */
  1548		cookie = dax_lock_page(page);
  1549		if (!cookie)
  1550			return -EBUSY;
  1551	
  1552		if (hwpoison_filter(page)) {
  1553			rc = -EOPNOTSUPP;
  1554			goto unlock;
  1555		}
  1556	
  1557		if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
  1558			/*
  1559			 * TODO: Handle HMM pages which may need coordination
  1560			 * with device-side memory.
  1561			 */
  1562			return -EBUSY;
  1563		}
  1564	
  1565		/*
  1566		 * Use this flag as an indication that the dax page has been
  1567		 * remapped UC to prevent speculative consumption of poison.
  1568		 */
  1569		SetPageHWPoison(page);
  1570	
  1571		/*
  1572		 * Unlike System-RAM there is no possibility to swap in a
  1573		 * different physical page at a given virtual address, so all
  1574		 * userspace consumption of ZONE_DEVICE memory necessitates
  1575		 * SIGBUS (i.e. MF_MUST_KILL)
  1576		 */
  1577		flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;
  1578		collect_procs(page, &to_kill, true);
  1579	
  1580		unmap_and_kill(&to_kill, pfn, page->mapping, page->index, flags);
  1581	unlock:
  1582		dax_unlock_page(page, cookie);
  1583		return 0;
  1584	}
  1585
Christoph Hellwig April 11, 2022, 6:37 a.m. UTC | #3
> +	unmap_and_kill(&to_kill, pfn, page->mapping, page->index, flags);
> +unlock:
> +	dax_unlock_page(page, cookie);
> +	return 0;

As the buildbot points out this should probably be a "return rc".
Shiyang Ruan April 11, 2022, 9:39 a.m. UTC | #4
On 2022/4/11 14:37, Christoph Hellwig wrote:
>> +	unmap_and_kill(&to_kill, pfn, page->mapping, page->index, flags);
>> +unlock:
>> +	dax_unlock_page(page, cookie);
>> +	return 0;
> 
> As the buildbot points out this should probably be a "return rc".

Yes, my mistake, introduced when resolving the conflict with the latest code.


--
Thanks,
Ruan
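
For reference, a minimal sketch of the tail of mf_generic_kill_procs() with
Christoph's suggested correction applied; everything above the
unmap_and_kill() call stays as in the patch below:

        	unmap_and_kill(&to_kill, pfn, page->mapping, page->index, flags);
        unlock:
        	dax_unlock_page(page, cookie);
        	return rc;	/* was "return 0": propagate -EOPNOTSUPP from hwpoison_filter() */
        }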

Patch

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index ef402b490663..f1cdd39f01f7 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1499,6 +1499,90 @@  static int try_to_split_thp_page(struct page *page, const char *msg)
 	return 0;
 }
 
+static void unmap_and_kill(struct list_head *to_kill, unsigned long pfn,
+		struct address_space *mapping, pgoff_t index, int flags)
+{
+	struct to_kill *tk;
+	unsigned long size = 0;
+
+	list_for_each_entry(tk, to_kill, nd)
+		if (tk->size_shift)
+			size = max(size, 1UL << tk->size_shift);
+
+	if (size) {
+		/*
+		 * Unmap the largest mapping to avoid breaking up device-dax
+		 * mappings which are constant size. The actual size of the
+		 * mapping being torn down is communicated in siginfo, see
+		 * kill_proc()
+		 */
+		loff_t start = (index << PAGE_SHIFT) & ~(size - 1);
+
+		unmap_mapping_range(mapping, start, size, 0);
+	}
+
+	kill_procs(to_kill, flags & MF_MUST_KILL, false, pfn, flags);
+}
+
+static int mf_generic_kill_procs(unsigned long long pfn, int flags,
+		struct dev_pagemap *pgmap)
+{
+	struct page *page = pfn_to_page(pfn);
+	LIST_HEAD(to_kill);
+	dax_entry_t cookie;
+	int rc = 0;
+
+	/*
+	 * Pages instantiated by device-dax (not filesystem-dax)
+	 * may be compound pages.
+	 */
+	page = compound_head(page);
+
+	/*
+	 * Prevent the inode from being freed while we are interrogating
+	 * the address_space, typically this would be handled by
+	 * lock_page(), but dax pages do not use the page lock. This
+	 * also prevents changes to the mapping of this pfn until
+	 * poison signaling is complete.
+	 */
+	cookie = dax_lock_page(page);
+	if (!cookie)
+		return -EBUSY;
+
+	if (hwpoison_filter(page)) {
+		rc = -EOPNOTSUPP;
+		goto unlock;
+	}
+
+	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
+		/*
+		 * TODO: Handle HMM pages which may need coordination
+		 * with device-side memory.
+		 */
+		return -EBUSY;
+	}
+
+	/*
+	 * Use this flag as an indication that the dax page has been
+	 * remapped UC to prevent speculative consumption of poison.
+	 */
+	SetPageHWPoison(page);
+
+	/*
+	 * Unlike System-RAM there is no possibility to swap in a
+	 * different physical page at a given virtual address, so all
+	 * userspace consumption of ZONE_DEVICE memory necessitates
+	 * SIGBUS (i.e. MF_MUST_KILL)
+	 */
+	flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;
+	collect_procs(page, &to_kill, true);
+
+	unmap_and_kill(&to_kill, pfn, page->mapping, page->index, flags);
+unlock:
+	dax_unlock_page(page, cookie);
+	return 0;
+}
+
 /*
  * Called from hugetlb code with hugetlb_lock held.
  * If a hugepage is successfully grabbed (so it's determined to handle
@@ -1663,12 +1747,8 @@  static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
 		struct dev_pagemap *pgmap)
 {
 	struct page *page = pfn_to_page(pfn);
-	unsigned long size = 0;
-	struct to_kill *tk;
 	LIST_HEAD(tokill);
-	int rc = -EBUSY;
-	loff_t start;
-	dax_entry_t cookie;
+	int rc = -ENXIO;
 
 	if (flags & MF_COUNT_INCREASED)
 		/*
@@ -1677,73 +1757,10 @@  static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
 		put_page(page);
 
 	/* device metadata space is not recoverable */
-	if (!pgmap_pfn_valid(pgmap, pfn)) {
-		rc = -ENXIO;
-		goto out;
-	}
-
-	/*
-	 * Pages instantiated by device-dax (not filesystem-dax)
-	 * may be compound pages.
-	 */
-	page = compound_head(page);
-
-	/*
-	 * Prevent the inode from being freed while we are interrogating
-	 * the address_space, typically this would be handled by
-	 * lock_page(), but dax pages do not use the page lock. This
-	 * also prevents changes to the mapping of this pfn until
-	 * poison signaling is complete.
-	 */
-	cookie = dax_lock_page(page);
-	if (!cookie)
+	if (!pgmap_pfn_valid(pgmap, pfn))
 		goto out;
 
-	if (hwpoison_filter(page)) {
-		rc = -EOPNOTSUPP;
-		goto unlock;
-	}
-
-	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
-		/*
-		 * TODO: Handle HMM pages which may need coordination
-		 * with device-side memory.
-		 */
-		goto unlock;
-	}
-
-	/*
-	 * Use this flag as an indication that the dax page has been
-	 * remapped UC to prevent speculative consumption of poison.
-	 */
-	SetPageHWPoison(page);
-
-	/*
-	 * Unlike System-RAM there is no possibility to swap in a
-	 * different physical page at a given virtual address, so all
-	 * userspace consumption of ZONE_DEVICE memory necessitates
-	 * SIGBUS (i.e. MF_MUST_KILL)
-	 */
-	flags |= MF_ACTION_REQUIRED | MF_MUST_KILL;
-	collect_procs(page, &tokill, true);
-
-	list_for_each_entry(tk, &tokill, nd)
-		if (tk->size_shift)
-			size = max(size, 1UL << tk->size_shift);
-	if (size) {
-		/*
-		 * Unmap the largest mapping to avoid breaking up
-		 * device-dax mappings which are constant size. The
-		 * actual size of the mapping being torn down is
-		 * communicated in siginfo, see kill_proc()
-		 */
-		start = (page->index << PAGE_SHIFT) & ~(size - 1);
-		unmap_mapping_range(page->mapping, start, size, 0);
-	}
-	kill_procs(&tokill, true, false, pfn, flags);
-	rc = 0;
-unlock:
-	dax_unlock_page(page, cookie);
+	rc = mf_generic_kill_procs(pfn, flags, pgmap);
 out:
 	/* drop pgmap ref acquired in caller */
 	put_dev_pagemap(pgmap);