
[4/6] dm,dax,pmem: prepare dax_copy_to/from_iter() APIs with DAXDEV_F_RECOVERY

Message ID 20211021001059.438843-5-jane.chu@oracle.com (mailing list archive)
State New, archived
Series dax poison recovery with RWF_RECOVERY_DATA flag

Commit Message

Jane Chu Oct. 21, 2021, 12:10 a.m. UTC
Prepare the dax_copy_to/from_iter() APIs to take a DAXDEV_F_RECOVERY flag
such that, when the flag is set, the underlying driver implementation
of the APIs may deal with potential poison in a given address
range and read partial data, or write after clearing poison.

Signed-off-by: Jane Chu <jane.chu@oracle.com>
---
 drivers/dax/super.c           | 10 ++++++----
 drivers/md/dm-linear.c        |  8 ++++----
 drivers/md/dm-log-writes.c    | 12 ++++++------
 drivers/md/dm-stripe.c        |  8 ++++----
 drivers/md/dm.c               |  8 ++++----
 drivers/nvdimm/pmem.c         |  4 ++--
 drivers/s390/block/dcssblk.c  |  6 ++++--
 fs/dax.c                      |  4 ++--
 fs/fuse/virtio_fs.c           |  8 ++++----
 include/linux/dax.h           |  8 ++++----
 include/linux/device-mapper.h |  2 +-
 11 files changed, 41 insertions(+), 37 deletions(-)
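
For illustration of what a driver could eventually do with the new argument, here is a minimal, hypothetical sketch of a pmem copy_from_iter() implementation that honors DAXDEV_F_RECOVERY. The poison-clearing helper and its name are assumptions made for this sketch only; the patch below merely threads the flag through and leaves every driver's behavior unchanged.

static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
		void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
{
	/*
	 * Hypothetical follow-on behavior: when the caller requested
	 * recovery, clear any known poison in the destination range
	 * before writing.  pmem_try_clear_poison() is an illustrative
	 * name, not an existing helper.
	 */
	if (flags & DAXDEV_F_RECOVERY)
		pmem_try_clear_poison(dax_dev, pgoff, bytes);

	return _copy_from_iter_flushcache(addr, bytes, i);
}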

Comments

Christoph Hellwig Oct. 21, 2021, 11:27 a.m. UTC | #1
On Wed, Oct 20, 2021 at 06:10:57PM -0600, Jane Chu wrote:
> Prepare the dax_copy_to/from_iter() APIs to take a DAXDEV_F_RECOVERY flag
> such that, when the flag is set, the underlying driver implementation
> of the APIs may deal with potential poison in a given address
> range and read partial data, or write after clearing poison.

FYI, I've been wondering for a while if we could just kill off these
methods entirely.  Basically the driver interaction consists of two
parts:

 a) whether to use the flushcache/mcsafe variants of the generic helpers
 b) actually doing remapping for device mapper

to me it seems like we should handle a) with flags in dax_operations,
and only have a remap callback for device mapper.  That way we'd avoid
the indirect calls for the native case, and also avoid tons of
boilerplate code.  "futher decouple DAX from block devices" series
already massages the device mapper into a form suitable for such
callbacks.
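
A rough sketch of the direction described above, with purely illustrative names (the capability flag, the ->remap callback, and the dax_dev fields shown here do not exist in the tree): capability flags select the generic copy helpers without an indirect call on the native path, and only device-mapper keeps a callback to redirect the offset onto the backing device.

static size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
		void *addr, size_t bytes, struct iov_iter *i)
{
	/* device-mapper only: redirect to the backing dax_device */
	if (dax_dev->ops && dax_dev->ops->remap)
		dax_dev = dax_dev->ops->remap(dax_dev, &pgoff);

	/* a capability flag replaces the per-driver copy method */
	if (dax_dev->flags & DAXDEV_COPY_FLUSHCACHE)	/* illustrative name */
		return _copy_from_iter_flushcache(addr, bytes, i);
	return copy_from_iter(addr, bytes, i);
}
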
Jane Chu Oct. 22, 2021, 12:49 a.m. UTC | #2
On 10/21/2021 4:27 AM, Christoph Hellwig wrote:
> On Wed, Oct 20, 2021 at 06:10:57PM -0600, Jane Chu wrote:
>> Prepare the dax_copy_to/from_iter() APIs to take a DAXDEV_F_RECOVERY flag
>> such that, when the flag is set, the underlying driver implementation
>> of the APIs may deal with potential poison in a given address
>> range and read partial data, or write after clearing poison.
> 
> FYI, I've been wondering for a while if we could just kill off these
> methods entirely.  Basically the driver interaction consists of two
> parts:
> 
>   a) whether to use the flushcache/mcsafe variants of the generic helpers
>   b) actually doing remapping for device mapper
> 
> to me it seems like we should handle a) with flags in dax_operations,
> and only have a remap callback for device mapper.  That way we'd avoid
> the indirect calls for the native case, and also avoid tons of
> boilerplate code.  "futher decouple DAX from block devices" series
> already massages the device mapper into a form suitable for such
> callbacks.
> 

I've looked through your "futher decouple DAX from block devices" series 
and like the use of xarray in place of the host hash list.
Which upstream version is the series based upon?
If it's based on your development repo, I'd be happy to take a clone
and rebase my patches on yours if you provide a link. Please let me
know the best way to cooperate.

That said, I'm unclear at what you're trying to suggest with respect
to the 'DAXDEV_F_RECOVERY' flag.  The flag came from upper dax-fs
call stack to the dm target layer, and the dm targets are equipped
with handling pmem driver specific task, so it appears that the flag 
would need to be passed down to the native pmem layer, right?
Am I totally missing your point?

thanks,
-jane
Jane Chu Oct. 22, 2021, 1:41 a.m. UTC | #3
On 10/21/2021 5:49 PM, Jane Chu wrote:
> On 10/21/2021 4:27 AM, Christoph Hellwig wrote:
>> On Wed, Oct 20, 2021 at 06:10:57PM -0600, Jane Chu wrote:
>>> Prepare the dax_copy_to/from_iter() APIs to take a DAXDEV_F_RECOVERY flag
>>> such that, when the flag is set, the underlying driver implementation
>>> of the APIs may deal with potential poison in a given address
>>> range and read partial data, or write after clearing poison.
>>
>> FYI, I've been wondering for a while if we could just kill off these
>> methods entirely.  Basically the driver interaction consists of two
>> parts:
>>
>>    a) whether to use the flushcache/mcsafe variants of the generic helpers
>>    b) actually doing remapping for device mapper
>>
>> to me it seems like we should handle a) with flags in dax_operations,
>> and only have a remap callback for device mapper.  That way we'd avoid
>> the indirect calls for the native case, and also avoid tons of
>> boilerplate code.  "futher decouple DAX from block devices" series
>> already massages the device mapper into a form suitable for such
>> callbacks.
>>
> 
> I've looked through your "futher decouple DAX from block devices" series
> and like the use of xarray in place of the host hash list.
> Which upstream version is the series based upon?
> If it's based on your development repo, I'd be happy to take a clone
> and rebase my patches on yours if you provide a link. Please let me
> know the best way to cooperate.
> 
> That said, I'm unclear at what you're trying to suggest with respect
> to the 'DAXDEV_F_RECOVERY' flag.  The flag came from upper dax-fs
> call stack to the dm target layer, and the dm targets are equipped
> with handling pmem driver specific task, so it appears that the flag

Apologies. The above line should be
"..., and the dm targets are _not_ equipped with handling pmem driver
specific task,"

-jane


> would need to be passed down to the native pmem layer, right?
> Am I totally missing your point?
> 
> thanks,
> -jane
>
Christoph Hellwig Oct. 22, 2021, 5:33 a.m. UTC | #4
On Fri, Oct 22, 2021 at 12:49:15AM +0000, Jane Chu wrote:
> I've looked through your "futher decouple DAX from block devices" series 
> and like the use of xarray in place of the host hash list.
> Which upstream version is the series based upon?
> If it's based on your development repo, I'd be happy to take a clone
> and rebase my patches on yours if you provide a link. Please let me
> know the best way to cooperate.

It is based on linux-next from when it was posted.  A git tree is here:

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dax-block-cleanup

> That said, I'm unclear at what you're trying to suggest with respect
> to the 'DAXDEV_F_RECOVERY' flag.  The flag came from upper dax-fs
> call stack to the dm target layer, and the dm targets are equipped
> with handling pmem driver specific task, so it appears that the flag 
> would need to be passed down to the native pmem layer, right?
> Am I totally missing your point?

We'll need to pass it through (assuming we want to keep supporting
dm, see the recent discussion with Dan).

FYI, here is a sketch of where I'd like to move to, but this isn't properly
tested yet:

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dax-devirtualize

To support something like DAXDEV_F_RECOVERY we'd need a separate
dax_operations method, which to me suggests it probably should be
a different operation (fallocate / ioctl / etc.) as Darrick did earlier.
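
One way the "separate method" alternative could look, with a purely illustrative operation name and signature that are not part of this series:

struct dax_operations {
	/* ... existing operations ... */

	/*
	 * Hypothetical dedicated recovery path: clear poison in the given
	 * range and write the new data in one call, instead of overloading
	 * copy_from_iter() with a DAXDEV_F_RECOVERY flag.  The same
	 * functionality could also be surfaced through fallocate() or an
	 * ioctl, as in Darrick's earlier proposal.
	 */
	size_t (*recovery_write)(struct dax_device *dax_dev, pgoff_t pgoff,
			void *addr, size_t bytes, struct iov_iter *iter);
};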
Jane Chu Oct. 22, 2021, 8:30 p.m. UTC | #5
On 10/21/2021 10:33 PM, Christoph Hellwig wrote:
> On Fri, Oct 22, 2021 at 12:49:15AM +0000, Jane Chu wrote:
>> I've looked through your "futher decouple DAX from block devices" series
>> and like the use of xarray in place of the host hash list.
>> Which upstream version is the series based upon?
>> If it's based on your development repo, I'd be happy to take a clone
>> and rebase my patches on yours if you provide a link. Please let me
>> know the best way to cooperate.
> 
> It is based on linux-next from when it was posted.  A git tree is here:
> 
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dax-block-cleanup
> 
>> That said, I'm unclear at what you're trying to suggest with respect
>> to the 'DAXDEV_F_RECOVERY' flag.  The flag came from upper dax-fs
>> call stack to the dm target layer, and the dm targets are equipped
>> with handling pmem driver specific task, so it appears that the flag
>> would need to be passed down to the native pmem layer, right?
>> Am I totally missing your point?
> 
> We'll need to pass it through (assuming we want to keep supporting
> dm, see the recent discussion with Dan).
> 
> FYI, here is a sketch of where I'd like to move to, but this isn't properly
> tested yet:
> 
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dax-devirtualize
> 
> To support something like DAXDEV_F_RECOVERY we'd need a separate
> dax_operations method, which to me suggests it probably should be
> a different operation (fallocate / ioctl / etc.) as Darrick did earlier.
> 

Thanks for the info!
-jane

Patch

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index 67093f1c3341..97854da1ecf7 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -330,22 +330,24 @@  long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
 EXPORT_SYMBOL_GPL(dax_direct_access);
 
 size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
-		size_t bytes, struct iov_iter *i)
+		size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	if (!dax_alive(dax_dev))
 		return 0;
 
-	return dax_dev->ops->copy_from_iter(dax_dev, pgoff, addr, bytes, i);
+	return dax_dev->ops->copy_from_iter(dax_dev, pgoff, addr, bytes, i,
+					    flags);
 }
 EXPORT_SYMBOL_GPL(dax_copy_from_iter);
 
 size_t dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
-		size_t bytes, struct iov_iter *i)
+		size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	if (!dax_alive(dax_dev))
 		return 0;
 
-	return dax_dev->ops->copy_to_iter(dax_dev, pgoff, addr, bytes, i);
+	return dax_dev->ops->copy_to_iter(dax_dev, pgoff, addr, bytes, i,
+					  flags);
 }
 EXPORT_SYMBOL_GPL(dax_copy_to_iter);
 
diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index cb7c8518f02d..cc57bd639871 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -181,7 +181,7 @@  static long linear_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
 }
 
 static size_t linear_dax_copy_from_iter(struct dm_target *ti, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i)
+	void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	struct linear_c *lc = ti->private;
 	struct block_device *bdev = lc->dev->bdev;
@@ -191,11 +191,11 @@  static size_t linear_dax_copy_from_iter(struct dm_target *ti, pgoff_t pgoff,
 	dev_sector = linear_map_sector(ti, sector);
 	if (bdev_dax_pgoff(bdev, dev_sector, ALIGN(bytes, PAGE_SIZE), &pgoff))
 		return 0;
-	return dax_copy_from_iter(dax_dev, pgoff, addr, bytes, i);
+	return dax_copy_from_iter(dax_dev, pgoff, addr, bytes, i, flags);
 }
 
 static size_t linear_dax_copy_to_iter(struct dm_target *ti, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i)
+	void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	struct linear_c *lc = ti->private;
 	struct block_device *bdev = lc->dev->bdev;
@@ -205,7 +205,7 @@  static size_t linear_dax_copy_to_iter(struct dm_target *ti, pgoff_t pgoff,
 	dev_sector = linear_map_sector(ti, sector);
 	if (bdev_dax_pgoff(bdev, dev_sector, ALIGN(bytes, PAGE_SIZE), &pgoff))
 		return 0;
-	return dax_copy_to_iter(dax_dev, pgoff, addr, bytes, i);
+	return dax_copy_to_iter(dax_dev, pgoff, addr, bytes, i, flags);
 }
 
 static int linear_dax_zero_page_range(struct dm_target *ti, pgoff_t pgoff,
diff --git a/drivers/md/dm-log-writes.c b/drivers/md/dm-log-writes.c
index 6d8b88dcce6c..b8e9bddc47b8 100644
--- a/drivers/md/dm-log-writes.c
+++ b/drivers/md/dm-log-writes.c
@@ -964,8 +964,8 @@  static long log_writes_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
 }
 
 static size_t log_writes_dax_copy_from_iter(struct dm_target *ti,
-					    pgoff_t pgoff, void *addr, size_t bytes,
-					    struct iov_iter *i)
+	pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i,
+	unsigned long flags)
 {
 	struct log_writes_c *lc = ti->private;
 	sector_t sector = pgoff * PAGE_SECTORS;
@@ -984,19 +984,19 @@  static size_t log_writes_dax_copy_from_iter(struct dm_target *ti,
 		return 0;
 	}
 dax_copy:
-	return dax_copy_from_iter(lc->dev->dax_dev, pgoff, addr, bytes, i);
+	return dax_copy_from_iter(lc->dev->dax_dev, pgoff, addr, bytes, i, flags);
 }
 
 static size_t log_writes_dax_copy_to_iter(struct dm_target *ti,
-					  pgoff_t pgoff, void *addr, size_t bytes,
-					  struct iov_iter *i)
+	pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i,
+	unsigned long flags)
 {
 	struct log_writes_c *lc = ti->private;
 	sector_t sector = pgoff * PAGE_SECTORS;
 
 	if (bdev_dax_pgoff(lc->dev->bdev, sector, ALIGN(bytes, PAGE_SIZE), &pgoff))
 		return 0;
-	return dax_copy_to_iter(lc->dev->dax_dev, pgoff, addr, bytes, i);
+	return dax_copy_to_iter(lc->dev->dax_dev, pgoff, addr, bytes, i, flags);
 }
 
 static int log_writes_dax_zero_page_range(struct dm_target *ti, pgoff_t pgoff,
diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c
index 0a97d0472a0b..eefaa23a36fa 100644
--- a/drivers/md/dm-stripe.c
+++ b/drivers/md/dm-stripe.c
@@ -323,7 +323,7 @@  static long stripe_dax_direct_access(struct dm_target *ti, pgoff_t pgoff,
 }
 
 static size_t stripe_dax_copy_from_iter(struct dm_target *ti, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i)
+	void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	sector_t dev_sector, sector = pgoff * PAGE_SECTORS;
 	struct stripe_c *sc = ti->private;
@@ -338,11 +338,11 @@  static size_t stripe_dax_copy_from_iter(struct dm_target *ti, pgoff_t pgoff,
 
 	if (bdev_dax_pgoff(bdev, dev_sector, ALIGN(bytes, PAGE_SIZE), &pgoff))
 		return 0;
-	return dax_copy_from_iter(dax_dev, pgoff, addr, bytes, i);
+	return dax_copy_from_iter(dax_dev, pgoff, addr, bytes, i, flags);
 }
 
 static size_t stripe_dax_copy_to_iter(struct dm_target *ti, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i)
+	void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	sector_t dev_sector, sector = pgoff * PAGE_SECTORS;
 	struct stripe_c *sc = ti->private;
@@ -357,7 +357,7 @@  static size_t stripe_dax_copy_to_iter(struct dm_target *ti, pgoff_t pgoff,
 
 	if (bdev_dax_pgoff(bdev, dev_sector, ALIGN(bytes, PAGE_SIZE), &pgoff))
 		return 0;
-	return dax_copy_to_iter(dax_dev, pgoff, addr, bytes, i);
+	return dax_copy_to_iter(dax_dev, pgoff, addr, bytes, i, flags);
 }
 
 static int stripe_dax_zero_page_range(struct dm_target *ti, pgoff_t pgoff,
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index e5a14abd45f9..764183ddebc1 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1045,7 +1045,7 @@  static bool dm_dax_supported(struct dax_device *dax_dev, struct block_device *bd
 }
 
 static size_t dm_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
-				    void *addr, size_t bytes, struct iov_iter *i)
+	void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	struct mapped_device *md = dax_get_private(dax_dev);
 	sector_t sector = pgoff * PAGE_SECTORS;
@@ -1061,7 +1061,7 @@  static size_t dm_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
 		ret = copy_from_iter(addr, bytes, i);
 		goto out;
 	}
-	ret = ti->type->dax_copy_from_iter(ti, pgoff, addr, bytes, i);
+	ret = ti->type->dax_copy_from_iter(ti, pgoff, addr, bytes, i, flags);
  out:
 	dm_put_live_table(md, srcu_idx);
 
@@ -1069,7 +1069,7 @@  static size_t dm_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
 }
 
 static size_t dm_dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i)
+	void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	struct mapped_device *md = dax_get_private(dax_dev);
 	sector_t sector = pgoff * PAGE_SECTORS;
@@ -1085,7 +1085,7 @@  static size_t dm_dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff,
 		ret = copy_to_iter(addr, bytes, i);
 		goto out;
 	}
-	ret = ti->type->dax_copy_to_iter(ti, pgoff, addr, bytes, i);
+	ret = ti->type->dax_copy_to_iter(ti, pgoff, addr, bytes, i, flags);
  out:
 	dm_put_live_table(md, srcu_idx);
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index ed699416655b..e2a1c35108cd 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -311,13 +311,13 @@  static long pmem_dax_direct_access(struct dax_device *dax_dev,
  * dax_iomap_actor()
  */
 static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i)
+	void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	return _copy_from_iter_flushcache(addr, bytes, i);
 }
 
 static size_t pmem_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i)
+	void *addr, size_t bytes, struct iov_iter *i, unsigned long flags)
 {
 	return _copy_mc_to_iter(addr, bytes, i);
 }
diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
index 6ab2f9badc8d..6eb2b9a7682b 100644
--- a/drivers/s390/block/dcssblk.c
+++ b/drivers/s390/block/dcssblk.c
@@ -45,13 +45,15 @@  static const struct block_device_operations dcssblk_devops = {
 };
 
 static size_t dcssblk_dax_copy_from_iter(struct dax_device *dax_dev,
-		pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i)
+	pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i,
+	unsigned long flags)
 {
 	return copy_from_iter(addr, bytes, i);
 }
 
 static size_t dcssblk_dax_copy_to_iter(struct dax_device *dax_dev,
-		pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i)
+	pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i,
+	unsigned long flags)
 {
 	return copy_to_iter(addr, bytes, i);
 }
diff --git a/fs/dax.c b/fs/dax.c
index f603a9ce7f20..69433c6cd6c4 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1241,10 +1241,10 @@  static loff_t dax_iomap_iter(const struct iomap_iter *iomi,
 		 */
 		if (iov_iter_rw(iter) == WRITE)
 			xfer = dax_copy_from_iter(dax_dev, pgoff, kaddr,
-					map_len, iter);
+					map_len, iter, dax_flag);
 		else
 			xfer = dax_copy_to_iter(dax_dev, pgoff, kaddr,
-					map_len, iter);
+					map_len, iter, dax_flag);
 
 		pos += xfer;
 		length -= xfer;
diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
index d201b6e8a190..b0d80459b1cb 100644
--- a/fs/fuse/virtio_fs.c
+++ b/fs/fuse/virtio_fs.c
@@ -754,15 +754,15 @@  static long virtio_fs_direct_access(struct dax_device *dax_dev, pgoff_t pgoff,
 }
 
 static size_t virtio_fs_copy_from_iter(struct dax_device *dax_dev,
-				       pgoff_t pgoff, void *addr,
-				       size_t bytes, struct iov_iter *i)
+	pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i,
+	unsigned long flags)
 {
 	return copy_from_iter(addr, bytes, i);
 }
 
 static size_t virtio_fs_copy_to_iter(struct dax_device *dax_dev,
-				       pgoff_t pgoff, void *addr,
-				       size_t bytes, struct iov_iter *i)
+	pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i,
+	unsigned long flags)
 {
 	return copy_to_iter(addr, bytes, i);
 }
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 0044a5d87e5d..97f421f831e2 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -33,10 +33,10 @@  struct dax_operations {
 			sector_t, sector_t);
 	/* copy_from_iter: required operation for fs-dax direct-i/o */
 	size_t (*copy_from_iter)(struct dax_device *, pgoff_t, void *, size_t,
-			struct iov_iter *);
+			struct iov_iter *, unsigned long);
 	/* copy_to_iter: required operation for fs-dax direct-i/o */
 	size_t (*copy_to_iter)(struct dax_device *, pgoff_t, void *, size_t,
-			struct iov_iter *);
+			struct iov_iter *, unsigned long);
 	/* zero_page_range: required operation. Zero page range   */
 	int (*zero_page_range)(struct dax_device *, pgoff_t, size_t);
 };
@@ -197,9 +197,9 @@  void *dax_get_private(struct dax_device *dax_dev);
 long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
 		void **kaddr, pfn_t *pfn, unsigned long);
 size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
-		size_t bytes, struct iov_iter *i);
+		size_t bytes, struct iov_iter *i, unsigned long flags);
 size_t dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
-		size_t bytes, struct iov_iter *i);
+		size_t bytes, struct iov_iter *i, unsigned long flags);
 int dax_zero_page_range(struct dax_device *dax_dev, pgoff_t pgoff,
 			size_t nr_pages);
 void dax_flush(struct dax_device *dax_dev, void *addr, size_t size);
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 307c29789332..81c67c3d96ed 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -148,7 +148,7 @@  typedef int (*dm_busy_fn) (struct dm_target *ti);
 typedef long (*dm_dax_direct_access_fn) (struct dm_target *ti, pgoff_t pgoff,
 		long nr_pages, void **kaddr, pfn_t *pfn, unsigned long flags);
 typedef size_t (*dm_dax_copy_iter_fn)(struct dm_target *ti, pgoff_t pgoff,
-		void *addr, size_t bytes, struct iov_iter *i);
+		void *addr, size_t bytes, struct iov_iter *i, unsigned long flags);
 typedef int (*dm_dax_zero_page_range_fn)(struct dm_target *ti, pgoff_t pgoff,
 		size_t nr_pages);
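
The dax_flag argument used in the fs/dax.c hunk above is presumably introduced by another patch in this series; conceptually it is derived from the new RWF_RECOVERY_DATA flag on the read/write call. A hedged sketch of that derivation follows; the kiocb-level flag name and the exact plumbing into dax_iomap_iter() are assumptions and may differ from the actual series.

	unsigned long dax_flag = 0;

	/*
	 * RWF_RECOVERY_DATA on the syscall is translated to a kiocb flag
	 * (name illustrative) and carried down to the copy, where it
	 * selects the recovery-capable path in the driver.
	 */
	if (iocb->ki_flags & IOCB_RECOVERY)
		dax_flag = DAXDEV_F_RECOVERY;

	xfer = dax_copy_from_iter(dax_dev, pgoff, kaddr, map_len, iter,
			dax_flag);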