
[04/10] dax: Fix data corruption for written and mmapped files

Message ID 1458566575-28063-5-git-send-email-jack@suse.cz (mailing list archive)
State New, archived

Commit Message

Jan Kara March 21, 2016, 1:22 p.m. UTC
When a fault to a hole races with a write filling the hole, it can
happen that the block zeroing in __dax_fault() overwrites the data
copied by the write. Since the filesystem is supposed to provide
pre-zeroed blocks for faults anyway, just remove the racy zeroing from
the DAX code. The only catch is with read faults over an unwritten
block, where __dax_fault() installed the block into the page tables
anyway. For that case we now have to fall back to using the hole page.

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)
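
To make the race window concrete, here is a minimal userspace sketch of
the access pattern the commit message describes: one thread write()s into
a hole while another thread takes a write fault on the same page through
a shared mapping. The /mnt/dax path is a placeholder and the interleaving
is not deterministic; this illustrates the pattern, it is not a
guaranteed reproducer.

/*
 * Sketch only: assumes a DAX-capable filesystem mounted at /mnt/dax
 * (hypothetical path).  Build with: cc -pthread race.c -o race
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE 4096

static char pattern[PAGE];

static void *writer(void *arg)
{
    int fd = *(int *)arg;

    /* Fill the hole at offset 0 with a known pattern via write(). */
    if (pwrite(fd, pattern, PAGE, 0) != PAGE)
        perror("pwrite");
    return NULL;
}

static void *faulter(void *arg)
{
    int fd = *(int *)arg;
    char *p;

    /* Racing write fault on the same, still unallocated, page. */
    p = mmap(NULL, PAGE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p != MAP_FAILED) {
        p[0] = pattern[0];  /* store the same byte the writer stores */
        munmap(p, PAGE);
    }
    return NULL;
}

int main(void)
{
    char buf[PAGE];
    pthread_t t1, t2;
    int fd;

    memset(pattern, 0xab, PAGE);

    fd = open("/mnt/dax/racefile", O_CREAT | O_TRUNC | O_RDWR, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ftruncate(fd, PAGE);    /* size the file, leaving a hole at offset 0 */

    pthread_create(&t1, NULL, writer, &fd);
    pthread_create(&t2, NULL, faulter, &fd);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /*
     * On an affected kernel the fault-side zeroing can win the race and
     * wipe the pattern the writer just copied into the block.
     */
    if (pread(fd, buf, PAGE, 0) == PAGE && memcmp(buf, pattern, PAGE))
        fprintf(stderr, "pattern lost: page was zeroed after the write\n");

    close(fd);
    return 0;
}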

Comments

Ross Zwisler March 23, 2016, 5:39 p.m. UTC | #1
On Mon, Mar 21, 2016 at 02:22:49PM +0100, Jan Kara wrote:
> When a fault to a hole races with a write filling the hole, it can
> happen that the block zeroing in __dax_fault() overwrites the data
> copied by the write. Since the filesystem is supposed to provide
> pre-zeroed blocks for faults anyway, just remove the racy zeroing from
> the DAX code. The only catch is with read faults over an unwritten
> block, where __dax_fault() installed the block into the page tables
> anyway. For that case we now have to fall back to using the hole page.
>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
>  fs/dax.c | 9 +--------
>  1 file changed, 1 insertion(+), 8 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index d496466652cd..50d81172438b 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -582,11 +582,6 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
>  		error = PTR_ERR(dax.addr);
>  		goto out;
>  	}
> -
> -	if (buffer_unwritten(bh) || buffer_new(bh)) {
> -		clear_pmem(dax.addr, PAGE_SIZE);
> -		wmb_pmem();
> -	}

I agree that we should be dropping these bits of code, but I think they are
just dead code that could never be executed?  I don't see how we could have
hit a race?

For the above, dax_insert_mapping() is only called if we actually have a block
mapping (holes go through dax_load_hole()), so for ext4 and XFS I think
buffer_unwritten() and buffer_new() are always false, so this code could never
be executed, right?

I suppose that maybe we could get into here via ext2 if BH_New was set?  Is
that the race?
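
As a side note, the routing being described here can be summarized with a
toy model; the struct and strings below are stand-ins for illustration,
not kernel code: read faults over holes go to dax_load_hole(), write
faults over holes allocate a block via get_block() and then reach
dax_insert_mapping(), and with this patch read faults over unwritten
blocks fall back to the hole page, so only real block mappings ever reach
dax_insert_mapping().

/* Toy model only; the types and labels are stand-ins, not the kernel's. */
#include <stdbool.h>
#include <stdio.h>

struct fault_case {
    const char *desc;
    bool write;      /* FAULT_FLAG_WRITE set?            */
    bool mapped;     /* bh already has a block mapping?  */
    bool unwritten;  /* unwritten/new-style buffer?      */
};

static const char *pte_fault_path(const struct fault_case *f)
{
    if (!f->mapped && !f->unwritten && !f->write)
        return "dax_load_hole()";
    if (!f->mapped && !f->unwritten && f->write)
        return "get_block(create) -> dax_insert_mapping()";
    if (f->unwritten && !f->write)
        return "hole page (fallback after this patch)";
    return "dax_insert_mapping()";  /* ordinary mapped block */
}

int main(void)
{
    const struct fault_case cases[] = {
        { "read fault over hole",            false, false, false },
        { "write fault over hole",           true,  false, false },
        { "read fault over unwritten block", false, false, true  },
        { "fault over written block",        false, true,  false },
    };
    unsigned int i;

    for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
        printf("%-35s -> %s\n", cases[i].desc, pte_fault_path(&cases[i]));
    return 0;
}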

>  	dax_unmap_atomic(bdev, &dax);
>  
>  	error = dax_radix_entry(mapping, vmf->pgoff, dax.sector, false,
> @@ -665,7 +660,7 @@ int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
>  	if (error)
>  		goto unlock_page;
>  
> -	if (!buffer_mapped(&bh) && !buffer_unwritten(&bh) && !vmf->cow_page) {
> +	if (!buffer_mapped(&bh) && !vmf->cow_page) {

Sure.

>  		if (vmf->flags & FAULT_FLAG_WRITE) {
>  			error = get_block(inode, block, &bh, 1);
>  			count_vm_event(PGMAJFAULT);
> @@ -950,8 +945,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
>  		}
>  
>  		if (buffer_unwritten(&bh) || buffer_new(&bh)) {
> -			clear_pmem(dax.addr, PMD_SIZE);
> -			wmb_pmem();
>  			count_vm_event(PGMAJFAULT);
>  			mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
>  			result |= VM_FAULT_MAJOR;

I think this whole block is just dead code, right?  Can we ever get into here?

Same argument applies as from dax_insert_mapping() - if we get this far then
we have a mapped buffer, and in the PMD case we know we're on ext4 or XFS
since ext2 doesn't do huge page mappings.

So, buffer_unwritten() and buffer_new() both always return false, right?

Yea...we really need to clean up our buffer flag handling. :)
Jan Kara March 24, 2016, 12:51 p.m. UTC | #2
On Wed 23-03-16 11:39:45, Ross Zwisler wrote:
> On Mon, Mar 21, 2016 at 02:22:49PM +0100, Jan Kara wrote:
> > When a fault to a hole races with a write filling the hole, it can
> > happen that the block zeroing in __dax_fault() overwrites the data
> > copied by the write. Since the filesystem is supposed to provide
> > pre-zeroed blocks for faults anyway, just remove the racy zeroing from
> > the DAX code. The only catch is with read faults over an unwritten
> > block, where __dax_fault() installed the block into the page tables
> > anyway. For that case we now have to fall back to using the hole page.
> >
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> >  fs/dax.c | 9 +--------
> >  1 file changed, 1 insertion(+), 8 deletions(-)
> > 
> > diff --git a/fs/dax.c b/fs/dax.c
> > index d496466652cd..50d81172438b 100644
> > --- a/fs/dax.c
> > +++ b/fs/dax.c
> > @@ -582,11 +582,6 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
> >  		error = PTR_ERR(dax.addr);
> >  		goto out;
> >  	}
> > -
> > -	if (buffer_unwritten(bh) || buffer_new(bh)) {
> > -		clear_pmem(dax.addr, PAGE_SIZE);
> > -		wmb_pmem();
> > -	}
> 
> I agree that we should be dropping these bits of code, but I think they are
> just dead code that could never be executed?  I don't see how we could have
> hit a race?
> 
> For the above, dax_insert_mapping() is only called if we actually have a block
> mapping (holes go through dax_load_hole()), so for ext4 and XFS I think
> buffer_unwritten() and buffer_new() are always false, so this code could never
> be executed, right?
> 
> I suppose that maybe we could get into here via ext2 if BH_New was set?  Is
> that the race?

Yeah, you are right that only ext2 is prone to the race I have described;
for the rest this is just dead code. I'll update the changelog
accordingly.

> >  		if (vmf->flags & FAULT_FLAG_WRITE) {
> >  			error = get_block(inode, block, &bh, 1);
> >  			count_vm_event(PGMAJFAULT);
> > @@ -950,8 +945,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
> >  		}
> >  
> >  		if (buffer_unwritten(&bh) || buffer_new(&bh)) {
> > -			clear_pmem(dax.addr, PMD_SIZE);
> > -			wmb_pmem();
> >  			count_vm_event(PGMAJFAULT);
> >  			mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
> >  			result |= VM_FAULT_MAJOR;
> 
> I think this whole block is just dead code, right?  Can we ever get into here?
> 
> Same argument applies as from dax_insert_mapping() - if we get this far then
> we have a mapped buffer, and in the PMD case we know we're on ext4 or XFS
> since ext2 doesn't do huge page mappings.
> 
> So, buffer_unwritten() and buffer_new() both always return false, right?
> 
> Yea...we really need to clean up our buffer flag handling. :)

Hmm, looking at the code now I'm somewhat confused. __dax_pmd_fault() does:

if (!write && !buffer_mapped(&bh) && buffer_uptodate(&bh)) {
	... install zero page ...
}

but what is the buffer_uptodate() check about? That will never be true,
right? So we fall back to the second branch, and there we can actually
hit the

if (buffer_unwritten(&bh) || buffer_new(&bh)) {

because for a read fault we can get an unwritten buffer. But I guess that
is a mistake in the first branch. After fixing that we can just remove the
second if as you say. Unless you object, I'll update the patch
accordingly.

								Honza
Ross Zwisler March 29, 2016, 3:17 p.m. UTC | #3
On Thu, Mar 24, 2016 at 01:51:12PM +0100, Jan Kara wrote:
> On Wed 23-03-16 11:39:45, Ross Zwisler wrote:
> > On Mon, Mar 21, 2016 at 02:22:49PM +0100, Jan Kara wrote:
> > > When a fault to a hole races with a write filling the hole, it can
> > > happen that the block zeroing in __dax_fault() overwrites the data
> > > copied by the write. Since the filesystem is supposed to provide
> > > pre-zeroed blocks for faults anyway, just remove the racy zeroing from
> > > the DAX code. The only catch is with read faults over an unwritten
> > > block, where __dax_fault() installed the block into the page tables
> > > anyway. For that case we now have to fall back to using the hole page.
> > >
> > > Signed-off-by: Jan Kara <jack@suse.cz>
> > > ---
> > >  fs/dax.c | 9 +--------
> > >  1 file changed, 1 insertion(+), 8 deletions(-)
> > > 
> > > diff --git a/fs/dax.c b/fs/dax.c
> > > index d496466652cd..50d81172438b 100644
> > > --- a/fs/dax.c
> > > +++ b/fs/dax.c
> > > @@ -582,11 +582,6 @@ static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
> > >  		error = PTR_ERR(dax.addr);
> > >  		goto out;
> > >  	}
> > > -
> > > -	if (buffer_unwritten(bh) || buffer_new(bh)) {
> > > -		clear_pmem(dax.addr, PAGE_SIZE);
> > > -		wmb_pmem();
> > > -	}
> > 
> > I agree that we should be dropping these bits of code, but I think they are
> > just dead code that could never be executed?  I don't see how we could have
> > hit a race?
> > 
> > For the above, dax_insert_mapping() is only called if we actually have a block
> > mapping (holes go through dax_load_hole()), so for ext4 and XFS I think
> > buffer_unwritten() and buffer_new() are always false, so this code could never
> > be executed, right?
> > 
> > I suppose that maybe we could get into here via ext2 if BH_New was set?  Is
> > that the race?
> 
> Yeah, you are right that only ext2 is prone to the race I have described;
> for the rest this is just dead code. I'll update the changelog
> accordingly.

What do you think about updating ext2 so that, like ext4 and XFS, it never
returns BH_New?  AFAICT ext2 doesn't rely on DAX to clear the sectors it
returns - it does that in ext2_get_blocks() via dax_clear_sectors(), right?

Or, really, I guess we could just leave ext2 alone and let it return BH_New,
and just make sure that DAX doesn't do anything with it.

> > >  		if (vmf->flags & FAULT_FLAG_WRITE) {
> > >  			error = get_block(inode, block, &bh, 1);
> > >  			count_vm_event(PGMAJFAULT);
> > > @@ -950,8 +945,6 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
> > >  		}
> > >  
> > >  		if (buffer_unwritten(&bh) || buffer_new(&bh)) {
> > > -			clear_pmem(dax.addr, PMD_SIZE);
> > > -			wmb_pmem();
> > >  			count_vm_event(PGMAJFAULT);
> > >  			mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
> > >  			result |= VM_FAULT_MAJOR;
> > 
> > I think this whole block is just dead code, right?  Can we ever get into here?
> > 
> > Same argument applies as from dax_insert_mapping() - if we get this far then
> > we have a mapped buffer, and in the PMD case we know we're on ext4 or XFS
> > since ext2 doesn't do huge page mappings.
> > 
> > So, buffer_unwritten() and buffer_new() both always return false, right?
> > 
> > Yea...we really need to clean up our buffer flag handling. :)
> 
> Hmm, looking at the code now I'm somewhat confused. __dax_pmd_fault() does:
> 
> if (!write && !buffer_mapped(&bh) && buffer_uptodate(&bh)) {
> 	... install zero page ...
> }
> 
> but what is the buffer_uptodate() check about? That will never be true,
> right? So we fall back to the second branch, and there we can actually
> hit the
> 
> if (buffer_unwritten(&bh) || buffer_new(&bh)) {
> 
> because for a read fault we can get an unwritten buffer. But I guess that
> is a mistake in the first branch. After fixing that we can just remove the
> second if as you say. Unless you object, I'll update the patch
> accordingly.

I can't remember if I've ever seen this code get executed - I *think* that
when we hit a hole we always drop back and do 4k zero pages via this code:

	/*
	 * If the filesystem isn't willing to tell us the length of a hole,
	 * just fall back to PTEs.  Calling get_block 512 times in a loop
	 * would be silly.
	 */
	if (!buffer_size_valid(&bh) || bh.b_size < PMD_SIZE) {
		dax_pmd_dbg(&bh, address, "allocated block too small");
		return VM_FAULT_FALLBACK;
	}

I agree that this could probably use some cleanup and additional testing.

Patch

diff --git a/fs/dax.c b/fs/dax.c
index d496466652cd..50d81172438b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -582,11 +582,6 @@  static int dax_insert_mapping(struct inode *inode, struct buffer_head *bh,
 		error = PTR_ERR(dax.addr);
 		goto out;
 	}
-
-	if (buffer_unwritten(bh) || buffer_new(bh)) {
-		clear_pmem(dax.addr, PAGE_SIZE);
-		wmb_pmem();
-	}
 	dax_unmap_atomic(bdev, &dax);
 
 	error = dax_radix_entry(mapping, vmf->pgoff, dax.sector, false,
@@ -665,7 +660,7 @@  int __dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
 	if (error)
 		goto unlock_page;
 
-	if (!buffer_mapped(&bh) && !buffer_unwritten(&bh) && !vmf->cow_page) {
+	if (!buffer_mapped(&bh) && !vmf->cow_page) {
 		if (vmf->flags & FAULT_FLAG_WRITE) {
 			error = get_block(inode, block, &bh, 1);
 			count_vm_event(PGMAJFAULT);
@@ -950,8 +945,6 @@  int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		}
 
 		if (buffer_unwritten(&bh) || buffer_new(&bh)) {
-			clear_pmem(dax.addr, PMD_SIZE);
-			wmb_pmem();
 			count_vm_event(PGMAJFAULT);
 			mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
 			result |= VM_FAULT_MAJOR;