
Silent data corruption in blkdev_direct_IO()

Message ID 2311947c2f0f368bd10474edb0f0f5b51dde6b7d.camel@suse.com (mailing list archive)
State New, archived

Commit Message

Martin Wilck July 13, 2018, 4:56 p.m. UTC
On Thu, 2018-07-12 at 10:42 -0600, Jens Axboe wrote:
> 
> Hence the patch I sent is wrong, the code actually looks fine. Which
> means we're back to trying to figure out what is going on here. It'd
> be great with a test case...

We don't have an easy test case yet. But the customer has confirmed
that the problem occurs with upstream 4.17.5, too. We also confirmed
again that the problem occurs when the kernel uses the kmalloc() code
path in __blkdev_direct_IO_simple().

My personal suggestion would be to ditch __blkdev_direct_IO_simple()
altogether. After all, it's not _that_ much simpler than
__blkdev_direct_IO(), and it seems to be broken in a subtle way.

However, so far I've only identified a minor problem, see below - it
doesn't explain the data corruption we're seeing.

Martin

Comments

Jens Axboe July 13, 2018, 6 p.m. UTC | #1
On 7/13/18 10:56 AM, Martin Wilck wrote:
> On Thu, 2018-07-12 at 10:42 -0600, Jens Axboe wrote:
>>
>> Hence the patch I sent is wrong, the code actually looks fine. Which
>> means we're back to trying to figure out what is going on here. It'd
>> be great with a test case...
> 
> We don't have an easy test case yet. But the customer has confirmed
> that the problem occurs with upstream 4.17.5, too. We also confirmed
> again that the problem occurs when the kernel uses the kmalloc() code
> path in __blkdev_direct_IO_simple().
> 
> My personal suggestion would be to ditch __blkdev_direct_IO_simple()
> altogether. After all, it's not _that_ much simpler than
> __blkdev_direct_IO(), and it seems to be broken in a subtle way.

That's not a great suggestion at all, we need to find out why we're
hitting the issue. For all you know, the bug could be elsewhere and
we're just going to be hitting it differently some other way. The
head-in-the-sand approach is rarely a win long term.

It's saving an allocation per IO, that's definitely measurable on
the faster storage. For reads, it's also not causing a context
switch for dirtying pages. I'm not a huge fan of multiple cases
in general, but this one is definitely warranted in an era where
1 usec is a lot of extra time for an IO.
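
For reference, the allocation-saving part of the simple path looks roughly
like the sketch below. This is paraphrased from memory of the 4.17-era
fs/block_dev.c, so treat the exact names (e.g. DIO_INLINE_BIO_VECS) and
details as approximate rather than a quote of the real code:

	/* Sketch: __blkdev_direct_IO_simple() builds its bio on the stack. */
	struct bio_vec inline_vecs[DIO_INLINE_BIO_VECS], *vecs;
	struct bio bio;			/* no bio_alloc() from a bio_set */

	if (nr_pages <= DIO_INLINE_BIO_VECS)
		vecs = inline_vecs;	/* small IO: no allocation at all */
	else {
		/* larger IO: a single kmalloc() for the biovec, still no
		 * separate bio/dio allocation as in __blkdev_direct_IO() */
		vecs = kmalloc(nr_pages * sizeof(struct bio_vec), GFP_KERNEL);
		if (!vecs)
			return -ENOMEM;
	}

	bio_init(&bio, vecs, nr_pages);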

> However, so far I've only identified a minor problem, see below - it
> doesn't explain the data corruption we're seeing.

What would help is trying to boil down a test case. So far it's a lot
of hand waving, and nothing that can really help narrow down what is
going on here.
Jens Axboe July 13, 2018, 6:50 p.m. UTC | #2
On 7/13/18 12:00 PM, Jens Axboe wrote:
> On 7/13/18 10:56 AM, Martin Wilck wrote:
>> On Thu, 2018-07-12 at 10:42 -0600, Jens Axboe wrote:
>>>
>>> Hence the patch I sent is wrong, the code actually looks fine. Which
>>> means we're back to trying to figure out what is going on here. It'd
>>> be great with a test case...
>>
>> We don't have an easy test case yet. But the customer has confirmed
>> that the problem occurs with upstream 4.17.5, too. We also confirmed
>> again that the problem occurs when the kernel uses the kmalloc() code
>> path in __blkdev_direct_IO_simple().
>>
>> My personal suggestion would be to ditch __blkdev_direct_IO_simple()
>> altogether. After all, it's not _that_ much simpler than
>> __blkdev_direct_IO(), and it seems to be broken in a subtle way.
> 
> That's not a great suggestion at all, we need to find out why we're
> hitting the issue. For all you know, the bug could be elsewhere and
> we're just going to be hitting it differently some other way. The
> head-in-the-sand approach is rarely a win long term.
> 
> It's saving an allocation per IO, that's definitely measurable on
> the faster storage. For reads, it's also not causing a context
> switch for dirtying pages. I'm not a huge fan of multiple cases
> in general, but this one is definitely warranted in an era where
> 1 usec is a lot of extra time for an IO.
> 
>> However, so far I've only identified a minor problem, see below - it
>> doesn't explain the data corruption we're seeing.
> 
> What would help is trying to boil down a test case. So far it's a lot
> of hand waving, and nothing that can really help narrow down what is
> going on here.

When someone reports to you that some kind of data corruption occurs,
you need to find out as many details as you can. Ideally they can give you
a test case, but if they can't, you ask as many questions as possible to
help YOU make a test case.

- What is the nature of the corruption? Is it reads or writes? Is it
  zeroes, random data, or data from somewhere else? How big is the
  corrupted area?

- What does the workload look like that reproduces it? If they don't
  really know and they can't give you a reproducer, you help them with
  tracing to give you the information you need.

Those are a great start. Right now we have zero information, other than
reverting that patch fixes it. This means that we're likely dealing with
IO that is larger than 16k, since we're going to be hitting the same
path for <= 16k ios with the patch reverted. We also know it's less than
1MB. But that's it. I don't even know if it's writes, since your (and
Hannes's) report has no details at all.
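
For context, the dispatch between the two paths looks roughly like this
(again paraphrased from memory of the 4.17-era fs/block_dev.c, so the
exact code may differ; BIO_MAX_PAGES is 256 pages, i.e. 1MB of 4k pages,
and with the patch reverted the simple-path cutoff drops back to the
inline vecs, i.e. 4 pages / 16k):

	/* Rough sketch of blkdev_direct_IO() around v4.17, from memory. */
	static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
	{
		int nr_pages;

		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES + 1);
		if (!nr_pages)
			return 0;
		/* with the suspect patch: anything up to a full bio (1MB);
		 * with it reverted: only <= DIO_INLINE_BIO_VECS pages (16k) */
		if (is_sync_kiocb(iocb) && nr_pages <= BIO_MAX_PAGES)
			return __blkdev_direct_IO_simple(iocb, iter, nr_pages);

		return __blkdev_direct_IO(iocb, iter, min(nr_pages, BIO_MAX_PAGES));
	}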

Go look at what you ask customers to provide for bug reports, then apply
some of that to this case...
Martin Wilck July 13, 2018, 8:48 p.m. UTC | #3
On Fri, 2018-07-13 at 12:00 -0600, Jens Axboe wrote:
> On 7/13/18 10:56 AM, Martin Wilck wrote:
> > On Thu, 2018-07-12 at 10:42 -0600, Jens Axboe wrote:
> > > 
> > > Hence the patch I sent is wrong, the code actually looks fine. Which
> > > means we're back to trying to figure out what is going on here. It'd
> > > be great with a test case...
> > 
> > We don't have an easy test case yet. But the customer has confirmed
> > that the problem occurs with upstream 4.17.5, too. We also confirmed
> > again that the problem occurs when the kernel uses the kmalloc() code
> > path in __blkdev_direct_IO_simple().
> > 
> > My personal suggestion would be to ditch __blkdev_direct_IO_simple()
> > altogether. After all, it's not _that_ much simpler than
> > __blkdev_direct_IO(), and it seems to be broken in a subtle way.
> 
> That's not a great suggestion at all, we need to find out why we're
> hitting the issue. 

We're trying.

> For all you know, the bug could be elsewhere and
> we're just going to be hitting it differently some other way. The
> head-in-the-sand approach is rarely a win long term.
> 
> It's saving an allocation per IO, that's definitely measurable on
> the faster storage. 

I can see that for the inline path, but is there still an advantage if
we need to kmalloc() the biovec?

> For reads, it's also not causing a context
> switch for dirtying pages. I'm not a huge fan of multiple cases
> in general, but this one is definitely warranted in an era where
> 1 usec is a lot of extra time for an IO.

Ok, thanks for pointing that out.

> 
> > However, so far I've only identified a minor problem, see below - it
> > doesn't explain the data corruption we're seeing.
> 
> What would help is trying to boil down a test case. So far it's a lot
> of hand waving, and nothing that can really help narrow down what is
> going on here.

It's not that we didn't try. We've run fio with verification on block
devices with varying io sizes, block sizes, and alignments, but so far
we haven't hit the issue. We've also tried to reproduce it by
approximating the customer's VM setup, with no success up to now.

However, we're now much closer than we used to be, so I'm confident
that we'll be able to present more concrete facts soon.

Martin
Jens Axboe July 13, 2018, 8:52 p.m. UTC | #4
On 7/13/18 2:48 PM, Martin Wilck wrote:
>> For all you know, the bug could be elsewhere and
>> we're just going to be hitting it differently some other way. The
>> head-in-the-sand approach is rarely a win long term.
>>
>> It's saving an allocation per IO, that's definitely measurable on
>> the faster storage. 
> 
> I can see that for the inline path, but is there still an advantage if
> we need to kmalloc() the biovec?

It's still one allocation instead of two. In the era of competing with
spdk on single digit latency devices, the answer is yes.

>>> However, so far I've only identified a minor problem, see below - it
>>> doesn't explain the data corruption we're seeing.
>>
>> What would help is trying to boil down a test case. So far it's a lot
>> of hand waving, and nothing that can really help narrow down what is
>> going on here.
> 
> It's not that we didn't try. We've run fio with verification on block
> devices with varying io sizes, block sizes, and alignments, but so far
> we haven't hit the issue. We've also tried to reproduce it by
> approximating the customer's VM setup, with no success up to now.

I ran some testing yesterday as well, but didn't trigger anything.
Didn't expect to either, as all the basic functionality was verified
when the patch was done. It's not really a path with a lot of corner
cases, so it's really weird that we're seeing anything at all. Which is
why I'm suspecting it's something else entirely, but it's really hard to
guesstimate on that with no clues at all.

> However, we're now much closer than we used to be, so I'm confident
> that we'll be able to present more concrete facts soon.

OK, sounds good.
Martin Wilck July 13, 2018, 10:21 p.m. UTC | #5
On Fri, 2018-07-13 at 12:50 -0600, Jens Axboe wrote:
> On 7/13/18 12:00 PM, Jens Axboe wrote:
> > On 7/13/18 10:56 AM, Martin Wilck wrote:
> > > On Thu, 2018-07-12 at 10:42 -0600, Jens Axboe wrote:
> > > > 
> > > > Hence the patch I sent is wrong, the code actually looks fine. Which
> > > > means we're back to trying to figure out what is going on here. It'd
> > > > be great with a test case...
> > > 
> > > We don't have an easy test case yet. But the customer has confirmed
> > > that the problem occurs with upstream 4.17.5, too. We also confirmed
> > > again that the problem occurs when the kernel uses the kmalloc() code
> > > path in __blkdev_direct_IO_simple().
> > > 
> > > My personal suggestion would be to ditch __blkdev_direct_IO_simple()
> > > altogether. After all, it's not _that_ much simpler than
> > > __blkdev_direct_IO(), and it seems to be broken in a subtle way.
> > 
> > That's not a great suggestion at all, we need to find out why we're
> > hitting the issue. For all you know, the bug could be elsewhere and
> > we're just going to be hitting it differently some other way. The
> > head-in-the-sand approach is rarely a win long term.
> > 
> > It's saving an allocation per IO, that's definitely measurable on
> > the faster storage. For reads, it's also not causing a context
> > switch for dirtying pages. I'm not a huge fan of multiple cases
> > in general, but this one is definitely warranted in an era where
> > 1 usec is a lot of extra time for an IO.
> > 
> > > However, so far I've only identified a minor problem, see below - it
> > > doesn't explain the data corruption we're seeing.
> > 
> > What would help is trying to boil down a test case. So far it's a lot
> > of hand waving, and nothing that can really help narrow down what is
> > going on here.
> 
> When someone reports to you that some kind of data corruption occurs,
> you need to find out as many details as you can. Ideally they can give
> you a test case, but if they can't, you ask as many questions as
> possible to help YOU make a test case.
> 
> - What is the nature of the corruption? Is it reads or writes? Is it
>   zeroes, random data, or data from somewhere else? How big is the
>   corrupted area?
> 
> - What does the workload look like that reproduces it? If they don't
>   really know and they can't give you a reproducer, you help them with
>   tracing to give you the information you need.
> 
> Those are a great start. Right now we have zero information, other than
> reverting that patch fixes it. This means that we're likely dealing with
> IO that is larger than 16k, since we're going to be hitting the same
> path for <= 16k ios with the patch reverted. We also know it's less than
> 1MB.

The nature of the corruption is an interesting question. The weird
thing is that it's not block-aligned - "bad" data starts at some
apparently random offset into the log file. That had actually made me
think that we're not looking at a block IO issue but maybe at some
corruption that occurred in memory before write-out. But then we
isolated this specific patch by bisection... I had my doubts, but the
bisection seems to be sound - at least that's what the test results
indicate.

The user application is a database which writes logs in blocks that are
multiples of 8k. The IO goes through various layers in the VM (ext3 on
LVM on MD on virtio), then through qemu via virtio-scsi, and onto a
host multipath device which is backed by FC storage. The VM is running
an old SLES 11 distro, the problems were caused by a host kernel
update.

My attempts to reproduce a corruption with lab equipment have failed up
to now. I've been trying to replay the customer's workload with
btreplay. I also tried replaying it with fio using verification to try
to catch corruption, but I guess I didn't get the setup quite right.

>  But that's it. I don't even know if it's writes, since your (and
> Hannes's) report has no details at all.
> Go look at what you ask customers to provide for bug reports, then apply
> some of that to this case...

Sorry. It's writes. The reasons we posted this here were a) the concrete
questions Hannes asked about the code, b) the patch we'd isolated - we
thought you or Christoph might just have a bright idea about it, or see
something we didn't - and c) the fact that we thought the problem might
affect upstream, too.

We don't expect you to do the dirty work for us. We'll come back to the
list when we know more, or have specific questions.

Martin
Martin Wilck July 16, 2018, 7:05 p.m. UTC | #6
On Fri, 2018-07-13 at 14:52 -0600, Jens Axboe wrote:
> On 7/13/18 2:48 PM, Martin Wilck wrote:
> > > 
> > > > However, so far I've only identified a minor problem, see below - it
> > > > doesn't explain the data corruption we're seeing.
> > > 
> > > What would help is trying to boil down a test case. So far it's a lot
> > > of hand waving, and nothing that can really help narrow down what is
> > > going on here.
> > 
> > It's not that we didn't try. We've run fio with verification on block
> > devices with varying io sizes, block sizes, and alignments, but so far
> > we haven't hit the issue. We've also tried to reproduce it by
> > approximating the customer's VM setup, with no success up to now.
> 
> I ran some testing yesterday as well, but didn't trigger anything.
> Didn't expect to either, as all the basic functionality was verified
> when the patch was done. It's not really a path with a lot of corner
> cases, so it's really weird that we're seeing anything at all. Which is
> why I'm suspecting it's something else entirely, but it's really hard to
> guesstimate on that with no clues at all.
> 
> > However, we're now much closer than we used to be, so I'm confident
> > that we'll be able to present more concrete facts soon.
> 
> OK, sounds good.

Jan Kara has provided a very convincing analysis and a patch which we
are going to have the customer test.

By calling bio_iov_iter_get_pages() only once,
__blkdev_direct_IO_simple() may not transfer all requested bytes,
because bio_iov_iter_get_pages() doesn't necessarily exhaust all data
in the iov_iter. Thus a short write may occur, and
__generic_file_write_iter() falls back to buffered IO. We've actually
observed these "short direct writes" in the error case with an
instrumented kernel (in a trace I got, ~13000/800000 direct write ops
on a block device transferred less data than requested).
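
To illustrate why a short direct write turns into mixed direct + buffered
IO: the caller side in __generic_file_write_iter() (mm/filemap.c) does,
in simplified form, something like the sketch below - not the literal
upstream code, just the relevant logic:

	if (iocb->ki_flags & IOCB_DIRECT) {
		written = generic_file_direct_write(iocb, from);
		/*
		 * If the direct write consumed the whole iov_iter (or
		 * failed outright), we're done.
		 */
		if (written < 0 || !iov_iter_count(from))
			goto out;
		/*
		 * Otherwise the remainder - here: whatever
		 * bio_iov_iter_get_pages() didn't pick up - is written
		 * through the page cache, so direct and buffered writes
		 * to the same block device end up being mixed.
		 */
		status = generic_perform_write(file, from, iocb->ki_pos);
	}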

We suspect that this concurrency of direct and buffered writes may
cause the corruption the customer observes.

Does that make sense to you?

Regards,
Martin

Patch

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 7ec920e..b82b516 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -218,8 +218,12 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
 	bio.bi_end_io = blkdev_bio_end_io_simple;
 
 	ret = bio_iov_iter_get_pages(&bio, iter);
-	if (unlikely(ret))
+	if (unlikely(ret)) {
+		if (vecs != inline_vecs)
+			kfree(vecs);
+		bio_uninit(&bio);
 		return ret;
+	}
 	ret = bio.bi_iter.bi_size;