From patchwork Tue Feb 26 04:40:39 2013
X-Patchwork-Submitter: Dave Chinner
X-Patchwork-Id: 2183861
Date: Tue, 26 Feb 2013 15:40:39 +1100
From: Dave Chinner
To: Matteo Frigo
Cc: dm-devel@redhat.com, xfs@oss.sgi.com
Message-ID: <20130226044039.GM5551@dastard>
References: <87d2vnc34r.fsf@fftw.org>
In-Reply-To: <87d2vnc34r.fsf@fftw.org>
Subject: [dm-devel] [BUG] pvmove corrupting XFS filesystems (was Re: [BUG] Internal error xfs_dir2_data_reada_verify)

[cc'd dm-devel]

On Mon, Feb 25, 2013 at 07:47:32PM -0500, Matteo Frigo wrote:
> For some reason XFS reliably crashes for me when used in conjunction
> with LVM2's pvmove.  The error reliably appears when removing a large
> number of files from a volume that is being pvmove'd at the same time.
> I am using vanilla kernel 3.8.  A typical kernel message looks like the
> following:
> 
> [  262.396471] ffff88001ecfb000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> [  262.398314] XFS (dm-1): Internal error xfs_dir2_data_reada_verify at line 226 of file fs/xfs/xfs_dir2_data.c.  Caller 0xffffffffa01eb42d
> [  262.398314] 
> [  262.398740] Pid: 134, comm: kworker/0:1H Not tainted 3.8.0 #1
> [  262.398742] Call Trace:
> [  262.398767]  [] ? xfs_corruption_error+0x54/0x6f [xfs]
> [  262.398777]  [] ? xfs_buf_iodone_work+0x2e/0x5c [xfs]
> [  262.398792]  [] ? xfs_dir2_data_reada_verify+0x76/0x88 [xfs]
> [  262.398801]  [] ? xfs_buf_iodone_work+0x2e/0x5c [xfs]
> [  262.398809]  [] ? xfs_buf_iodone_work+0x2e/0x5c [xfs]
> [  262.398814]  [] ? process_one_work+0x16d/0x2c5
> [  262.398831]  [] ? xfs_buf_relse+0x12/0x12 [xfs]
> [  262.398834]  [] ? worker_thread+0x117/0x1b1
> [  262.398837]  [] ? rescuer_thread+0x187/0x187
> [  262.398840]  [] ? kthread+0x81/0x89
> [  262.398842]  [] ? __kthread_parkme+0x5c/0x5c
> [  262.398846]  [] ? ret_from_fork+0x7c/0xb0
> [  262.398848]  [] ? __kthread_parkme+0x5c/0x5c
> [  262.398850] XFS (dm-1): Corruption detected. Unmount and run xfs_repair
> [  262.399089] XFS (dm-1): metadata I/O error: block 0x805060 ("xfs_trans_read_buf_map") error 117 numblks 8

That's clearly a 3.8 kernel you are seeing this on. The readahead has
returned zeros rather than data, which implies that the readahead has
been cancelled. I can see a potential bug in the 3.8 code where the
above verifier is called even on an IO error. Hence a readahead will
trigger a corruption error like the above, even though a failed
readahead is supposed to be silent. A failed readahead is then supposed
to be retried by a blocking read when the metadata is actually read,
and this means that a failed readahead can be ignored.

> I have observed the problem with vanilla kernels 3.8 and 3.4.33, and
> with CentOS 6.3 using the CentOS variant of 2.6.32.  I have observed the
> problem on various virtual machines using debian wheezy/sid and Fedora
> 18, and on at least three physical machines.  Even though the kernel
> reports "Corruption detected", xfs_repair appears to be able to fix any
> problems, so I haven't actually lost any data.  The problem goes away
> after stopping pvmove.

Which implies two things: 1) the corruption is not limited to readahead;
and 2) the bug is a result of whatever DM is doing during a pvmove
operation.

> HOW TO REPRODUCE: I created a virtual machine with debian sid and two
> virtual disks /dev/vdb and /dev/vdc.  Then, the following script
> reliably triggers the failure.
> 
> pvcreate /dev/vd[bc]
> vgcreate test /dev/vd[bc]
> lvcreate -L 8G -n vol test /dev/vdb
> mkfs.xfs -f /dev/mapper/test-vol
> mount -o noatime /dev/mapper/test-vol /mnt
> cd /mnt
> git clone ~/linux-stable
> cd /
> umount /mnt
> 
> mount -o noatime /dev/mapper/test-vol /mnt
> pvmove -b /dev/vdb /dev/vdc
> sleep 2
> rm -rf /mnt/linux-stable

Yup, I can reproduce it here on a 3.8 kernel. Note that it's not just
returning buffers with zeros in them, it's returning buffers with stale
data in them:

ffff88011cfd4000: 31 5f 53 50 41 52 45 5f 4d 41 53 4b 29 20 3e 3e  1_SPARE_MASK) >>
[98756.147971] XFS (dm-0): Internal error xfs_dir3_data_reada_verify at line 240 of file fs/xfs/xfs_dir2_data.c.  Caller 0xffffffff81457dbf

> I have observed the problem even without unmounting and re-mounting the
> file system, but it appears that the umount/mount sequence makes the
> problem 100% reproducible.

Yup, because it's got to read the metadata from disk to do the rm, and
hence it triggers lots of directory block readahead.

> I am not implying that this is an xfs bug---it may well be a dm bug for
> all I know.  However, the same test completes correctly using ext4
> instead of xfs, suggesting that dm is working in at least some cases.

ext4 doesn't use readahead in the same way that XFS does. XFS has a long
history of discovering readahead handling bugs in DM...
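For reference, the 3.8 buffer IO completion path I'm referring to looks
roughly like this. It is a simplified sketch of xfs_buf_iodone_work() in
fs/xfs/xfs_buf.c, reconstructed around the hunk quoted at the end of this
mail; the surrounding details are approximate, so treat it as an
illustration rather than a verbatim copy of the tree:

static void
xfs_buf_iodone_work(
	struct work_struct	*work)
{
	struct xfs_buf		*bp =
		container_of(work, struct xfs_buf, b_iodone_work);
	bool			read = !!(bp->b_flags & XBF_READ);

	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);

	/*
	 * The verifier is gated only on XBF_READ and the presence of b_ops;
	 * nothing checks bp->b_error first. A cancelled or failed readahead
	 * that comes back full of zeros (or stale data) is therefore run
	 * through the verifier and reported as corruption, even though a
	 * failed readahead is meant to be silent and retried later by the
	 * blocking read.
	 */
	if (read && bp->b_ops)
		bp->b_ops->verify_read(bp);

	/* ... then hand off to b_iodone / release the buffer as usual ... */
}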
So, I added a check to ensure we don't run the verifier on buffers that
have had an IO error returned (i.e. we preserve the error and defer the
completion checking until a real buffer read occurs), and so errors on
buffers are passed all the way through. This is what occurred:

[  112.067262] XFS (dm-0): metadata I/O error: block 0x180068 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.069331] XFS (dm-0): metadata I/O error: block 0x180068 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.073369] XFS (dm-0): metadata I/O error: block 0x4800a8 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.077853] XFS (dm-0): metadata I/O error: block 0x4800a8 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.108900] XFS (dm-0): metadata I/O error: block 0x180d10 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.112003] XFS (dm-0): metadata I/O error: block 0x180d10 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.116444] XFS (dm-0): metadata I/O error: block 0x480d20 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.119678] XFS (dm-0): metadata I/O error: block 0x480d20 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.125633] XFS (dm-0): metadata I/O error: block 0x1816b8 ("xfs_trans_read_buf_map") error 11 numblks 8
[  112.129345] XFS (dm-0): metadata I/O error: block 0x1816b8 ("xfs_trans_read_buf_map") error 11 numblks 8

Error 11 is EAGAIN, which means that the IO is being completed with an
error of EAGAIN on it. This is not an XFS bug - EAGAIN is not a
recognised IO stack error on any filesystem, and so whatever is
returning it is broken.

There are also definitely other problems in the lower layers of the
storage stack when pvmove is executed, as this happened next:

[  112.130211] BUG: unable to handle kernel NULL pointer dereference at 000000000000007c
[  112.130232] IP: [] do_raw_spin_lock+0x17/0x110
[  112.130235] PGD 21cb16067 PUD 21c65d067 PMD 0 
[  112.130238] Oops: 0000 [#1] SMP 
[  112.130244] Modules linked in:
[  112.130249] CPU 5 
[  112.130249] Pid: 0, comm: swapper/5 Not tainted 3.8.0-dgc+ #433 Bochs Bochs
[  112.130254] RIP: 0010:[]  [] do_raw_spin_lock+0x17/0x110
[  112.130259] RSP: 0018:ffff88011dd03c18  EFLAGS: 00010086
[  112.130260] RAX: 0000000000000092 RBX: 0000000000000078 RCX: ffff88011cdcc680
[  112.130261] RDX: ffff88007dbc5fd8 RSI: ffff88007bd6fcb8 RDI: 0000000000000078
[  112.130262] RBP: ffff88011dd03c48 R08: 0000000000000000 R09: 0000000000000000
[  112.130264] R10: 0000000000000000 R11: 0000000000000001 R12: 00000000000000a0
[  112.130265] R13: 0000000000000078 R14: 0000000000010000 R15: 0000000000010000
[  112.130267] FS:  0000000000000000(0000) GS:ffff88011dd00000(0000) knlGS:0000000000000000
[  112.130268] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[  112.130269] CR2: 000000000000007c CR3: 000000021c3f9000 CR4: 00000000000006e0
[  112.130276] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  112.130282] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  112.130284] Process swapper/5 (pid: 0, threadinfo ffff88007dbc4000, task ffff88007dbc23c0)
[  112.130284] Stack:
[  112.130287]  ffff88011dd03c78 0000000000000086 ffff88011dd03c88 0000000000000092
[  112.130289]  00000000000000a0 0000000000000078 ffff88011dd03c68 ffffffff81bd7cf4
[  112.130291]  ffff88007c58a200 ffff88007bd6fcb8 ffff88011dd03c98 ffffffff81a65d2e
[  112.130292] Call Trace:
[  112.130295]  
[  112.130300]  [] _raw_spin_lock_irqsave+0x34/0x40
[  112.130316]  [] push+0x2e/0x60
[  112.130319]  [] ? run_pages_job+0xb0/0xb0
[  112.130322]  [] complete_io+0x6a/0xa0
[  112.130325]  [] dec_count.part.5+0x63/0x80
[  112.130328]  [] endio+0x55/0xa0
[  112.130340]  [] bio_endio+0x1d/0x30
[  112.130348]  [] req_bio_endio.isra.63+0x8b/0xd0
[  112.130351]  [] blk_update_request+0x108/0x4d0
[  112.130354]  [] ? _raw_spin_unlock+0xe/0x20
[  112.130356]  [] blk_update_bidi_request+0x27/0xa0
[  112.130360]  [] __blk_end_bidi_request+0x20/0x50
[  112.130362]  [] __blk_end_request_all+0x1f/0x30
[  112.130373]  [] virtblk_done+0x100/0x240
[  112.130385]  [] vring_interrupt+0x35/0x60
[  112.130395]  [] handle_irq_event_percpu+0x6c/0x230
[  112.130398]  [] handle_irq_event+0x48/0x70
[  112.130404]  [] handle_edge_irq+0x77/0x110
[  112.130408]  [] handle_irq+0x22/0x40
[  112.130412]  [] do_IRQ+0x5a/0xe0
[  112.130414]  [] common_interrupt+0x6d/0x6d
[  112.130416]  
[  112.130420]  [] ? native_safe_halt+0x6/0x10
[  112.130424]  [] default_idle+0x4f/0x220
[  112.130427]  [] cpu_idle+0xb6/0xf0
[  112.130442]  [] start_secondary+0x1aa/0x1ae
[  112.130466] Code: 10 89 47 08 c3 66 66 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 89 e5 48 83 ec 30 48 89 5d e8 48 89 fb 4c 89 65 f0 4c 89 6d f8 <81> 7f 04 ad 4e
[  112.130469] RIP  [] do_raw_spin_lock+0x17/0x110
[  112.130470]  RSP 
[  112.130471] CR2: 000000000000007c
[  112.130474] ---[ end trace e4b9ec49f84b99c2 ]---
[  112.130476] Kernel panic - not syncing: Fatal exception in interrupt

Which indicates that IO completion through kcopyd died.

And finally, this corruption occurred on the third run:

[   68.309233] XFS (dm-0): Corruption detected. Unmount and run xfs_repair
[   68.310281] XFS (dm-0): bad inode magic/vsn daddr 1578640 #0 (magic=5844)
[   68.312136] XFS: Assertion failed: 0, file: fs/xfs/xfs_inode.c, line: 417
[   68.313906] ------------[ cut here ]------------
[   68.314822] kernel BUG at fs/xfs/xfs_message.c:100!
[   68.315838] invalid opcode: 0000 [#1] SMP 
[   68.316117] Modules linked in:
[   68.316117] CPU 3 
[   68.316117] Pid: 4401, comm: kworker/3:1H Not tainted 3.8.0-dgc+ #433 Bochs Bochs
[   68.316117] RIP: 0010:[]  [] assfail+0x22/0x30
[   68.316117] RSP: 0018:ffff88021c70dd08  EFLAGS: 00010296
[   68.316117] RAX: 000000000000003d RBX: 0000000000000000 RCX: 0000000000008f8e
[   68.316117] RDX: 0000000000008e8e RSI: 0000000000000096 RDI: 0000000000000246
[   68.316117] RBP: ffff88021c70dd08 R08: 000000000000000a R09: 0000000000000215
[   68.316117] R10: 0000000000000000 R11: 0000000000000214 R12: ffff88011d005000
[   68.316117] R13: ffff88011b71d5c0 R14: 0000000000000020 R15: ffff88011b5d1000
[   68.316117] FS:  0000000000000000(0000) GS:ffff88021fc00000(0000) knlGS:0000000000000000
[   68.316117] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[   68.316117] CR2: 0000000000757000 CR3: 000000019a1ac000 CR4: 00000000000006e0
[   68.316117] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   68.316117] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   68.316117] Process kworker/3:1H (pid: 4401, threadinfo ffff88021c70c000, task ffff88021c97c280)
[   68.316117] Stack:
[   68.316117]  ffff88021c70dd68 ffffffff814b2c37 ffffffff814b2cee 0000000000000003
[   68.316117]  ffff88021c70dd48 ffff88011d0052a0 ffff88021fc12c40 ffff88011b71d688
[   68.316117]  ffff88011b71d5c0 0000000000000001 0000000000000000 ffff88011b71d688
[   68.316117] Call Trace:
[   68.316117]  [] xfs_inode_buf_verify+0xf7/0x190
[   68.316117]  [] ? xfs_inode_buf_read_verify+0xe/0x10
[   68.316117]  [] xfs_inode_buf_read_verify+0xe/0x10
[   68.316117]  [] xfs_buf_iodone_work+0xc5/0xf0
[   68.316117]  [] process_one_work+0x132/0x4e0
[   68.316117]  [] ? xfs_buf_unlock+0xa0/0xa0
[   68.316117]  [] worker_thread+0x15d/0x450
[   68.316117]  [] ? __next_gcwq_cpu+0x60/0x60
[   68.316117]  [] kthread+0xd8/0xe0
[   68.316117]  [] ? flush_kthread_worker+0xa0/0xa0
[   68.316117]  [] ret_from_fork+0x7c/0xb0
[   68.316117]  [] ? flush_kthread_worker+0xa0/0xa0

Which is on a buffer that has never had read-ahead issued on it. It's an
inode buffer, but the partial magic number indicates that the block
contains directory data of some kind. IOWs, it looks to me like data
from the wrong blocks was returned to XFS....

[Ben: now do you see why I want every single piece of XFS metadata to be
completely self describing, with embedded UUIDs, block/inode numbers and
owners? Having a block/inode number encoded into the metadata will
detect these sorts of storage layer bugs immediately and tell us what
class of error is occurring without having to guess... A rough sketch of
the idea follows the patch at the end of this mail.]

All of the evidence points to a problem caused by the pvmove operation.
I'd suggest that you don't use it until we get to the bottom of the
problem (i.e. where the EAGAIN is coming from and why)....

Cheers,

Dave.

--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1022,7 +1022,9 @@ xfs_buf_iodone_work(
 	bool		read = !!(bp->b_flags & XBF_READ);
 
 	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD);
-	if (read && bp->b_ops)
+
+	/* validate buffers that were read without errors */
+	if (read && bp->b_ops && !bp->b_error && (bp->b_flags & XBF_DONE))
 		bp->b_ops->verify_read(bp);
 
 	if (bp->b_iodone)
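And to make the self-describing metadata point above concrete, here is a
minimal illustration of the idea. The structure and function names are
hypothetical, and this is not the real XFS on-disk format or verifier
code, just a sketch of what embedding the block's identity buys us:

#include <linux/types.h>
#include <linux/string.h>
#include <asm/byteorder.h>

/*
 * Hypothetical self-describing metadata header - an illustration of the
 * concept only, not the real XFS on-disk format.
 */
struct sd_block_hdr {
	__be32		magic;		/* what this block claims to be */
	__be64		blkno;		/* where this block should live */
	unsigned char	uuid[16];	/* which filesystem it belongs to */
};

/*
 * With the owner encoded in the block itself, a read verifier can tell
 * "the storage returned the wrong block" (blkno mismatch) or "a block
 * from some other filesystem" (uuid mismatch) apart from genuine
 * on-disk corruption, instead of guessing from a partial magic number.
 */
static bool
sd_block_verify(
	struct sd_block_hdr	*hdr,
	__u32			expected_magic,
	__u64			expected_blkno,
	const unsigned char	*fs_uuid)
{
	if (be32_to_cpu(hdr->magic) != expected_magic)
		return false;
	if (be64_to_cpu(hdr->blkno) != expected_blkno)
		return false;
	if (memcmp(hdr->uuid, fs_uuid, sizeof(hdr->uuid)) != 0)
		return false;
	return true;
}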