[1/4] btrfs: backref, only collect file extent items matching backref offset

Message ID 20200207093818.23710-2-ethanwu@synology.com
State New
Series
  • btrfs: improve normal backref walking

Commit Message

ethanwu Feb. 7, 2020, 9:38 a.m. UTC
When resolving a backref of type EXTENT_DATA_REF, we collect all
references that simply reference the EXTENT_ITEM, even though their
(file_pos - file_extent_item::offset) is not the same as the
btrfs_extent_data_ref::offset we are searching for.

This patch adds an additional check so that we only collect references
whose (file_pos - file_extent_item::offset) == btrfs_extent_data_ref::offset.

Signed-off-by: ethanwu <ethanwu@synology.com>
---
 fs/btrfs/backref.c | 63 ++++++++++++++++++++++++----------------------
 1 file changed, 33 insertions(+), 30 deletions(-)
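The condition can be illustrated with a small standalone model. The struct and helper below are simplified stand-ins for illustration only, not the btrfs on-disk structures or the actual patch code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Simplified model of the check this patch adds: while resolving an
 * EXTENT_DATA_REF backref, a file extent item found at file position
 * `file_pos` belongs to that backref only when
 * (file_pos - file_extent_item::offset) equals
 * btrfs_extent_data_ref::offset.
 */
struct file_extent_item_model {
	uint64_t offset; /* bytes into the extent where this file item starts */
};

static int ref_matches(uint64_t file_pos,
		       const struct file_extent_item_model *fi,
		       uint64_t data_ref_offset)
{
	/* Collect the reference only when the computed offset matches. */
	return file_pos - fi->offset == data_ref_offset;
}
```

Before the patch, every file extent item referencing the EXTENT_ITEM was collected; with this check, items whose computed offset differs from the data ref offset are skipped.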

Comments

Josef Bacik Feb. 7, 2020, 4:26 p.m. UTC | #1
On 2/7/20 4:38 AM, ethanwu wrote:
> When resolving one backref of type EXTENT_DATA_REF, we collect all
> references that simply references the EXTENT_ITEM even though
> their (file_pos- file_extent_item::offset) are not the same as the
> btrfs_extent_data_ref::offset we are searching.
> 
> This patch add additional check so that we only collect references whose
> (file_pos- file_extent_item::offset) == btrfs_extent_data_ref::offset.
> 
> Signed-off-by: ethanwu <ethanwu@synology.com>

I just want to make sure that btrfs/097 passes still right?  That's what the 
key_for_search thing was about, so I want to make sure we're not regressing.  I 
assume you've run xfstests but I just want to make doubly sure we're good here. 
If you did then you can add

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef
ethanwu Feb. 10, 2020, 9:12 a.m. UTC | #2
Josef Bacik wrote on 2020-02-08 00:26:
> On 2/7/20 4:38 AM, ethanwu wrote:
>> When resolving one backref of type EXTENT_DATA_REF, we collect all
>> references that simply references the EXTENT_ITEM even though
>> their (file_pos- file_extent_item::offset) are not the same as the
>> btrfs_extent_data_ref::offset we are searching.
>> 
>> This patch add additional check so that we only collect references 
>> whose
>> (file_pos- file_extent_item::offset) == btrfs_extent_data_ref::offset.
>> 
>> Signed-off-by: ethanwu <ethanwu@synology.com>
> 
> I just want to make sure that btrfs/097 passes still right?  That's
> what the key_for_search thing was about, so I want to make sure we're
> not regressing.  I assume you've run xfstests but I just want to make
> doubly sure we're good here. If you did then you can add
> 
> Reviewed-by: Josef Bacik <josef@toxicpanda.com>
> 
> Thanks,
> 
> Josef

Thanks for reviewing.

I've run the btrfs part of xfstests; 097 passed.
It failed at the following tests:
074 (failed 2 out of 5 runs),
139, 153, 154,
197, 198 (patches related to these 2 tests seem not to be merged yet?),
201, 202

My kernel environment is 5.5-rc5, and this branch doesn't contain the
fixes for tests 201 and 202.
All these failing tests also failed at the same version without my
patch.

Thanks,
ethanwu
Johannes Thumshirn Feb. 10, 2020, 10:33 a.m. UTC | #3
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
David Sterba Feb. 10, 2020, 4:29 p.m. UTC | #4
On Mon, Feb 10, 2020 at 05:12:48PM +0800, ethanwu wrote:
> Josef Bacik wrote on 2020-02-08 00:26:
> > On 2/7/20 4:38 AM, ethanwu wrote:
> >> When resolving one backref of type EXTENT_DATA_REF, we collect all
> >> references that simply references the EXTENT_ITEM even though
> >> their (file_pos- file_extent_item::offset) are not the same as the
> >> btrfs_extent_data_ref::offset we are searching.
> >> 
> >> This patch add additional check so that we only collect references 
> >> whose
> >> (file_pos- file_extent_item::offset) == btrfs_extent_data_ref::offset.
> >> 
> >> Signed-off-by: ethanwu <ethanwu@synology.com>
> > 
> > I just want to make sure that btrfs/097 passes still right?  That's
> > what the key_for_search thing was about, so I want to make sure we're
> > not regressing.  I assume you've run xfstests but I just want to make
> > doubly sure we're good here. If you did then you can add
> > 
> > Reviewed-by: Josef Bacik <josef@toxicpanda.com>
> > 
> > Thanks,
> > 
> > Josef
> 
> Thanks for reviewing.
> 
> I've run the btrfs part of xfstests, 097 passed.
> Failed at following tests:
> 074 (failed 2 out of 5 runs),
> 139, 153, 154,
> 197, 198(Patches related to these 2 tests seem to be not merged yet?)
> 201, 202
> 
> My kernel environment is 5.5-rc5, and this branch doesn't contain
> fixes for tests 201 and 202.
> All these failing tests also failed at the same version without my 
> patch.

I tested the patchset in my environment and besides the above known
and unrelated failures, there's one that seems to be new and possibly
related to the patches:

btrfs/125               [18:18:14][ 5937.333021] run fstests btrfs/125 at 2020-02-07 18:18:14
[ 5937.737705] BTRFS info (device vda): disk space caching is enabled
[ 5937.741359] BTRFS info (device vda): has skinny extents
[ 5938.318881] BTRFS: device fsid e34ea734-aeef-484b-8a5b-d061e3bad8c5 devid 1 transid 5 /dev/vdb scanned by mkfs.btrfs (21913)
[ 5938.323180] BTRFS: device fsid e34ea734-aeef-484b-8a5b-d061e3bad8c5 devid 2 transid 5 /dev/vdc scanned by mkfs.btrfs (21913)
[ 5938.327229] BTRFS: device fsid e34ea734-aeef-484b-8a5b-d061e3bad8c5 devid 3 transid 5 /dev/vdd scanned by mkfs.btrfs (21913)
[ 5938.344608] BTRFS info (device vdb): disk space caching is enabled
[ 5938.347892] BTRFS info (device vdb): has skinny extents
[ 5938.350941] BTRFS info (device vdb): flagging fs with big metadata feature
[ 5938.360083] BTRFS info (device vdb): checking UUID tree
[ 5938.470343] BTRFS: device fsid e34ea734-aeef-484b-8a5b-d061e3bad8c5 devid 2 transid 7 /dev/vdc scanned by mount (21955)
[ 5938.476444] BTRFS: device fsid e34ea734-aeef-484b-8a5b-d061e3bad8c5 devid 1 transid 7 /dev/vdb scanned by mount (21955)
[ 5938.480289] BTRFS info (device vdb): allowing degraded mounts
[ 5938.483738] BTRFS info (device vdb): disk space caching is enabled
[ 5938.487557] BTRFS info (device vdb): has skinny extents
[ 5938.491416] BTRFS warning (device vdb): devid 3 uuid f86704f4-689c-4677-b5f2-5380cf6be2d3 is missing
[ 5938.493879] BTRFS warning (device vdb): devid 3 uuid f86704f4-689c-4677-b5f2-5380cf6be2d3 is missing
[ 5939.233288] BTRFS: device fsid 265be525-bf76-447b-8db6-d69b0d3885d1 devid 1 transid 250 /dev/vda scanned by btrfs (21983)
[ 5939.250044] BTRFS info (device vdb): disk space caching is enabled
[ 5939.253525] BTRFS info (device vdb): has skinny extents
[ 5949.283384] BTRFS info (device vdb): balance: start -d -m -s
[ 5949.288563] BTRFS info (device vdb): relocating block group 217710592 flags data|raid5
[ 5949.322231] BTRFS error (device vdb): bad tree block start, want 39862272 have 30949376
[ 5949.328136] repair_io_failure: 22 callbacks suppressed
[ 5949.328139] BTRFS info (device vdb): read error corrected: ino 0 off 39862272 (dev /dev/vdd sector 19488)
[ 5949.333447] BTRFS info (device vdb): read error corrected: ino 0 off 39866368 (dev /dev/vdd sector 19496)
[ 5949.336875] BTRFS info (device vdb): read error corrected: ino 0 off 39870464 (dev /dev/vdd sector 19504)
[ 5949.340325] BTRFS info (device vdb): read error corrected: ino 0 off 39874560 (dev /dev/vdd sector 19512)
[ 5949.409934] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2228224 csum 0x5f6faf4265e0e04dc797f32ab085653d60954cfd976b257c83e7cd825ae7c98e expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.414764] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2555904 csum 0xde6a48c4c66a765d0142c27fee1ec429055152fe3f10d70e4ef59a9d7a071bdc expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.414774] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2621440 csum 0x47800732ac4471f4aced9c4fe35cb6046c401792a99daa02ccbc35e0b4632496 expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.414946] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2637824 csum 0x769bd186841c10e5b1106b55986206c0e87fc05a7f565fdee01b5abcaff6ae78 expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.415061] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2641920 csum 0x769bd186841c10e5b1106b55986206c0e87fc05a7f565fdee01b5abcaff6ae78 expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.415136] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2646016 csum 0x769bd186841c10e5b1106b55986206c0e87fc05a7f565fdee01b5abcaff6ae78 expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.415214] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2650112 csum 0x769bd186841c10e5b1106b55986206c0e87fc05a7f565fdee01b5abcaff6ae78 expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.415260] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2654208 csum 0x769bd186841c10e5b1106b55986206c0e87fc05a7f565fdee01b5abcaff6ae78 expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.415304] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2658304 csum 0x769bd186841c10e5b1106b55986206c0e87fc05a7f565fdee01b5abcaff6ae78 expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.415348] BTRFS warning (device vdb): csum failed root -9 ino 257 off 2662400 csum 0x769bd186841c10e5b1106b55986206c0e87fc05a7f565fdee01b5abcaff6ae78 expected csum 0xad7facb2586fc6e966c004d7d1d16b024f5805ff7cb47c7a85dabd8b48892ca7 mirror 1
[ 5949.419530] BTRFS info (device vdb): read error corrected: ino 257 off 2621440 (dev /dev/vdd sector 195712)
[ 5949.420414] BTRFS info (device vdb): read error corrected: ino 257 off 2641920 (dev /dev/vdd sector 195752)
[ 5949.420528] BTRFS info (device vdb): read error corrected: ino 257 off 2637824 (dev /dev/vdd sector 195744)
[ 5949.420651] BTRFS info (device vdb): read error corrected: ino 257 off 2650112 (dev /dev/vdd sector 195768)
[ 5949.420771] BTRFS info (device vdb): read error corrected: ino 257 off 2654208 (dev /dev/vdd sector 195776)
[ 5949.420886] BTRFS info (device vdb): read error corrected: ino 257 off 2662400 (dev /dev/vdd sector 195792)
[ 5949.527064] BTRFS error (device vdb): bad tree block start, want 39059456 have 30539776
[ 5949.527461] BTRFS error (device vdb): bad tree block start, want 39075840 have 30556160
[ 5949.527646] BTRFS error (device vdb): bad tree block start, want 39092224 have 0
[ 5949.527664] BTRFS error (device vdb): bad tree block start, want 39108608 have 30588928
[ 5949.546199] BTRFS error (device vdb): bad tree block start, want 39075840 have 30556160
[ 5949.579222] BTRFS error (device vdb): bad tree block start, want 39092224 have 0
[ 5949.589051] BTRFS error (device vdb): bad tree block start, want 39108608 have 30588928
[ 5949.828796] BTRFS error (device vdb): bad tree block start, want 39387136 have 30670848
[ 5949.828804] BTRFS error (device vdb): bad tree block start, want 39403520 have 0
[ 5950.430515] BTRFS info (device vdb): balance: ended with status: -5
[ 5950.450348] btrfs (22010) used greatest stack depth: 10304 bytes left
[failed, exit status 1][ 5950.461088] BTRFS info (device vdb): clearing incompat feature flag for RAID56 (0x80)
 [18:18:27]- output mismatch (see /tmp/fstests/results//btrfs/125.out.bad)
    --- tests/btrfs/125.out     2018-04-12 16:57:00.616225550 +0000
    +++ /tmp/fstests/results//btrfs/125.out.bad 2020-02-07 18:18:27.496000000 +0000
    @@ -3,5 +3,5 @@
     Write data with degraded mount
     
     Mount normal and balance
    -
    -Mount degraded but with other dev
    +failed: '/sbin/btrfs balance start /tmp/scratch'
    +(see /tmp/fstests/results//btrfs/125.full for details)
    ...
    (Run 'diff -u /tmp/fstests/tests/btrfs/125.out /tmp/fstests/results//btrfs/125.out.bad'  to see the entire diff)
ethanwu Feb. 11, 2020, 4:03 a.m. UTC | #5
David Sterba wrote on 2020-02-11 00:29:
> On Mon, Feb 10, 2020 at 05:12:48PM +0800, ethanwu wrote:
>> Josef Bacik wrote on 2020-02-08 00:26:
>> > On 2/7/20 4:38 AM, ethanwu wrote:
>> >> When resolving one backref of type EXTENT_DATA_REF, we collect all
>> >> references that simply references the EXTENT_ITEM even though
>> >> their (file_pos- file_extent_item::offset) are not the same as the
>> >> btrfs_extent_data_ref::offset we are searching.
>> >>
>> >> This patch add additional check so that we only collect references
>> >> whose
>> >> (file_pos- file_extent_item::offset) == btrfs_extent_data_ref::offset.
>> >>
>> >> Signed-off-by: ethanwu <ethanwu@synology.com>
>> >
>> > I just want to make sure that btrfs/097 passes still right?  That's
>> > what the key_for_search thing was about, so I want to make sure we're
>> > not regressing.  I assume you've run xfstests but I just want to make
>> > doubly sure we're good here. If you did then you can add
>> >
>> > Reviewed-by: Josef Bacik <josef@toxicpanda.com>
>> >
>> > Thanks,
>> >
>> > Josef
>> 
>> Thanks for reviewing.
>> 
>> I've run the btrfs part of xfstests, 097 passed.
>> Failed at following tests:
>> 074 (failed 2 out of 5 runs),
>> 139, 153, 154,
>> 197, 198(Patches related to these 2 tests seem to be not merged yet?)
>> 201, 202
>> 
>> My kernel environment is 5.5-rc5, and this branch doesn't contain
>> fixes for tests 201 and 202.
>> All these failing tests also failed at the same version without my
>> patch.
> 
> I tested the patchset in my environment and besides the above known
> and unrelated failures, there's one that seems to be new and possibly
> related to the patches:
> 
> [btrfs/125 dmesg log and output diff snipped; quoted in full from the message above]

Hi,
I've rebased my kernel environment onto the latest for-next branch,
and xfstests is updated to the latest as well.
Test 125 still passes, over many runs, without a single failure.

here's my local.config

export TEST_DEV=/dev/sdc
export TEST_DIR=/mnt/test
export SCRATCH_MNT=/mnt/scratch
export FSTYP=btrfs
export SCRATCH_DEV_POOL="/dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh"

each device has 80GB capacity.

Besides, LOGWRITES_DEV is not set and CONFIG_FAULT_INJECTION_DEBUG_FS
is turned off, but both seem to be unrelated to 125.

thanks,
ethanwu
Qu Wenruo Feb. 11, 2020, 4:33 a.m. UTC | #6
On 2020/2/11 12:03 PM, ethanwu wrote:
> David Sterba wrote on 2020-02-11 00:29:
>> On Mon, Feb 10, 2020 at 05:12:48PM +0800, ethanwu wrote:
>> Josef Bacik wrote on 2020-02-08 00:26:
>>> > On 2/7/20 4:38 AM, ethanwu wrote:
>>> >> When resolving one backref of type EXTENT_DATA_REF, we collect all
>>> >> references that simply references the EXTENT_ITEM even though
>>> >> their (file_pos- file_extent_item::offset) are not the same as the
>>> >> btrfs_extent_data_ref::offset we are searching.
>>> >>
>>> >> This patch add additional check so that we only collect references
>>> >> whose
>>> >> (file_pos- file_extent_item::offset) ==
>>> btrfs_extent_data_ref::offset.
>>> >>
>>> >> Signed-off-by: ethanwu <ethanwu@synology.com>
>>> >
>>> > I just want to make sure that btrfs/097 passes still right?  That's
>>> > what the key_for_search thing was about, so I want to make sure we're
>>> > not regressing.  I assume you've run xfstests but I just want to make
>>> > doubly sure we're good here. If you did then you can add
>>> >
>>> > Reviewed-by: Josef Bacik <josef@toxicpanda.com>
>>> >
>>> > Thanks,
>>> >
>>> > Josef
>>>
>>> Thanks for reviewing.
>>>
>>> I've run the btrfs part of xfstests, 097 passed.
>>> Failed at following tests:
>>> 074 (failed 2 out of 5 runs),
>>> 139, 153, 154,
>>> 197, 198(Patches related to these 2 tests seem to be not merged yet?)
>>> 201, 202
>>>
>>> My kernel environment is 5.5-rc5, and this branch doesn't contain
>>> fixes for tests 201 and 202.
>>> All these failing tests also failed at the same version without my
>>> patch.
>>
>> I tested the patchset in my environment and besides the above known
>> and unrelated failures, there's one that seems to be new and possibly
>> related to the patches:
>>
>> [btrfs/125 dmesg log snipped]
>> [ 5949.409934] BTRFS warning (device vdb): csum failed root -9 ino 257
>> off 2228224 csum

This looks like an existing bug, IIRC Zygo reported it before.

Btrfs balance just randomly failed at data reloc tree.

Thus I don't believe it's related to Ethan's patches.

Thanks,
Qu

David Sterba Feb. 11, 2020, 6:21 p.m. UTC | #7
On Tue, Feb 11, 2020 at 12:33:48PM +0800, Qu Wenruo wrote:
> >> 39862272 have 30949376
> >> [ 5949.328136] repair_io_failure: 22 callbacks suppressed
> >> [ 5949.328139] BTRFS info (device vdb): read error corrected: ino 0
> >> off 39862272 (dev /dev/vdd sector 19488)
> >> [ 5949.333447] BTRFS info (device vdb): read error corrected: ino 0
> >> off 39866368 (dev /dev/vdd sector 19496)
> >> [ 5949.336875] BTRFS info (device vdb): read error corrected: ino 0
> >> off 39870464 (dev /dev/vdd sector 19504)
> >> [ 5949.340325] BTRFS info (device vdb): read error corrected: ino 0
> >> off 39874560 (dev /dev/vdd sector 19512)
> >> [ 5949.409934] BTRFS warning (device vdb): csum failed root -9 ino 257
> >> off 2228224 csum
> 
> This looks like an existing bug, IIRC Zygo reported it before.
> 
> Btrfs balance just randomly failed at data reloc tree.
> 
> Thus I don't believe it's related to Ethan's patches.

Ok, then the patches make it more likely to happen, which could mean
that faster backref processing hits some race window. As there could be
more, we should first fix the bug you say Zygo reported.
ethanwu Feb. 12, 2020, 11:32 a.m. UTC | #8
David Sterba wrote on 2020-02-12 02:21:
> On Tue, Feb 11, 2020 at 12:33:48PM +0800, Qu Wenruo wrote:
>> >> 39862272 have 30949376
>> >> [ 5949.328136] repair_io_failure: 22 callbacks suppressed
>> >> [ 5949.328139] BTRFS info (device vdb): read error corrected: ino 0
>> >> off 39862272 (dev /dev/vdd sector 19488)
>> >> [ 5949.333447] BTRFS info (device vdb): read error corrected: ino 0
>> >> off 39866368 (dev /dev/vdd sector 19496)
>> >> [ 5949.336875] BTRFS info (device vdb): read error corrected: ino 0
>> >> off 39870464 (dev /dev/vdd sector 19504)
>> >> [ 5949.340325] BTRFS info (device vdb): read error corrected: ino 0
>> >> off 39874560 (dev /dev/vdd sector 19512)
>> >> [ 5949.409934] BTRFS warning (device vdb): csum failed root -9 ino 257
>> >> off 2228224 csum
>> 
>> This looks like an existing bug, IIRC Zygo reported it before.
>> 
>> Btrfs balance just randomly failed at data reloc tree.
>> 
>> Thus I don't believe it's related to Ethan's patches.
> 
> Ok, than the patches make it more likely to happen, which could mean
> that faster backref processing hits some race window. As there could be
> more we should first fix the bug you say Zygo reported.

I added a log to check whether find_parent_nodes is ever called during
test btrfs/125. It turns out that btrfs/125 never goes through that
function, and everything my patches change is under find_parent_nodes.
Therefore, I don't think my patches make the btrfs/125 failure more
likely to happen; at the least, they don't change the behavior of any
function btrfs/125 runs through.

Is it easy to reproduce in your test environment?

Thanks,
ethanwu
Filipe Manana Feb. 12, 2020, 12:03 p.m. UTC | #9
On Wed, Feb 12, 2020 at 11:34 AM ethanwu <ethanwu@synology.com> wrote:
>
> David Sterba wrote on 2020-02-12 02:21:
> > On Tue, Feb 11, 2020 at 12:33:48PM +0800, Qu Wenruo wrote:
> >> >> 39862272 have 30949376
> >> >> [ 5949.328136] repair_io_failure: 22 callbacks suppressed
> >> >> [ 5949.328139] BTRFS info (device vdb): read error corrected: ino 0
> >> >> off 39862272 (dev /dev/vdd sector 19488)
> >> >> [ 5949.333447] BTRFS info (device vdb): read error corrected: ino 0
> >> >> off 39866368 (dev /dev/vdd sector 19496)
> >> >> [ 5949.336875] BTRFS info (device vdb): read error corrected: ino 0
> >> >> off 39870464 (dev /dev/vdd sector 19504)
> >> >> [ 5949.340325] BTRFS info (device vdb): read error corrected: ino 0
> >> >> off 39874560 (dev /dev/vdd sector 19512)
> >> >> [ 5949.409934] BTRFS warning (device vdb): csum failed root -9 ino 257
> >> >> off 2228224 csum
> >>
> >> This looks like an existing bug, IIRC Zygo reported it before.
> >>
> >> Btrfs balance just randomly failed at data reloc tree.
> >>
> >> Thus I don't believe it's related to Ethan's patches.
> >
> > Ok, than the patches make it more likely to happen, which could mean
> > that faster backref processing hits some race window. As there could be
> > more we should first fix the bug you say Zygo reported.
>
> I added a log to check if find_parent_nodes is ever called under
> test btrfs/125. It turns out that btrfs/125 doesn't pass through the
> function. What my patches do is all under find_parent_nodes.
> Therefore, I don't think my patch would make btrfs/125 more likely
> to happen, at least it doesn't change the behavior of functions
> btrfs/125 run through.

Yep, test btrfs/125 doesn't trigger backreference walking.
Backreference walking is used by fiemap, send, the logical ino ioctls,
scrub (in error paths to get a path for an inode), and qgroups. The
test doesn't use any of these features.

I've seen that test fail for the same reason once as well, so it's
definitely not caused by your patches.

Thanks.

>
> Is it easy to reproduce in your test environment?
>
> Thanks,
> ethanwu
Qu Wenruo Feb. 12, 2020, 12:11 p.m. UTC | #10
On 2020/2/12 下午7:32, ethanwu wrote:
> David Sterba 於 2020-02-12 02:21 寫到:
>> On Tue, Feb 11, 2020 at 12:33:48PM +0800, Qu Wenruo wrote:
>>> >> 39862272 have 30949376
>>> >> [ 5949.328136] repair_io_failure: 22 callbacks suppressed
>>> >> [ 5949.328139] BTRFS info (device vdb): read error corrected: ino 0
>>> >> off 39862272 (dev /dev/vdd sector 19488)
>>> >> [ 5949.333447] BTRFS info (device vdb): read error corrected: ino 0
>>> >> off 39866368 (dev /dev/vdd sector 19496)
>>> >> [ 5949.336875] BTRFS info (device vdb): read error corrected: ino 0
>>> >> off 39870464 (dev /dev/vdd sector 19504)
>>> >> [ 5949.340325] BTRFS info (device vdb): read error corrected: ino 0
>>> >> off 39874560 (dev /dev/vdd sector 19512)
> >>> >> [ 5949.409934] BTRFS warning (device vdb): csum failed root -9 ino 257
> >>> >> off 2228224 csum
>>>
>>> This looks like an existing bug, IIRC Zygo reported it before.
>>>
>>> Btrfs balance just randomly failed at data reloc tree.
>>>
>>> Thus I don't believe it's related to Ethan's patches.
>>
> >> Ok, then the patches make it more likely to happen, which could mean
>> that faster backref processing hits some race window. As there could be
>> more we should first fix the bug you say Zygo reported.
> 
> I added a log to check if find_parent_nodes is ever called under
> test btrfs/125. It turns out that btrfs/125 doesn't pass through the
> function. What my patches do is all under find_parent_nodes.

Balance goes through its own backref cache, thus it doesn't utilize the
path you're modifying.

So don't worry, your patches look pretty good.

Furthermore, this csum mismatch is not related to backref walking, but to
the data csums and the data in the data reloc tree, which are all created
by balance.

So there is really no reason to block such good optimization.

Thanks,
Qu
> Therefore, I don't think my patches make the btrfs/125 failure more
> likely to happen; at least they don't change the behavior of the
> functions btrfs/125 runs through.
> 
> Is it easy to reproduce in your test environment?
>
> Thanks,
> ethanwu
David Sterba Feb. 12, 2020, 2:57 p.m. UTC | #11
On Wed, Feb 12, 2020 at 08:11:56PM +0800, Qu Wenruo wrote:
> >>>
> >>> This looks like an existing bug, IIRC Zygo reported it before.
> >>>
> >>> Btrfs balance just randomly failed at data reloc tree.
> >>>
> >>> Thus I don't believe it's related to Ethan's patches.
> >>
> >> Ok, then the patches make it more likely to happen, which could mean
> >> that faster backref processing hits some race window. As there could be
> >> more we should first fix the bug you say Zygo reported.
> > 
> > I added a log to check if find_parent_nodes is ever called under
> > test btrfs/125. It turns out that btrfs/125 doesn't pass through the
> > function. What my patches do is all under find_parent_nodes.
> 
> Balance goes through its own backref cache, thus it doesn't utilize the
> path you're modifying.
> 
> So don't worry your patches look pretty good.
> 
> Furthermore, this csum mismatch is not related to backref walk, but the
> data csum and the data in data reloc tree, which are all created by balance.
> 
> So there is really no reason to block such good optimization.

I don't mean to block the patchset, but when I test patchsets from 5
people and tests start to fail, I need to know what the cause is and
whether there's a fix in sight. So far the test has failed 2 out of 2
times (once on the branch itself and then with for-next). I can do more
rounds, but at this point it reproduces too reliably for there to be no
connection.

Sometimes it looks like I'm blaming the messenger, complaining under
patches that don't cause the bugs, but often I don't have anything
better than new warnings between two test rounds. Once we have more
eyes on the problem we'll narrow it down and find the root cause.
Qu Wenruo Feb. 13, 2020, 12:59 a.m. UTC | #12
On 2020/2/12 下午10:57, David Sterba wrote:
> On Wed, Feb 12, 2020 at 08:11:56PM +0800, Qu Wenruo wrote:
>>>>>
>>>>> This looks like an existing bug, IIRC Zygo reported it before.
>>>>>
>>>>> Btrfs balance just randomly failed at data reloc tree.
>>>>>
>>>>> Thus I don't believe it's related to Ethan's patches.
>>>>
>>>> Ok, then the patches make it more likely to happen, which could mean
>>>> that faster backref processing hits some race window. As there could be
>>>> more we should first fix the bug you say Zygo reported.
>>>
>>> I added a log to check if find_parent_nodes is ever called under
>>> test btrfs/125. It turns out that btrfs/125 doesn't pass through the
>>> function. What my patches do is all under find_parent_nodes.
>>
>> Balance goes through its own backref cache, thus it doesn't utilize the
>> path you're modifying.
>>
>> So don't worry your patches look pretty good.
>>
>> Furthermore, this csum mismatch is not related to backref walk, but the
>> data csum and the data in data reloc tree, which are all created by balance.
>>
>> So there is really no reason to block such good optimization.
> 
> I don't mean to block the patchset but when I test patchsets from 5
> people and tests start to fail I need to know what's the cause and if
> there's a fix in sight. So far the test failed 2 out of 2 (once the
> branch itself and then with for-next), I can do more rounds but at this
> point it's too reliable to reproduce so there is some connection.
> 
> Sometimes it looks like I blame the messenger and complaining under
> patches that don't cause the bugs, but often I don't have anything better
> than new warnings between 2 test rounds. Once we have more eyes on the
> problem we'll narrow it down and find the root cause.
> 
BTW, from your initial report, the csum looks pretty long.

Are you testing with those new csum algos? And could that be the reason
why it's much easier to reproduce?

Thanks,
Qu
David Sterba Feb. 18, 2020, 4:54 p.m. UTC | #13
On Thu, Feb 13, 2020 at 08:59:56AM +0800, Qu Wenruo wrote:
> BTW, from your initial report, the csum looks pretty long.
> 
> Are you testing with those new csum algos? And could that be the reason
> why it's much easier to reproduce?

I have various configurations of test setups; some VMs use SHA256, and
that could be the factor that makes the bug reproducible.

Patch
diff mbox series

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index e5d85311d5d5..1572eab3cc06 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -347,33 +347,10 @@  static int add_prelim_ref(const struct btrfs_fs_info *fs_info,
 		return -ENOMEM;
 
 	ref->root_id = root_id;
-	if (key) {
+	if (key)
 		ref->key_for_search = *key;
-		/*
-		 * We can often find data backrefs with an offset that is too
-		 * large (>= LLONG_MAX, maximum allowed file offset) due to
-		 * underflows when subtracting a file's offset with the data
-		 * offset of its corresponding extent data item. This can
-		 * happen for example in the clone ioctl.
-		 * So if we detect such case we set the search key's offset to
-		 * zero to make sure we will find the matching file extent item
-		 * at add_all_parents(), otherwise we will miss it because the
-		 * offset taken form the backref is much larger then the offset
-		 * of the file extent item. This can make us scan a very large
-		 * number of file extent items, but at least it will not make
-		 * us miss any.
-		 * This is an ugly workaround for a behaviour that should have
-		 * never existed, but it does and a fix for the clone ioctl
-		 * would touch a lot of places, cause backwards incompatibility
-		 * and would not fix the problem for extents cloned with older
-		 * kernels.
-		 */
-		if (ref->key_for_search.type == BTRFS_EXTENT_DATA_KEY &&
-		    ref->key_for_search.offset >= LLONG_MAX)
-			ref->key_for_search.offset = 0;
-	} else {
+	else
 		memset(&ref->key_for_search, 0, sizeof(ref->key_for_search));
-	}
 
 	ref->inode_list = NULL;
 	ref->level = level;
@@ -424,6 +401,7 @@  static int add_all_parents(struct btrfs_root *root, struct btrfs_path *path,
 	u64 disk_byte;
 	u64 wanted_disk_byte = ref->wanted_disk_byte;
 	u64 count = 0;
+	u64 data_offset;
 
 	if (level != 0) {
 		eb = path->nodes[level];
@@ -457,11 +435,15 @@  static int add_all_parents(struct btrfs_root *root, struct btrfs_path *path,
 
 		fi = btrfs_item_ptr(eb, slot, struct btrfs_file_extent_item);
 		disk_byte = btrfs_file_extent_disk_bytenr(eb, fi);
+		data_offset = btrfs_file_extent_offset(eb, fi);
 
 		if (disk_byte == wanted_disk_byte) {
 			eie = NULL;
 			old = NULL;
-			count++;
+			if (ref->key_for_search.offset == key.offset - data_offset)
+				count++;
+			else
+				goto next;
 			if (extent_item_pos) {
 				ret = check_extent_in_eb(&key, eb, fi,
 						*extent_item_pos,
@@ -513,6 +495,7 @@  static int resolve_indirect_ref(struct btrfs_fs_info *fs_info,
 	int root_level;
 	int level = ref->level;
 	int index;
+	struct btrfs_key search_key = ref->key_for_search;
 
 	root_key.objectid = ref->root_id;
 	root_key.type = BTRFS_ROOT_ITEM_KEY;
@@ -545,13 +528,33 @@  static int resolve_indirect_ref(struct btrfs_fs_info *fs_info,
 		goto out;
 	}
 
+	/*
+	 * We can often find data backrefs with an offset that is too
+	 * large (>= LLONG_MAX, maximum allowed file offset) due to
+	 * underflows when subtracting a file's offset with the data
+	 * offset of its corresponding extent data item. This can
+	 * happen for example in the clone ioctl.
+	 * So if we detect such case we set the search key's offset to
+	 * zero to make sure we will find the matching file extent item
+	 * at add_all_parents(), otherwise we will miss it because the
+	 * offset taken form the backref is much larger then the offset
+	 * of the file extent item. This can make us scan a very large
+	 * number of file extent items, but at least it will not make
+	 * us miss any.
+	 * This is an ugly workaround for a behaviour that should have
+	 * never existed, but it does and a fix for the clone ioctl
+	 * would touch a lot of places, cause backwards incompatibility
+	 * and would not fix the problem for extents cloned with older
+	 * kernels.
+	 */
+	if (search_key.type == BTRFS_EXTENT_DATA_KEY &&
+	    search_key.offset >= LLONG_MAX)
+		search_key.offset = 0;
 	path->lowest_level = level;
 	if (time_seq == SEQ_LAST)
-		ret = btrfs_search_slot(NULL, root, &ref->key_for_search, path,
-					0, 0);
+		ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0);
 	else
-		ret = btrfs_search_old_slot(root, &ref->key_for_search, path,
-					    time_seq);
+		ret = btrfs_search_old_slot(root, &search_key, path, time_seq);
 
 	/* root node has been locked, we can release @subvol_srcu safely here */
 	srcu_read_unlock(&fs_info->subvol_srcu, index);