diff mbox series

[v3,09/11] core.fsyncmethod: tests for batch mode

Message ID b5f371e97fee69d87da1dccd3180de0691c15834.1648097906.git.gitgitgadget@gmail.com (mailing list archive)
State Superseded
Series core.fsyncmethod: add 'batch' mode for faster fsyncing of multiple objects

Commit Message

Neeraj Singh (WINDOWS-SFS) March 24, 2022, 4:58 a.m. UTC
From: Neeraj Singh <neerajsi@microsoft.com>

Add test cases to exercise batch mode for:
 * 'git add'
 * 'git stash'
 * 'git update-index'
 * 'git unpack-objects'

These tests ensure that the added data winds up in the object database.

In this change we introduce a new test helper lib-unique-files.sh. The
goal of this library is to create a tree of files that have different
oids from any other files that may have been created in the current test
repo. This helps us avoid missing validation of an object being added
due to it already being in the repo.

Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
---
 t/lib-unique-files.sh  | 32 ++++++++++++++++++++++++++++++++
 t/t3700-add.sh         | 28 ++++++++++++++++++++++++++++
 t/t3903-stash.sh       | 20 ++++++++++++++++++++
 t/t5300-pack-object.sh | 41 +++++++++++++++++++++++++++--------------
 4 files changed, 107 insertions(+), 14 deletions(-)
 create mode 100644 t/lib-unique-files.sh
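
For illustration, here is a self-contained sketch of what a call like
`test_create_unique_files 2 3 my_dir` to the new helper produces. The real
helper relies on test-lib.sh for `test_tick` and `test_seq`; the literal
tick value 1112911993 below is test-lib.sh's initial `$test_tick`, hard-coded
here only to keep the sketch standalone.

```shell
#!/bin/sh
# Standalone sketch of "test_create_unique_files 2 3 my_dir": two
# directories of three files each, with contents unique to this run.
basedir=my_dir
tick=1112911993             # stands in for $test_tick from test-lib.sh
counter=0
rm -rf "$basedir"
for i in 1 2
do
	mkdir -p "$basedir/dir$i"
	for j in 1 2 3
	do
		counter=$((counter + 1))
		# same content scheme as the helper: $basedir$test_tick.$counter
		echo "$basedir$tick.$counter" >"$basedir/dir$i/file$j.txt"
	done
done
```

Since every file's contents embed the tick and a running counter, no two
invocations in a run produce the same blob.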

Comments

Ævar Arnfjörð Bjarmason March 24, 2022, 4:29 p.m. UTC | #1
On Thu, Mar 24 2022, Neeraj Singh via GitGitGadget wrote:

> From: Neeraj Singh <neerajsi@microsoft.com>
>
> Add test cases to exercise batch mode for:
>  * 'git add'
>  * 'git stash'
>  * 'git update-index'
>  * 'git unpack-objects'
>
> These tests ensure that the added data winds up in the object database.
>
> In this change we introduce a new test helper lib-unique-files.sh. The
> goal of this library is to create a tree of files that have different
> oids from any other files that may have been created in the current test
> repo. This helps us avoid missing validation of an object being added
> due to it already being in the repo.
>
> Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
> ---
>  t/lib-unique-files.sh  | 32 ++++++++++++++++++++++++++++++++
>  t/t3700-add.sh         | 28 ++++++++++++++++++++++++++++
>  t/t3903-stash.sh       | 20 ++++++++++++++++++++
>  t/t5300-pack-object.sh | 41 +++++++++++++++++++++++++++--------------
>  4 files changed, 107 insertions(+), 14 deletions(-)
>  create mode 100644 t/lib-unique-files.sh
>
> diff --git a/t/lib-unique-files.sh b/t/lib-unique-files.sh
> new file mode 100644
> index 00000000000..74efca91dd7
> --- /dev/null
> +++ b/t/lib-unique-files.sh
> @@ -0,0 +1,32 @@
> +# Helper to create files with unique contents
> +
> +# Create multiple files with unique contents within this test run. Takes the
> +# number of directories, the number of files in each directory, and the base
> +# directory.
> +#
> +# test_create_unique_files 2 3 my_dir -- Creates 2 directories with 3 files
> +#					 each in my_dir, all with contents
> +#					 different from previous invocations
> +#					 of this command in this run.
> +
> +test_create_unique_files () {
> +	test "$#" -ne 3 && BUG "3 param"
> +
> +	local dirs="$1" &&
> +	local files="$2" &&
> +	local basedir="$3" &&
> +	local counter=0 &&
> +	test_tick &&
> +	local basedata=$basedir$test_tick &&
> +	rm -rf "$basedir" &&
> +	for i in $(test_seq $dirs)
> +	do
> +		local dir=$basedir/dir$i &&
> +		mkdir -p "$dir" &&
> +		for j in $(test_seq $files)
> +		do
> +			counter=$((counter + 1)) &&
> +			echo "$basedata.$counter">"$dir/file$j.txt"
> +		done
> +	done
> +}

Having written my own perf tests for this series, I still don't get why
this is needed, at all.

tl;dr: see below; I think this whole workaround exists because you missed
that "test_when_finished" exists, and that it excludes perf timings.

I.e. I get that if we ran this N times we'd want to wipe our repo
between tests, as for e.g. "git add" you want it to actually add the
objects.

It's what I do with the "hyperfine" command in
https://lore.kernel.org/git/RFC-patch-v2-4.7-61f4f3d7ef4-20220323T140753Z-avarab@gmail.com/
with the "-p" option.

I.e. hyperfine has a way to say "this is setup, but don't measure the
time", which is 1/2 of what you're working around here and in 10/11.

But as 10/11 shows you're limited to one run with t/perf because you
want to not include those "setup" numbers, and "test_perf" has no easy
way to avoid that (but more on that later).

Which b.t.w. I'm really skeptical of as an approach here in any case
(even if we couldn't exclude it from the numbers).

I.e. yes what "hyperfine" does would be preferable, but in exchange for
avoiding that you're comparing samples of one run each.

Surely we're better off with N runs (even if noisy). Given enough of them
the difference will shake out, and our estimated +/- will narrow.

But aside from that, why isn't this just:
	
	for cfg in true false blah
	do
		test_expect_success "setup for $cfg" '
			git init repo-$cfg &&
			for f in $(test_seq 1 100)
			do
				>repo-$cfg/$f
			done
		'
	
		test_perf "perf test for $cfg" '
			git -C repo-$cfg
		'
	done

Which surely is going to be more accurate in the context of our limited
t/perf environment because creating unique files is not sufficient at
all to ensure that your tests don't interfere with each other.

That's because in the first iteration we'll create N objects in
.git/objects/aa/* or whatever, which will *still be there* for your
second test, which will impact performance.

Whereas if you just make N repos you don't need unique files, and you
won't be introducing that as a conflating variable.

But anyway, reading perf-lib.sh again (though I haven't tested), this whole
workaround seems truly unnecessary. I.e. in test_run_perf_ we do:
	
	test_run_perf_ () {
	        test_cleanup=:
	        test_export_="test_cleanup"
	        export test_cleanup test_export_
	        "$GTIME" -f "%E %U %S" -o test_time.$i "$TEST_SHELL_PATH" -c ' 
                	[... code we run and time ...]
		'
                [... later ...]
                test_eval_ "$test_cleanup"
	}

So can't you just avoid this whole glorious workaround for the low low
cost of approximately one shellscript string assignment? :)

I.e. if you do:

	setup_clean () {
		rm -rf repo
	}

	setup_first () {
		git init repo &&
		[make a bunch of files or whatever in repo]
	}

	setup_next () {
		test_when_finished "setup_clean" &&
		setup_first
	}

	test_expect_success 'setup initial stuff' '
		setup_first
	'

	test_perf 'my perf test' '
		test_when_finished "setup_next" &&
		[your perf test here]
	'

	test_expect_success 'cleanup' '
		# Not really needed, but just for completeness, we are
                # about to nuke the trash dir anyway...
		setup_clean
	'

I haven't tested (and need to run), but I'm pretty sure that does
exactly what you want without these workarounds, i.e. you'll get
"trampoline setup" without that setup being included in the perf
numbers.

Is it pretty? No, but it's a lot less complex than this unique-file
business & workarounds, will give you just the numbers you want, and
most importantly you can run it N times now for better samples.

I.e. "what you want", sans a *tiny* bit of noise where we just call
a function to do:

    test_cleanup=setup_next

Which we'll then eval *after* we measure your numbers to setup the next
test.
Neeraj Singh March 24, 2022, 6:23 p.m. UTC | #2
On Thu, Mar 24, 2022 at 9:53 AM Ævar Arnfjörð Bjarmason
<avarab@gmail.com> wrote:
>
>
> On Thu, Mar 24 2022, Neeraj Singh via GitGitGadget wrote:
>
> > From: Neeraj Singh <neerajsi@microsoft.com>
> >
> > Add test cases to exercise batch mode for:
> >  * 'git add'
> >  * 'git stash'
> >  * 'git update-index'
> >  * 'git unpack-objects'
> >
> > These tests ensure that the added data winds up in the object database.
> >
> > In this change we introduce a new test helper lib-unique-files.sh. The
> > goal of this library is to create a tree of files that have different
> > oids from any other files that may have been created in the current test
> > repo. This helps us avoid missing validation of an object being added
> > due to it already being in the repo.
> >
> > Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
> > ---
> >  t/lib-unique-files.sh  | 32 ++++++++++++++++++++++++++++++++
> >  t/t3700-add.sh         | 28 ++++++++++++++++++++++++++++
> >  t/t3903-stash.sh       | 20 ++++++++++++++++++++
> >  t/t5300-pack-object.sh | 41 +++++++++++++++++++++++++++--------------
> >  4 files changed, 107 insertions(+), 14 deletions(-)
> >  create mode 100644 t/lib-unique-files.sh
> >
> > diff --git a/t/lib-unique-files.sh b/t/lib-unique-files.sh
> > new file mode 100644
> > index 00000000000..74efca91dd7
> > --- /dev/null
> > +++ b/t/lib-unique-files.sh
> > @@ -0,0 +1,32 @@
> > +# Helper to create files with unique contents
> > +
> > +# Create multiple files with unique contents within this test run. Takes the
> > +# number of directories, the number of files in each directory, and the base
> > +# directory.
> > +#
> > +# test_create_unique_files 2 3 my_dir -- Creates 2 directories with 3 files
> > +#                                     each in my_dir, all with contents
> > +#                                     different from previous invocations
> > +#                                     of this command in this run.
> > +
> > +test_create_unique_files () {
> > +     test "$#" -ne 3 && BUG "3 param"
> > +
> > +     local dirs="$1" &&
> > +     local files="$2" &&
> > +     local basedir="$3" &&
> > +     local counter=0 &&
> > +     test_tick &&
> > +     local basedata=$basedir$test_tick &&
> > +     rm -rf "$basedir" &&
> > +     for i in $(test_seq $dirs)
> > +     do
> > +             local dir=$basedir/dir$i &&
> > +             mkdir -p "$dir" &&
> > +             for j in $(test_seq $files)
> > +             do
> > +                     counter=$((counter + 1)) &&
> > +                     echo "$basedata.$counter">"$dir/file$j.txt"
> > +             done
> > +     done
> > +}
>
> Having written my own perf tests for this series, I still don't get why
> this is needed, at all.
>
> tl;dr: the below: I think this whole workaround is because you missed
> that "test_when_finished" exists, and how it excludes perf timings.
>

I actually noticed test_when_finished, but I didn't think of your
"setup the next round on cleanup of last" idea.  I was debating at the
time adding a "test_perf_setup" helper to do the setup work during
each perf iteration.  How about I do that and just create a new repo
in each test_perf_setup step?
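
A minimal sketch of the shape such a helper could take (the names
`test_perf_setup` and `run_iteration` here are hypothetical, and the real
perf-lib.sh internals differ): register a setup snippet that runs before
each timed body, outside the measured region.

```shell
#!/bin/sh
# Hypothetical sketch: register a per-iteration setup snippet that is
# run before each timed body, but excluded from the measurement.

test_perf_setup_=true

test_perf_setup () {
	test_perf_setup_="$1"
}

# Stand-in for one perf iteration: run the setup un-timed, then the body
# (perf-lib.sh would wrap only the body in the "$GTIME" measurement).
run_iteration () {
	eval "$test_perf_setup_" &&
	eval "$1"
}

test_perf_setup 'rm -rf repo && mkdir repo'
run_iteration ': timed body runs against a fresh repo here'
test -d repo && echo setup-ran
```

Each iteration then starts from a freshly reset play area without the
reset cost polluting the timings.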

> I.e. I get that if we ran this N times we'd want to wipe our repo
> between tests, as for e.g. "git add" you want it to actually add the
> objects.
>
> It's what I do with the "hyperfine" command in
> https://lore.kernel.org/git/RFC-patch-v2-4.7-61f4f3d7ef4-20220323T140753Z-avarab@gmail.com/
> with the "-p" option.
>
> I.e. hyperfine has a way to say "this is setup, but don't measure the
> time", which is 1/2 of what you're working around here and in 10/11.
>
> But as 10/11 shows you're limited to one run with t/perf because you
> want to not include those "setup" numbers, and "test_perf" has no easy
> way to avoid that (but more on that later).
>
> Which b.t.w. I'm really skeptical of as an approach here in any case
> (even if we couldn't exclude it from the numbers).
>
> I.e. yes what "hyperfine" does would be preferrable, but in exchange for
> avoiding that you're comparing samples of 1 runs.
>
> Surely we're better off with N run (even if noisy). Given enough of them
> the difference will shake out, and our estimated +/- will narrow..
>
> But aside from that, why isn't this just:
>
>         for cfg in true false blah
>         done
>                 test_expect_success "setup for $cfg" '
>                         git init repo-$cfg &&
>                         for f in $(test_seq 1 100)
>                         do
>                                 >repo-$cfg/$f
>                         done
>                 '
>
>                 test_perf "perf test for $cfg" '
>                         git -C repo-$cfg
>                 '
>         done
>
> Which surely is going to be more accurate in the context of our limited
> t/perf environment because creating unique files is not sufficient at
> all to ensure that your tests don't interfere with each other.
>
> That's because in the first iteration we'll create N objects in
> .git/objects/aa/* or whatever, which will *still be there* for your
> second test, which will impact performance.
>
> Whereas if you just make N repos you don't need unique files, and you
> won't be introducing that as a conflating variable.
>
> But anyway, reading perf-lib.sh again I haven't tested, but this whole
> workaround seems truly unnecessary. I.e. in test_run_perf_ we do:
>
>         test_run_perf_ () {
>                 test_cleanup=:
>                 test_export_="test_cleanup"
>                 export test_cleanup test_export_
>                 "$GTIME" -f "%E %U %S" -o test_time.$i "$TEST_SHELL_PATH" -c '
>                         [... code we run and time ...]
>                 '
>                 [... later ...]
>                 test_eval_ "$test_cleanup"
>         }
>
> So can't you just avoid this whole glorious workaround for the low low
> cost of approximately one shellscript string assignment? :)
>
> I.e. if you do:
>
>         setup_clean () {
>                 rm -rf repo
>         }
>
>         setup_first () {
>                 git init repo &&
>                 [make a bunch of files or whatever in repo]
>         }
>
>         setup_next () {
>                 test_when_finished "setup_clean" &&
>                 setup_first
>         }
>
>         test_expect_success 'setup initial stuff' '
>                 setup_first
>         '
>
>         test_perf 'my perf test' '
>                 test_when_finished "setup_next" &&
>                 [your perf test here]
>         '
>
>         test_expect_success 'cleanup' '
>                 # Not really needed, but just for completeness, we are
>                 # about to nuke the trash dir anyway...
>                 setup_clean
>         '
>
> I haven't tested (and need to run), but i'm pretty sure that does
> exactly what you want without these workarounds, i.e. you'll get
> "trampoline setup" without that setup being included in the perf
> numbers.
>
> Is it pretty? No, but it's a lot less complex than this unique file
> business & workarounds, and will give you just the numbers you want, and
> most importantly you car run it N times now for better samples.
>
> I.e. "what you want" sans a *tiny* bit of noise that we use to just call
> a function to do:
>
>     test_cleanup=setup_next
>
> Which we'll then eval *after* we measure your numbers to setup the next
> test.

How about I add a new test_perf_setup mechanism to make your idea work
in a straightforward way?

I still want the test_create_unique_files thing as a way to make
multiple files easily.  And for the non-perf tests it makes sense to
have differing contents within a test run.
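
The dedup concern behind that is easy to demonstrate: identical contents
hash to the same blob, so a test that re-adds them would not exercise
writing new objects at all (a quick sketch, assuming git is on PATH; the
`dedup-demo` repo name is illustrative).

```shell
#!/bin/sh
# Two files with identical contents resolve to a single blob in the
# object database, so re-adding them writes nothing new.
rm -rf dedup-demo &&
git init -q dedup-demo &&
echo same >dedup-demo/a &&
echo same >dedup-demo/b &&
git -C dedup-demo add . &&
# both index entries share one object id
git -C dedup-demo ls-files -s
```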

Thanks,
Neeraj
Ævar Arnfjörð Bjarmason March 26, 2022, 3:35 p.m. UTC | #3
On Thu, Mar 24 2022, Neeraj Singh wrote:

> On Thu, Mar 24, 2022 at 9:53 AM Ævar Arnfjörð Bjarmason
> <avarab@gmail.com> wrote:
>>
>>
>> On Thu, Mar 24 2022, Neeraj Singh via GitGitGadget wrote:
>>
>> > From: Neeraj Singh <neerajsi@microsoft.com>
>> >
>> > Add test cases to exercise batch mode for:
>> >  * 'git add'
>> >  * 'git stash'
>> >  * 'git update-index'
>> >  * 'git unpack-objects'
>> >
>> > These tests ensure that the added data winds up in the object database.
>> >
>> > In this change we introduce a new test helper lib-unique-files.sh. The
>> > goal of this library is to create a tree of files that have different
>> > oids from any other files that may have been created in the current test
>> > repo. This helps us avoid missing validation of an object being added
>> > due to it already being in the repo.
>> >
>> > Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
>> > ---
>> >  t/lib-unique-files.sh  | 32 ++++++++++++++++++++++++++++++++
>> >  t/t3700-add.sh         | 28 ++++++++++++++++++++++++++++
>> >  t/t3903-stash.sh       | 20 ++++++++++++++++++++
>> >  t/t5300-pack-object.sh | 41 +++++++++++++++++++++++++++--------------
>> >  4 files changed, 107 insertions(+), 14 deletions(-)
>> >  create mode 100644 t/lib-unique-files.sh
>> >
>> > diff --git a/t/lib-unique-files.sh b/t/lib-unique-files.sh
>> > new file mode 100644
>> > index 00000000000..74efca91dd7
>> > --- /dev/null
>> > +++ b/t/lib-unique-files.sh
>> > @@ -0,0 +1,32 @@
>> > +# Helper to create files with unique contents
>> > +
>> > +# Create multiple files with unique contents within this test run. Takes the
>> > +# number of directories, the number of files in each directory, and the base
>> > +# directory.
>> > +#
>> > +# test_create_unique_files 2 3 my_dir -- Creates 2 directories with 3 files
>> > +#                                     each in my_dir, all with contents
>> > +#                                     different from previous invocations
>> > +#                                     of this command in this run.
>> > +
>> > +test_create_unique_files () {
>> > +     test "$#" -ne 3 && BUG "3 param"
>> > +
>> > +     local dirs="$1" &&
>> > +     local files="$2" &&
>> > +     local basedir="$3" &&
>> > +     local counter=0 &&
>> > +     test_tick &&
>> > +     local basedata=$basedir$test_tick &&
>> > +     rm -rf "$basedir" &&
>> > +     for i in $(test_seq $dirs)
>> > +     do
>> > +             local dir=$basedir/dir$i &&
>> > +             mkdir -p "$dir" &&
>> > +             for j in $(test_seq $files)
>> > +             do
>> > +                     counter=$((counter + 1)) &&
>> > +                     echo "$basedata.$counter">"$dir/file$j.txt"
>> > +             done
>> > +     done
>> > +}
>>
>> Having written my own perf tests for this series, I still don't get why
>> this is needed, at all.
>>
>> tl;dr: the below: I think this whole workaround is because you missed
>> that "test_when_finished" exists, and how it excludes perf timings.
>>
>
> I actually noticed test_when_finished, but I didn't think of your
> "setup the next round on cleanup of last" idea.  I was debating at the
> time adding a "test_perf_setup" helper to do the setup work during
> each perf iteration.  How about I do that and just create a new repo
> in each test_perf_setup step?
>
>> I.e. I get that if we ran this N times we'd want to wipe our repo
>> between tests, as for e.g. "git add" you want it to actually add the
>> objects.
>>
>> It's what I do with the "hyperfine" command in
>> https://lore.kernel.org/git/RFC-patch-v2-4.7-61f4f3d7ef4-20220323T140753Z-avarab@gmail.com/
>> with the "-p" option.
>>
>> I.e. hyperfine has a way to say "this is setup, but don't measure the
>> time", which is 1/2 of what you're working around here and in 10/11.
>>
>> But as 10/11 shows you're limited to one run with t/perf because you
>> want to not include those "setup" numbers, and "test_perf" has no easy
>> way to avoid that (but more on that later).
>>
>> Which b.t.w. I'm really skeptical of as an approach here in any case
>> (even if we couldn't exclude it from the numbers).
>>
>> I.e. yes what "hyperfine" does would be preferrable, but in exchange for
>> avoiding that you're comparing samples of 1 runs.
>>
>> Surely we're better off with N run (even if noisy). Given enough of them
>> the difference will shake out, and our estimated +/- will narrow..
>>
>> But aside from that, why isn't this just:
>>
>>         for cfg in true false blah
>>         done
>>                 test_expect_success "setup for $cfg" '
>>                         git init repo-$cfg &&
>>                         for f in $(test_seq 1 100)
>>                         do
>>                                 >repo-$cfg/$f
>>                         done
>>                 '
>>
>>                 test_perf "perf test for $cfg" '
>>                         git -C repo-$cfg
>>                 '
>>         done
>>
>> Which surely is going to be more accurate in the context of our limited
>> t/perf environment because creating unique files is not sufficient at
>> all to ensure that your tests don't interfere with each other.
>>
>> That's because in the first iteration we'll create N objects in
>> .git/objects/aa/* or whatever, which will *still be there* for your
>> second test, which will impact performance.
>>
>> Whereas if you just make N repos you don't need unique files, and you
>> won't be introducing that as a conflating variable.
>>
>> But anyway, reading perf-lib.sh again I haven't tested, but this whole
>> workaround seems truly unnecessary. I.e. in test_run_perf_ we do:
>>
>>         test_run_perf_ () {
>>                 test_cleanup=:
>>                 test_export_="test_cleanup"
>>                 export test_cleanup test_export_
>>                 "$GTIME" -f "%E %U %S" -o test_time.$i "$TEST_SHELL_PATH" -c '
>>                         [... code we run and time ...]
>>                 '
>>                 [... later ...]
>>                 test_eval_ "$test_cleanup"
>>         }
>>
>> So can't you just avoid this whole glorious workaround for the low low
>> cost of approximately one shellscript string assignment? :)
>>
>> I.e. if you do:
>>
>>         setup_clean () {
>>                 rm -rf repo
>>         }
>>
>>         setup_first () {
>>                 git init repo &&
>>                 [make a bunch of files or whatever in repo]
>>         }
>>
>>         setup_next () {
>>                 test_when_finished "setup_clean" &&
>>                 setup_first
>>         }
>>
>>         test_expect_success 'setup initial stuff' '
>>                 setup_first
>>         '
>>
>>         test_perf 'my perf test' '
>>                 test_when_finished "setup_next" &&
>>                 [your perf test here]
>>         '
>>
>>         test_expect_success 'cleanup' '
>>                 # Not really needed, but just for completeness, we are
>>                 # about to nuke the trash dir anyway...
>>                 setup_clean
>>         '
>>
>> I haven't tested (and need to run), but i'm pretty sure that does
>> exactly what you want without these workarounds, i.e. you'll get
>> "trampoline setup" without that setup being included in the perf
>> numbers.
>>
>> Is it pretty? No, but it's a lot less complex than this unique file
>> business & workarounds, and will give you just the numbers you want, and
>> most importantly you car run it N times now for better samples.
>>
>> I.e. "what you want" sans a *tiny* bit of noise that we use to just call
>> a function to do:
>>
>>     test_cleanup=setup_next
>>
>> Which we'll then eval *after* we measure your numbers to setup the next
>> test.
>
> How about I add a new test_perf_setup mechanism to make your idea work
> in a straightforward way?

Sure, that sounds great.

> I still want the test_create_unique_files thing as a way to make
> multiple files easily.  And for the non-perf tests it makes sense to
> have differing contents within a test run.

I think running your perf test on some generated data might still make
sense, but given the above the *method* really doesn't.

I.e. pretty much the whole structure of t/perf is to write tests that
can be run on an arbitrary user-provided repo, some of them do make some
content assumptions (or need no repo), but we've tried to have tests
there handle arbitrary repos.

You ended up with that "generated random files" approach to get around the
X-Y problem of not being able to reset the area without making that part of
the metrics, but as demo'd above we can use test_when_finished for that.

And once that's resolved it would actually be much more handy to be able
to run this on an arbitrary repo; as you can see in my "git hyperfine"
one-liner I grabbed the "t" directory, but we could just make our test
data all files in the repo (or specify a glob via an env var).

I think it still sounds interesting to have a way to make arbitrary test
data, but surely that's then better as e.g.:

	cd t/perf
 	./make-random-repo /tmp/random-repo &&
	GIT_PERF_REPO=/tmp/random-repo ./run p<your test>

I.e. once we've resolved the metrics/play-area issue, needing to run this
on some very specific data is an artificial limitation vs. just being able
to point it at a given repo.
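
The `make-random-repo` script mentioned above is hypothetical; a sketch of
what it could look like (the target directory and file count are
illustrative, and a real script would take the directory as "$1"):

```shell
#!/bin/sh
# Hypothetical sketch of t/perf/make-random-repo: populate a repository
# with files whose contents are unique to this run, so repeated runs
# never deduplicate against objects from an earlier run.
dir=random-repo             # a real script would take this as "$1"
rm -rf "$dir" &&
git init -q "$dir" &&
seed="$$.$(date +%s)" &&
i=0 &&
while [ "$i" -lt 100 ]
do
	i=$((i + 1))
	echo "$seed.$i" >"$dir/file$i"
done &&
git -C "$dir" add . &&
git -C "$dir" -c user.name=t -c user.email=t@example.com \
	commit -q -m 'random test data'
```

A perf run could then point at the result, e.g.
`GIT_PERF_REPO=$PWD/random-repo ./run p<your test>`.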

Patch

diff --git a/t/lib-unique-files.sh b/t/lib-unique-files.sh
new file mode 100644
index 00000000000..74efca91dd7
--- /dev/null
+++ b/t/lib-unique-files.sh
@@ -0,0 +1,32 @@ 
+# Helper to create files with unique contents
+
+# Create multiple files with unique contents within this test run. Takes the
+# number of directories, the number of files in each directory, and the base
+# directory.
+#
+# test_create_unique_files 2 3 my_dir -- Creates 2 directories with 3 files
+#					 each in my_dir, all with contents
+#					 different from previous invocations
+#					 of this command in this run.
+
+test_create_unique_files () {
+	test "$#" -ne 3 && BUG "3 param"
+
+	local dirs="$1" &&
+	local files="$2" &&
+	local basedir="$3" &&
+	local counter=0 &&
+	test_tick &&
+	local basedata=$basedir$test_tick &&
+	rm -rf "$basedir" &&
+	for i in $(test_seq $dirs)
+	do
+		local dir=$basedir/dir$i &&
+		mkdir -p "$dir" &&
+		for j in $(test_seq $files)
+		do
+			counter=$((counter + 1)) &&
+			echo "$basedata.$counter">"$dir/file$j.txt"
+		done
+	done
+}
diff --git a/t/t3700-add.sh b/t/t3700-add.sh
index b1f90ba3250..8979c8a5f03 100755
--- a/t/t3700-add.sh
+++ b/t/t3700-add.sh
@@ -8,6 +8,8 @@  test_description='Test of git add, including the -- option.'
 TEST_PASSES_SANITIZE_LEAK=true
 . ./test-lib.sh
 
+. $TEST_DIRECTORY/lib-unique-files.sh
+
 # Test the file mode "$1" of the file "$2" in the index.
 test_mode_in_index () {
 	case "$(git ls-files -s "$2")" in
@@ -34,6 +36,32 @@  test_expect_success \
     'Test that "git add -- -q" works' \
     'touch -- -q && git add -- -q'
 
+BATCH_CONFIGURATION='-c core.fsync=loose-object -c core.fsyncmethod=batch'
+
+test_expect_success 'git add: core.fsyncmethod=batch' "
+	test_create_unique_files 2 4 files_base_dir1 &&
+	GIT_TEST_FSYNC=1 git $BATCH_CONFIGURATION add -- ./files_base_dir1/ &&
+	git ls-files --stage files_base_dir1/ |
+	test_parse_ls_files_stage_oids >added_files_oids &&
+
+	# We created 2 subdirs with 4 files each (8 files total) above
+	test_line_count = 8 added_files_oids &&
+	git cat-file --batch-check='%(objectname)' <added_files_oids >added_files_actual &&
+	test_cmp added_files_oids added_files_actual
+"
+
+test_expect_success 'git update-index: core.fsyncmethod=batch' "
+	test_create_unique_files 2 4 files_base_dir2 &&
+	find files_base_dir2 ! -type d -print | xargs git $BATCH_CONFIGURATION update-index --add -- &&
+	git ls-files --stage files_base_dir2 |
+	test_parse_ls_files_stage_oids >added_files2_oids &&
+
+	# We created 2 subdirs with 4 files each (8 files total) above
+	test_line_count = 8 added_files2_oids &&
+	git cat-file --batch-check='%(objectname)' <added_files2_oids >added_files2_actual &&
+	test_cmp added_files2_oids added_files2_actual
+"
+
 test_expect_success \
 	'git add: Test that executable bit is not used if core.filemode=0' \
 	'git config core.filemode 0 &&
diff --git a/t/t3903-stash.sh b/t/t3903-stash.sh
index 4abbc8fccae..20e94881964 100755
--- a/t/t3903-stash.sh
+++ b/t/t3903-stash.sh
@@ -9,6 +9,7 @@  GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main
 export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME
 
 . ./test-lib.sh
+. $TEST_DIRECTORY/lib-unique-files.sh
 
 test_expect_success 'usage on cmd and subcommand invalid option' '
 	test_expect_code 129 git stash --invalid-option 2>usage &&
@@ -1410,6 +1411,25 @@  test_expect_success 'stash handles skip-worktree entries nicely' '
 	git rev-parse --verify refs/stash:A.t
 '
 
+
+BATCH_CONFIGURATION='-c core.fsync=loose-object -c core.fsyncmethod=batch'
+
+test_expect_success 'stash with core.fsyncmethod=batch' "
+	test_create_unique_files 2 4 files_base_dir &&
+	GIT_TEST_FSYNC=1 git $BATCH_CONFIGURATION stash push -u -- ./files_base_dir/ &&
+
+	# The files were untracked, so use the third parent,
+	# which contains the untracked files
+	git ls-tree -r stash^3 -- ./files_base_dir/ |
+	test_parse_ls_tree_oids >stashed_files_oids &&
+
+	# We created 2 dirs with 4 files each (8 files total) above
+	test_line_count = 8 stashed_files_oids &&
+	git cat-file --batch-check='%(objectname)' <stashed_files_oids >stashed_files_actual &&
+	test_cmp stashed_files_oids stashed_files_actual
+"
+
+
 test_expect_success 'git stash succeeds despite directory/file change' '
 	test_create_repo directory_file_switch_v1 &&
 	(
diff --git a/t/t5300-pack-object.sh b/t/t5300-pack-object.sh
index a11d61206ad..f8a0f309e2d 100755
--- a/t/t5300-pack-object.sh
+++ b/t/t5300-pack-object.sh
@@ -161,22 +161,27 @@  test_expect_success 'pack-objects with bogus arguments' '
 '
 
 check_unpack () {
+	local packname="$1" &&
+	local object_list="$2" &&
+	local git_config="$3" &&
 	test_when_finished "rm -rf git2" &&
-	git init --bare git2 &&
-	git -C git2 unpack-objects -n <"$1".pack &&
-	git -C git2 unpack-objects <"$1".pack &&
-	(cd .git && find objects -type f -print) |
-	while read path
-	do
-		cmp git2/$path .git/$path || {
-			echo $path differs.
-			return 1
-		}
-	done
+	git $git_config init --bare git2 &&
+	(
+		git $git_config -C git2 unpack-objects -n <"$packname".pack &&
+		git $git_config -C git2 unpack-objects <"$packname".pack &&
+		git $git_config -C git2 cat-file --batch-check="%(objectname)"
+	) <"$object_list" >current &&
+	cmp "$object_list" current
 }
 
 test_expect_success 'unpack without delta' '
-	check_unpack test-1-${packname_1}
+	check_unpack test-1-${packname_1} obj-list
+'
+
+BATCH_CONFIGURATION='-c core.fsync=loose-object -c core.fsyncmethod=batch'
+
+test_expect_success 'unpack without delta (core.fsyncmethod=batch)' '
+	check_unpack test-1-${packname_1} obj-list "$BATCH_CONFIGURATION"
 '
 
 test_expect_success 'pack with REF_DELTA' '
@@ -185,7 +190,11 @@  test_expect_success 'pack with REF_DELTA' '
 '
 
 test_expect_success 'unpack with REF_DELTA' '
-	check_unpack test-2-${packname_2}
+	check_unpack test-2-${packname_2} obj-list
+'
+
+test_expect_success 'unpack with REF_DELTA (core.fsyncmethod=batch)' '
+       check_unpack test-2-${packname_2} obj-list "$BATCH_CONFIGURATION"
 '
 
 test_expect_success 'pack with OFS_DELTA' '
@@ -195,7 +204,11 @@  test_expect_success 'pack with OFS_DELTA' '
 '
 
 test_expect_success 'unpack with OFS_DELTA' '
-	check_unpack test-3-${packname_3}
+	check_unpack test-3-${packname_3} obj-list
+'
+
+test_expect_success 'unpack with OFS_DELTA (core.fsyncmethod=batch)' '
+       check_unpack test-3-${packname_3} obj-list "$BATCH_CONFIGURATION"
 '
 
 test_expect_success 'compare delta flavors' '