[1/3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence

Message ID 5a7f6bbf4cf2038634a572f42ad80e95a8d0ae9c.1600686204.git.bcodding@redhat.com (mailing list archive)
State New, archived
Series [1/3] NFSv4: Fix a livelock when CLOSE pre-emptively bumps state sequence

Commit Message

Benjamin Coddington Sept. 21, 2020, 11:04 a.m. UTC
Since commit 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
CLOSE/OPEN_DOWNGRADE") the following livelock may occur if a CLOSE races
with the update of the nfs_state:

Process 1	  Process 2	   Server
=========         =========	   ========
 OPEN file
		  OPEN file
		  		   Reply OPEN (1)
		  		   Reply OPEN (2)
 Update state (1)
 CLOSE file (1)
		  		   Reply OLD_STATEID (1)
 CLOSE file (2)
		  		   Reply CLOSE (-1)
		  Update state (2)
		  wait for state change
 OPEN file
		  wake
 CLOSE file
 OPEN file
		  wake
 CLOSE file
 ...
		  ...

As long as the first process continues updating state, the second process
will fail to exit the loop in nfs_set_open_stateid_locked().  This livelock
has been observed in generic/168.

Fix this by detecting the case in nfs_need_update_open_stateid() and
exiting the loop if:
 - the state is NFS_OPEN_STATE, and
 - the stateid sequence is > 1, and
 - the stateid doesn't match the current open stateid

Fixes: 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in CLOSE/OPEN_DOWNGRADE")
Cc: stable@vger.kernel.org # v5.4+
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
---
 fs/nfs/nfs4proc.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
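The branch this hunk changes can be sketched as a userspace model (simplified stand-ins for the kernel's `nfs4_stateid` and `nfs4_state`, host-endian seqids, and the tail of the real function elided), showing what the fix intends: a stateid with seqid > 1 whose "other" field doesn't match the held open stateid is refused rather than waited on.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-ins; the kernel uses nfs4_stateid and nfs4_state. */
struct stateid { uint8_t other[12]; uint32_t seqid; };
struct open_state {
	bool nfs_open_state;	/* models test_bit(NFS_OPEN_STATE, ...) */
	bool change_wait;	/* models NFS_STATE_CHANGE_WAIT */
	struct stateid open_stateid;
};

static bool stateid_match_other(const struct stateid *a, const struct stateid *b)
{
	return memcmp(a->other, b->other, sizeof(a->other)) == 0;
}

/* Model of nfs_need_update_open_stateid() with this patch applied. */
static bool need_update_open_stateid(struct open_state *state,
				     const struct stateid *stateid)
{
	if (!state->nfs_open_state ||
	    !stateid_match_other(stateid, &state->open_stateid)) {
		if (stateid->seqid == 1) {
			/* brand-new stateid: accept (the kernel logs it here) */
		} else if (!stateid_match_other(stateid, &state->open_stateid)) {
			/* the fix: don't wait on a stateid we don't hold */
			return false;
		} else {
			state->change_wait = true;	/* wait for older seqids */
		}
		return true;
	}
	/* Same stateid and already open: the kernel compares seqids here (elided). */
	return false;
}
```

With an open state holding one stateid, an update for a different stateid with seqid 2 now returns false immediately instead of setting the wait flag, which is what breaks the loop in nfs_set_open_stateid_locked().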

Comments

Schumaker, Anna Sept. 22, 2020, 2:03 p.m. UTC | #1
Hi Ben,

On Mon, Sep 21, 2020 at 7:05 AM Benjamin Coddington <bcodding@redhat.com> wrote:
>
> Since commit 0e0cb35b417f ("NFSv4: Handle NFS4ERR_OLD_STATEID in
> CLOSE/OPEN_DOWNGRADE") the following livelock may occur if a CLOSE races
> with the update of the nfs_state:
>
> Process 1         Process 2        Server
> =========         =========        ========
>  OPEN file
>                   OPEN file
>                                    Reply OPEN (1)
>                                    Reply OPEN (2)
>  Update state (1)
>  CLOSE file (1)
>                                    Reply OLD_STATEID (1)
>  CLOSE file (2)
>                                    Reply CLOSE (-1)
>                   Update state (2)
>                   wait for state change
>  OPEN file
>                   wake
>  CLOSE file
>  OPEN file
>                   wake
>  CLOSE file
>  ...
>                   ...
>
> As long as the first process continues updating state, the second process
> will fail to exit the loop in nfs_set_open_stateid_locked().  This livelock
> has been observed in generic/168.

Once I apply this patch I have trouble with generic/478 doing lock reclaim:

[  937.460505] run fstests generic/478 at 2020-09-22 09:59:14
[  937.607990] NFS: __nfs4_reclaim_open_state: Lock reclaim failed!

And the test just hangs until I kill it.

Just thought you should know!
Anna

Benjamin Coddington Sept. 22, 2020, 2:22 p.m. UTC | #2
On 22 Sep 2020, at 10:03, Anna Schumaker wrote:
> Hi Ben,
>
> Once I apply this patch I have trouble with generic/478 doing lock reclaim:
>
> [  937.460505] run fstests generic/478 at 2020-09-22 09:59:14
> [  937.607990] NFS: __nfs4_reclaim_open_state: Lock reclaim failed!
>
> And the test just hangs until I kill it.
>
> Just thought you should know!

Yes, thanks!  I'm not seeing that...  I've tested these based on v5.8.4; I'll
rebase and check again.  I see a wire capture of generic/478 is only 515K on
my system; would you be willing to share a capture of your test failing?

Ben
Schumaker, Anna Sept. 22, 2020, 2:31 p.m. UTC | #3
On Tue, Sep 22, 2020 at 10:22 AM Benjamin Coddington
<bcodding@redhat.com> wrote:
>
> On 22 Sep 2020, at 10:03, Anna Schumaker wrote:
> > Hi Ben,
> >
> > Once I apply this patch I have trouble with generic/478 doing lock reclaim:
> >
> > [  937.460505] run fstests generic/478 at 2020-09-22 09:59:14
> > [  937.607990] NFS: __nfs4_reclaim_open_state: Lock reclaim failed!
> >
> > And the test just hangs until I kill it.
> >
> > Just thought you should know!
>
> Yes, thanks!  I'm not seeing that..  I've tested these based on v5.8.4, I'll
> rebase and check again.  I see a wirecap of generic/478 is only 515K on my
> system, would you be willing to share a capture of your test failing?

I have it based on v5.9-rc6 (plus the patches I have queued up for
v5.10), so there definitely could be a difference there! I'm using a
stock kernel on my server, though :)

I can definitely get you a packet trace once I re-apply the patch and
rerun the test.

Anna

Benjamin Coddington Sept. 22, 2020, 3:46 p.m. UTC | #4
On 22 Sep 2020, at 10:43, Anna Schumaker wrote:

> [...]
>
> Here's the packet trace, I reran the test with just this patch applied
> on top of v5.9-rc6 so it's not interacting with something else in my
> tree. Looks like it's ending up in an NFS4ERR_OLD_STATEID loop.

Thanks very much!

Did you see this failure with all three patches applied, or just with
the first patch?

I see the client get two OPEN responses, but then it sends TEST_STATEID
with the first seqid.  Seems like seqid 2 is getting lost.  I wonder if
we're making a bad assumption that NFS_OPEN_STATE can only be toggled
under the so_lock.
Ben
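For reference, the seqid ordering at stake here is serial-number arithmetic; the kernel's nfs4_stateid_is_newer() amounts to the following (a userspace sketch, host-endian for brevity where the kernel compares be32 values):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Serial-number comparison of stateid seqids: interpreting the
 * difference as signed keeps the ordering correct across 32-bit
 * wraparound, so seqid 1 counts as newer than 0xffffffff.
 */
static bool seqid_is_newer(uint32_t s1, uint32_t s2)
{
	return (int32_t)(s1 - s2) > 0;
}
```

So a reply carrying seqid 2 should supersede seqid 1; if the client ends up testing with seqid 1, the newer update was dropped or never applied rather than mis-ordered.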
Schumaker, Anna Sept. 22, 2020, 3:53 p.m. UTC | #5
On Tue, Sep 22, 2020 at 11:49 AM Benjamin Coddington
<bcodding@redhat.com> wrote:
> [...]
>
> Did you see this failure with all three patches applied, or just with
> the first patch?

I saw it with the first patch applied, and with the first and third
applied.  I initially hit it as I was wrapping up for the day
yesterday, but I left out #2 since I saw your retraction.

Schumaker, Anna Sept. 22, 2020, 4:11 p.m. UTC | #6
On Tue, Sep 22, 2020 at 11:53 AM Anna Schumaker
<anna.schumaker@netapp.com> wrote:
> [...]
>
> I saw it with the first patch applied, and with the first and third
> applied. I initially hit it as I was wrapping up for the day
> yesterday, but I left out #2 since I saw your retraction

I reran with all three patches applied, and didn't have the issue. So
something in the refactor patch fixes it.

Anna

Benjamin Coddington Sept. 22, 2020, 6:47 p.m. UTC | #7
On 22 Sep 2020, at 12:11, Anna Schumaker wrote:

> [...]
>
> I reran with all three patches applied, and didn't have the issue. So
> something in the refactor patch fixes it.

That helped me see that the case we're not handling correctly is when two
OPENs race and the second one tries to update the state first and gets
dropped.  That case is fixed by the 2/3 refactor patch, since the refactor
is a bit more explicit.

That means I'll need to fix those two patches and send them again.  I'm very
glad you caught this!  Thanks very much for helping me find the problem.

Ben
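The race Ben describes can be reproduced against a few lines of userspace C modeling just the patched branch (simplified types; an illustration, not the kernel code): when the reply carrying seqid 2 wins the race to update a state that is not yet NFS_OPEN_STATE, the non-matching "other" check drops it instead of waiting.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct sid { uint8_t other[12]; uint32_t seqid; };

/* Does the patched nfs_need_update_open_stateid() accept this update? */
static bool patched_check_accepts(bool open_state_set, const struct sid *cur,
				  const struct sid *in)
{
	bool match = memcmp(in->other, cur->other, sizeof(in->other)) == 0;

	if (!open_state_set || !match) {
		if (in->seqid == 1)
			return true;	/* first seqid: accept */
		if (!match)
			return false;	/* the patch: drop instead of wait */
		return true;		/* would set NFS_STATE_CHANGE_WAIT and retry */
	}
	return true;	/* already open with same stateid: seqid ordering decides (elided) */
}
```

On a state that has never been opened (open_stateid still zeroed), a seqid-2 reply fails the match and is refused, while the later seqid-1 reply is accepted: the client settles on seqid 1 and the seqid-2 update is lost, matching the TEST_STATEID-with-the-first-seqid behavior in Anna's trace. The 2/3 refactor handles this ordering explicitly.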
Schumaker, Anna Sept. 22, 2020, 6:51 p.m. UTC | #8
On Tue, Sep 22, 2020 at 2:47 PM Benjamin Coddington <bcodding@redhat.com> wrote:
> [...]
>
> That helped me see the case we're not handling correctly is when two OPENs
> race and the second one tries to update the state first and gets dropped.
> That is fixed by the 2/3 refactor patch since the refactor was being a bit
> more explicit.
>
> That means I'll need to fix those two patches and send them again.  I'm very
> glad you caught this!  Thanks very much for helping me find the problem.

You're welcome! I'm looking forward to the next version :)

Anna

Patch

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 45e0585e0667..9ced7a62c05e 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1570,10 +1570,14 @@  static bool nfs_need_update_open_stateid(struct nfs4_state *state,
 {
 	if (test_bit(NFS_OPEN_STATE, &state->flags) == 0 ||
 	    !nfs4_stateid_match_other(stateid, &state->open_stateid)) {
-		if (stateid->seqid == cpu_to_be32(1))
+		if (stateid->seqid == cpu_to_be32(1)) {
 			nfs_state_log_update_open_stateid(state);
-		else
-			set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
+		} else {
+			if (!nfs4_stateid_match_other(stateid, &state->open_stateid))
+				return false;
+			else
+				set_bit(NFS_STATE_CHANGE_WAIT, &state->flags);
+		}
 		return true;
 	}