| Message ID | 87r4opw0og.fsf@spindle.srvr.nix (mailing list archive) |
| --- | --- |
| State | New, archived |
On Tue, Oct 23, 2012 at 06:36:15PM +0100, Nix wrote:
> On 23 Oct 2012, nix@esperi.org.uk uttered the following:
>
> > On 23 Oct 2012, Trond Myklebust spake thusly:
> >> On Tue, 2012-10-23 at 12:46 -0400, J. Bruce Fields wrote:
> >>> Looks like there's some confusion about whether nsm_client_get() returns
> >>> NULL or an error?
> >>
> >> nsm_client_get() looks extremely racy in the case where ln->nsm_users ==
> >> 0. Since we never recheck the value of ln->nsm_users after taking
> >> nsm_create_mutex, what is stopping 2 different threads from both setting
> >> ln->nsm_clnt and re-initialising ln->nsm_users?
> >
> > Yep. At the worst possible time:
> >
> >         spin_lock(&ln->nsm_clnt_lock);
> >         if (ln->nsm_users) {
> >                 if (--ln->nsm_users)
> >                         ln->nsm_clnt = NULL;
> > (1)             shutdown = !ln->nsm_users;
> >         }
> >         spin_unlock(&ln->nsm_clnt_lock);
> >
> > If a thread reinitializes nsm_users at point (1), after the assignment,
> > we could well end up with ln->nsm_clnt NULL and shutdown false. A bit
> > later, nsm_mon_unmon gets called with a NULL clnt, and boom.
>
> Possible fix if so, utterly untested so far (will test when I can face
> yet another reboot and fs-corruption-recovery-hell cycle, in a few
> hours), may ruin performance, violate locking hierarchies, and consume
> kittens:

Right, mutexes can't be taken while holding spinlocks.

Keep the kittens well away from the computer.

--b.

>
> diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
> index e4fb3ba..da91cdf 100644
> --- a/fs/lockd/mon.c
> +++ b/fs/lockd/mon.c
> @@ -98,7 +98,6 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
>  		spin_unlock(&ln->nsm_clnt_lock);
>  		goto out;
>  	}
> -	spin_unlock(&ln->nsm_clnt_lock);
>
>  	mutex_lock(&nsm_create_mutex);
>  	clnt = nsm_create(net);
> @@ -108,6 +107,7 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
>  		ln->nsm_users = 1;
>  	}
>  	mutex_unlock(&nsm_create_mutex);
> +	spin_unlock(&ln->nsm_clnt_lock);
>  out:
>  	return clnt;
>  }
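To make the race Trond raises above concrete, here is a minimal, hedged userspace simulation in plain C with pthreads. It is not kernel code: the names `fake_client`, `client_get()`, `fake_create()` and the `users`/`clnt` globals are invented stand-ins for `struct rpc_clnt`, `nsm_client_get()`, `nsm_create()`, `ln->nsm_users` and `ln->nsm_clnt`; only the locking shape (check under a spinlock, drop it, create under a separate mutex, never recheck) is copied from the code under discussion.

```c
/* Build with:  cc -Wall -pthread nsm_race_sim.c -o nsm_race_sim
 * (file name arbitrary; userspace analogue, not the real lockd code) */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct fake_client { int id; };          /* stand-in for struct rpc_clnt    */

static pthread_spinlock_t state_lock;    /* stand-in for ln->nsm_clnt_lock  */
static pthread_mutex_t create_mutex = PTHREAD_MUTEX_INITIALIZER; /* nsm_create_mutex */
static struct fake_client *clnt;         /* stand-in for ln->nsm_clnt       */
static int users;                        /* stand-in for ln->nsm_users      */
static int created;                      /* how many clients were ever made */

static struct fake_client *fake_create(void)
{
	struct fake_client *c = malloc(sizeof(*c));

	c->id = ++created;
	usleep(100000);          /* creation "sleeps", widening the race window */
	return c;
}

/* Mirrors the shape of the original nsm_client_get(): check under the
 * spinlock, drop it, create under a separate mutex, never recheck. */
static struct fake_client *client_get(void)
{
	struct fake_client *ret;

	pthread_spin_lock(&state_lock);
	if (users) {                      /* fast path: client already exists */
		users++;
		ret = clnt;
		pthread_spin_unlock(&state_lock);
		return ret;
	}
	pthread_spin_unlock(&state_lock);  /* <-- the window opens here */

	pthread_mutex_lock(&create_mutex);
	ret = fake_create();              /* no recheck: both threads create */
	clnt = ret;                       /* second thread overwrites the first client */
	users = 1;                        /* ...and resets the count, losing a reference */
	pthread_mutex_unlock(&create_mutex);
	return ret;
}

static void *worker(void *arg)
{
	(void)arg;
	client_get();
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_spin_init(&state_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* With both threads racing past the initial check, two clients are
	 * created but users ends up at 1: one client and one reference leak. */
	printf("clients created: %d, users: %d\n", created, users);
	return 0;
}
```

Run together, the two workers typically both observe the counter at zero before either creates, so the program prints `clients created: 2, users: 1`: the double initialisation Trond describes, with the first client simply lost.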
> -----Original Message-----
> From: Nix [mailto:nix@esperi.org.uk]
> Sent: Tuesday, October 23, 2012 1:36 PM
> To: Myklebust, Trond
> Cc: J. Bruce Fields; Ted Ts'o; linux-kernel@vger.kernel.org; Schumaker,
> Bryan; Peng Tao; gregkh@linuxfoundation.org; linux-nfs@vger.kernel.org;
> Stanislav Kinsbursky
> Subject: Re: Heads-up: 3.6.2 / 3.6.3 NFS server oops: 3.6.2+ regression? (also
> an unrelated ext4 data loss bug)
>
> On 23 Oct 2012, nix@esperi.org.uk uttered the following:
>
> > On 23 Oct 2012, Trond Myklebust spake thusly:
> >> On Tue, 2012-10-23 at 12:46 -0400, J. Bruce Fields wrote:
> >>> Looks like there's some confusion about whether nsm_client_get()
> >>> returns NULL or an error?
> >>
> >> nsm_client_get() looks extremely racy in the case where ln->nsm_users
> >> == 0. Since we never recheck the value of ln->nsm_users after taking
> >> nsm_create_mutex, what is stopping 2 different threads from both
> >> setting
> >> ln->nsm_clnt and re-initialising ln->nsm_users?
> >
> > Yep. At the worst possible time:
> >
> >         spin_lock(&ln->nsm_clnt_lock);
> >         if (ln->nsm_users) {
> >                 if (--ln->nsm_users)
> >                         ln->nsm_clnt = NULL;
> > (1)             shutdown = !ln->nsm_users;
> >         }
> >         spin_unlock(&ln->nsm_clnt_lock);
> >
> > If a thread reinitializes nsm_users at point (1), after the
> > assignment, we could well end up with ln->nsm_clnt NULL and shutdown
> > false. A bit later, nsm_mon_unmon gets called with a NULL clnt, and boom.
>
> Possible fix if so, utterly untested so far (will test when I can face yet another
> reboot and fs-corruption-recovery-hell cycle, in a few hours), may ruin
> performance, violate locking hierarchies, and consume
> kittens:
>
> diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c index e4fb3ba..da91cdf 100644
> --- a/fs/lockd/mon.c
> +++ b/fs/lockd/mon.c
> @@ -98,7 +98,6 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
>  		spin_unlock(&ln->nsm_clnt_lock);
>  		goto out;
>  	}
> -	spin_unlock(&ln->nsm_clnt_lock);
>
>  	mutex_lock(&nsm_create_mutex);
>  	clnt = nsm_create(net);
> @@ -108,6 +107,7 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
>  		ln->nsm_users = 1;
>  	}
>  	mutex_unlock(&nsm_create_mutex);
> +	spin_unlock(&ln->nsm_clnt_lock);

You can't hold a spinlock while sleeping. Both mutex_lock() and nsm_create() can definitely sleep.

The correct way to do this is to grab the spinlock and recheck the value of ln->nsm_users inside the 'if (!IS_ERR())' condition. If it is still zero, bump it and set ln->nsm_clnt, otherwise bump it, get the existing ln->nsm_clnt and call rpc_shutdown_clnt() on the redundant nsm client after dropping the spinlock.

Cheers
  Trond
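Trond's actual kernel patch follows in the next message. As a userspace analogue of the pattern he describes here (create the candidate client with no spinlock held, then retake the spinlock, recheck the counter, and discard the redundant client only after dropping the lock), the following hedged sketch reworks the slow path of the earlier simulation. It reuses the declarations from that sketch (`struct fake_client`, `fake_create()`, `state_lock`, `create_mutex`, `clnt`, `users`), and all of those names remain invented stand-ins, not the real lockd code.

```c
/* Continuation of the sketch above; compile appended to it. */
struct fake_client *client_get_fixed(void)
{
	struct fake_client *ret, *new;

	pthread_spin_lock(&state_lock);
	if (users) {                       /* fast path unchanged */
		users++;
		ret = clnt;
		pthread_spin_unlock(&state_lock);
		return ret;
	}
	pthread_spin_unlock(&state_lock);

	pthread_mutex_lock(&create_mutex);
	new = fake_create();               /* may "sleep": no spinlock held */
	pthread_spin_lock(&state_lock);
	if (!users) {                      /* still uninitialised: install ours */
		clnt = new;
		new = NULL;
	}
	ret = clnt;                        /* otherwise reuse the winner's client */
	users++;
	pthread_spin_unlock(&state_lock);
	pthread_mutex_unlock(&create_mutex);

	if (new)                           /* lost the race: drop the spare copy */
		free(new);
	return ret;
}
```

Swapping this in for `client_get()` in the earlier simulation still allows two creations, but the loser's client is freed and the counter ends at 2, so no client or reference is lost. The real patch has the same shape, with the additional IS_ERR() check on the freshly created client and rpc_shutdown_client() called on the redundant one outside the spinlock.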
On Tue, 2012-10-23 at 17:44 +0000, Myklebust, Trond wrote:
> You can't hold a spinlock while sleeping. Both mutex_lock() and nsm_create() can definitely sleep.
>
> The correct way to do this is to grab the spinlock and recheck the value of ln->nsm_users inside the 'if (!IS_ERR())' condition. If it is still zero, bump it and set ln->nsm_clnt, otherwise bump it, get the existing ln->nsm_clnt and call rpc_shutdown_clnt() on the redundant nsm client after dropping the spinlock.
>
> Cheers
>   Trond

Can you please check if the following patch fixes the issue?

Cheers
  Trond

8<-----------------------------------------------------------
From 44a070455d246e09de0cefc8875833f21ca655e8 Mon Sep 17 00:00:00 2001
From: Trond Myklebust <Trond.Myklebust@netapp.com>
Date: Tue, 23 Oct 2012 13:51:58 -0400
Subject: [PATCH] LOCKD: fix races in nsm_client_get

Commit e9406db20fecbfcab646bad157b4cfdc7cadddfb (lockd: per-net
NSM client creation and destruction helpers introduced) contains
a nasty race on initialisation of the per-net NSM client because
it doesn't check whether or not the client is set after grabbing
the nsm_create_mutex.

Reported-by: Nix <nix@esperi.org.uk>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
---
 fs/lockd/mon.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
index e4fb3ba..9755603 100644
--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -88,7 +88,7 @@ static struct rpc_clnt *nsm_create(struct net *net)
 static struct rpc_clnt *nsm_client_get(struct net *net)
 {
 	static DEFINE_MUTEX(nsm_create_mutex);
-	struct rpc_clnt	*clnt;
+	struct rpc_clnt	*clnt, *new;
 	struct lockd_net *ln = net_generic(net, lockd_net_id);
 
 	spin_lock(&ln->nsm_clnt_lock);
@@ -101,11 +101,19 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
 	spin_unlock(&ln->nsm_clnt_lock);
 
 	mutex_lock(&nsm_create_mutex);
-	clnt = nsm_create(net);
-	if (!IS_ERR(clnt)) {
-		ln->nsm_clnt = clnt;
-		smp_wmb();
-		ln->nsm_users = 1;
+	new = nsm_create(net);
+	clnt = new;
+	if (!IS_ERR(new)) {
+		spin_lock(&ln->nsm_clnt_lock);
+		if (!ln->nsm_users) {
+			ln->nsm_clnt = new;
+			new = NULL;
+		}
+		clnt = ln->nsm_clnt;
+		ln->nsm_users++;
+		spin_unlock(&ln->nsm_clnt_lock);
+		if (new)
+			rpc_shutdown_client(new);
 	}
 	mutex_unlock(&nsm_create_mutex);
 out:
-- 
1.7.11.7


-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com
diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c
index e4fb3ba..da91cdf 100644
--- a/fs/lockd/mon.c
+++ b/fs/lockd/mon.c
@@ -98,7 +98,6 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
 		spin_unlock(&ln->nsm_clnt_lock);
 		goto out;
 	}
-	spin_unlock(&ln->nsm_clnt_lock);
 
 	mutex_lock(&nsm_create_mutex);
 	clnt = nsm_create(net);
@@ -108,6 +107,7 @@ static struct rpc_clnt *nsm_client_get(struct net *net)
 		ln->nsm_users = 1;
 	}
 	mutex_unlock(&nsm_create_mutex);
+	spin_unlock(&ln->nsm_clnt_lock);
 out:
 	return clnt;
 }