| Message ID | 20140721164044.2845f3fd@notabene.brown (mailing list archive) |
|---|---|
| State | New, archived |
Milosz Tanski <milosz@adfin.com> wrote:

> That's the same exact fix I started testing on Saturday. I found that
> there already is a wait_event_timeout (even without your recent changes).
> The thing I'm not quite sure about is what timeout it should use?

That's probably something to make an external tuning knob for.

David
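A knob along the lines David suggests might look roughly like the sketch below. It is a hypothetical illustration, not part of any posted patch: the parameter name, default, and permissions are assumptions.

```c
#include <linux/module.h>
#include <linux/jiffies.h>

/* Hypothetical "external tuning knob": the page-write wait timeout in
 * milliseconds, exposed as a writable module parameter. */
static unsigned int fscache_page_wait_ms = 1000;
module_param(fscache_page_wait_ms, uint, 0644);
MODULE_PARM_DESC(fscache_page_wait_ms,
                 "Milliseconds to wait for a busy page write before retrying");
```

The wait site would then pass `msecs_to_jiffies(fscache_page_wait_ms)` to wait_event_timeout() instead of a hard-coded constant.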
On Tue, 29 Jul 2014 17:12:34 +0100 David Howells <dhowells@redhat.com> wrote:

> Milosz Tanski <milosz@adfin.com> wrote:
>
> > That's the same exact fix I started testing on Saturday. I found that
> > there already is a wait_event_timeout (even without your recent changes).
> > The thing I'm not quite sure about is what timeout it should use?
>
> That's probably something to make an external tuning knob for.
>
> David

Ugg. External tuning knobs should be avoided wherever possible, and always
come with detailed instructions on how to tune them </rant>

In this case I think it very nearly doesn't matter *at all* what value is
used.

If you set it a bit too high, then on the very, very rare occasion that it
would currently deadlock, you get a longer-than-necessary wait. So just make
sure it is short enough that by the time the sysadmin notices and starts
looking for the problem, it will be gone.

And if you set it a bit too low, then it will loop around to find another
page to deal with before that one is finished being written out, and so maybe
do a little more work than is needed (though it'll be needed eventually).

So the perfect number is somewhere between the typical response time for
storage and the typical response time for the sysadmin. Anywhere between
100ms and 10sec would do. 1 second is the geo-mean.

(Sorry I didn't reply earlier - I missed your email somehow.)

NeilBrown
I would vote for the lower end of the spectrum by default (closer to 100ms),
since I imagine anybody deploying this in a production environment would
likely be using SSD drives for the caching. And in my tests on spinning disks
there was little to no benefit outside of reducing network traffic.

- Milosz

On Tue, Jul 29, 2014 at 5:17 PM, NeilBrown <neilb@suse.de> wrote:
> [...]
On Tue, 29 Jul 2014 21:48:34 -0400 Milosz Tanski <milosz@adfin.com> wrote:

> I would vote for the lower end of the spectrum by default (closer to
> 100ms), since I imagine anybody deploying this in a production
> environment would likely be using SSD drives for the caching. And in
> my tests on spinning disks there was little to no benefit outside of
> reducing network traffic.

Maybe I'm confused......

I thought the whole point of this patch was to avoid deadlocks.
Now you seem to be talking about a performance benefit.
What did I miss?

NeilBrown
I don't think that fixing a deadlock should impose a somewhat unexplainable
high latency on the end user (or system admin). With old drives such
latencies (a second plus) were not unexpected.

- Milosz

On Tue, Jul 29, 2014 at 10:19 PM, NeilBrown <neilb@suse.de> wrote:
> [...]
I was away for a few days, but I did think about this some more and how to
avoid a tunable while still having a sensible default.

FSCache already tracks statistics about how long writes and reads take (at
least if you enable that option). With those stats in hand we should be able
to generate a default timeout value that works well and avoid a tunable.

Myself, I'm thinking something like the 90th percentile time for a page
write... whatever the value may be, this should be a decent way of
auto-optimizing this timeout.

- M

On Wed, Jul 30, 2014 at 12:06 PM, Milosz Tanski <milosz@adfin.com> wrote:
> [...]
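As a thought experiment, a stats-derived default might look something like the sketch below: bucket observed page-write latencies into a coarse power-of-two histogram and pick the bucket that covers the 90th percentile. Everything here (names, bucket layout, the one-second fallback) is an assumption for illustration; it is not the existing FSCache statistics code.

```c
#include <linux/atomic.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/log2.h>

#define NBUCKETS 8	/* power-of-two latency buckets, in jiffies */

static atomic_t write_lat_hist[NBUCKETS];

/* Bucket b roughly holds writes that took [2^b, 2^(b+1)) jiffies. */
static void record_write_latency(unsigned long jiffies_taken)
{
	int b = min(ilog2(jiffies_taken + 1), NBUCKETS - 1);

	atomic_inc(&write_lat_hist[b]);
}

/* Pick the first bucket whose cumulative count reaches 90% of all
 * recorded writes and use its upper bound as the wait timeout. */
static unsigned long write_timeout_jiffies(void)
{
	unsigned long total = 0, seen = 0;
	int b;

	for (b = 0; b < NBUCKETS; b++)
		total += atomic_read(&write_lat_hist[b]);
	if (!total)
		return HZ;		/* no stats yet: fall back to 1s */

	for (b = 0; b < NBUCKETS; b++) {
		seen += atomic_read(&write_lat_hist[b]);
		if (seen * 10 >= total * 9)
			break;
	}
	return 1UL << (b + 1);
}
```

Whether that bookkeeping is worth it compared with a fixed default is exactly the trade-off discussed in the rest of the thread.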
On Tue, 5 Aug 2014 00:12:25 -0400 Milosz Tanski <milosz@adfin.com> wrote:

> I was away for a few days, but I did think about this some more and how
> to avoid a tunable while still having a sensible default.
>
> FSCache already tracks statistics about how long writes and reads take
> (at least if you enable that option). With those stats in hand we should
> be able to generate a default timeout value that works well and avoid a
> tunable.
>
> Myself, I'm thinking something like the 90th percentile time for a page
> write... whatever the value may be, this should be a decent way of
> auto-optimizing this timeout.

Sounds like it could be a good approach, though if stats aren't enabled we
need a sensible fallback.

What is the actual cost of having the timeout too small? I guess it might be
unnecessary writeouts, but I haven't done any analysis. If the cost is
expected to be quite small, a smaller timeout might be very appropriate.

One statistic that might be interesting is how long that wait typically takes
at present, and how often it deadlocks.

Mind you, we might be trying too hard. Maybe just go for 100ms.

When you suggested that, I wasn't really objecting to your choice of a
number. I was surprised because you seemed to justify it as a performance
concern, and I didn't think deadlocks would happen often enough for that to
be a valid concern.

When deadlocks do happen, I presume the system is temporarily under high
memory pressure, so lots of things are probably going slowly, and a delay of
a second might not be noticed. But there are way too many "probably"s and
"might"s. If you can present anything that looks like real data, it'll
certainly trump all my hypotheses.

Thanks,
NeilBrown
Milosz Tanski <milosz@adfin.com> wrote:

> FSCache already tracks statistics about how long writes and reads take
> (at least if you enable that option). With those stats in hand we should
> be able to generate a default timeout value that works well and avoid a
> tunable.

If stat generation is going to be on all the time, it should probably use
something other than atomic ops. NFS uses a set of ints on each CPU which
are added together when needed.

David
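A minimal sketch of the per-CPU counter pattern David is referring to (the approach NFS takes for its iostats): the hot path does a cheap per-CPU increment with no locked atomic operation, and the per-CPU values are only summed when the statistic is actually read. The symbol names below are illustrative, not existing FSCache or NFS APIs.

```c
#include <linux/percpu.h>
#include <linux/cpumask.h>

/* One counter per CPU; updates never contend across CPUs. */
static DEFINE_PER_CPU(unsigned long, fscache_writes_completed);

static inline void count_write_completed(void)
{
	this_cpu_inc(fscache_writes_completed);	/* no lock, no atomic RMW */
}

/* Reading is the slow path: sum the per-CPU values on demand. */
static unsigned long total_writes_completed(void)
{
	unsigned long sum = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		sum += per_cpu(fscache_writes_completed, cpu);
	return sum;
}
```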
```diff
diff --git a/fs/fscache/page.c b/fs/fscache/page.c
index ed70714503fa..58035024c5cf 100644
--- a/fs/fscache/page.c
+++ b/fs/fscache/page.c
@@ -43,6 +43,13 @@ void __fscache_wait_on_page_write(struct fscache_cookie *cookie, struct page *pa
 }
 EXPORT_SYMBOL(__fscache_wait_on_page_write);
 
+void __fscache_wait_on_page_write_timeout(struct fscache_cookie *cookie, struct page *page, unsigned long timeout)
+{
+	wait_queue_head_t *wq = bit_waitqueue(&cookie->flags, 0);
+
+	wait_event_timeout(*wq, !__fscache_check_page_write(cookie, page), timeout);
+}
+
 /*
  * decide whether a page can be released, possibly by cancelling a store to it
  * - we're allowed to sleep if __GFP_WAIT is flagged
@@ -115,7 +122,7 @@ page_busy:
 	}
 
 	fscache_stat(&fscache_n_store_vmscan_wait);
-	__fscache_wait_on_page_write(cookie, page);
+	__fscache_wait_on_page_write_timeout(cookie, page, HZ);
 	gfp &= ~__GFP_WAIT;
 	goto try_again;
 }
```
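For reference, the hard-coded HZ passed above is one second's worth of jiffies, i.e. the geometric mean Neil suggests, and the hunk ignores wait_event_timeout()'s return value, so a timed-out wait simply falls through to the existing retry path. A minimal illustration of plugging in a millisecond value instead (from a knob or a stats-derived default, both hypothetical in this thread) might be:

```c
/* Illustrative only: 'timeout_ms' stands in for whatever source is
 * eventually chosen (module parameter, derived default, ...). */
unsigned int timeout_ms = 1000;

__fscache_wait_on_page_write_timeout(cookie, page,
				     msecs_to_jiffies(timeout_ms));
```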