[v1,0/4] Perf tool LTO support

Message ID 20230724201247.748146-1-irogers@google.com

Message

Ian Rogers July 24, 2023, 8:12 p.m. UTC
Add a build flag, LTO=1, so that perf is built with the -flto
flag. Address some build errors this configuration throws up.
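
Roughly, the intended usage is something like the following (the exact
make invocation here is just a sketch, adjust to your setup):
```
# Rough usage sketch: build perf with the new LTO=1 knob, which adds
# -flto to the build. The clang/lld combination is the one noted below.
make -C tools/perf CC=clang CXX=clang++ LD=ld.lld LTO=1
```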

For me on my Debian derived OS, "CC=clang CXX=clang++ LD=ld.lld" works
fine. With GCC LTO this fails with:
```
lto-wrapper: warning: using serial compilation of 50 LTRANS jobs
lto-wrapper: note: see the ‘-flto’ option documentation for more information
/usr/bin/ld: /tmp/ccK8kXAu.ltrans10.ltrans.o:(.data.rel.ro+0x28): undefined reference to `memset_orig'
/usr/bin/ld: /tmp/ccK8kXAu.ltrans10.ltrans.o:(.data.rel.ro+0x40): undefined reference to `__memset'
/usr/bin/ld: /tmp/ccK8kXAu.ltrans10.ltrans.o:(.data.rel+0x28): undefined reference to `memcpy_orig'
/usr/bin/ld: /tmp/ccK8kXAu.ltrans10.ltrans.o:(.data.rel+0x40): undefined reference to `__memcpy'
/usr/bin/ld: /tmp/ccK8kXAu.ltrans44.ltrans.o: in function `test__arch_unwind_sample':
/home/irogers/kernel.org/tools/perf/arch/x86/tests/dwarf-unwind.c:72: undefined reference to `perf_regs_load'
collect2: error: ld returned 1 exit status
```

The issue is that we build multiple .o files in a directory and then
link them into a .o with "ld -r" (cmd_ld_multi). This early link step
appears to trigger GCC to drop the .S files' definitions of these
symbols and break the later link step (perf-in.o, for example, shows
perf_regs_load going from the text section to being undefined at the
link step, which doesn't happen with clang or without LTO; see the nm
sketch below). It is possible
to work around this by taking the final perf link command and adding
the .o files generated from .S back into it, namely:
arch/x86/tests/regs_load.o
bench/mem-memset-x86-64-asm.o
bench/mem-memcpy-x86-64-asm.o
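
The symptom is visible by inspecting the intermediate perf-in.o with
nm; a rough sketch (run against the built tree, output not shown):
```
# perf_regs_load should be reported as defined in the text section
# ("T"/"t"). With GCC LTO it instead shows up as undefined ("U"), which
# is what breaks the final link; clang and non-LTO builds keep it defined.
nm perf-in.o | grep -w perf_regs_load
```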

A quick performance check shows the performance improvements from LTO
are noticeable:

Non-LTO
```
$ perf bench internals synthesize
 # Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
  Average synthesis took: 202.216 usec (+- 0.160 usec)
  Average num. events: 51.000 (+- 0.000)
  Average time per event 3.965 usec
  Average data synthesis took: 230.875 usec (+- 0.285 usec)
  Average num. events: 271.000 (+- 0.000)
  Average time per event 0.852 usec
```

LTO
```
$ perf bench internals synthesize
 # Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
  Average synthesis took: 104.530 usec (+- 0.074 usec)
  Average num. events: 51.000 (+- 0.000)
  Average time per event 2.050 usec
  Average data synthesis took: 112.660 usec (+- 0.114 usec)
  Average num. events: 273.000 (+- 0.000)
  Average time per event 0.413 usec
```

Ian Rogers (4):
  perf stat: Avoid uninitialized use of perf_stat_config
  perf parse-events: Avoid use uninitialized warning
  perf test: Avoid weak symbol for arch_tests
  perf build: Add LTO build option

 tools/perf/Makefile.config      |  5 +++++
 tools/perf/tests/builtin-test.c | 11 ++++++++++-
 tools/perf/tests/stat.c         |  2 +-
 tools/perf/util/parse-events.c  |  2 +-
 tools/perf/util/stat.c          |  2 +-
 5 files changed, 18 insertions(+), 4 deletions(-)

Comments

Nick Desaulniers July 24, 2023, 9:15 p.m. UTC | #1
On Mon, Jul 24, 2023 at 1:12 PM Ian Rogers <irogers@google.com> wrote:
>
> Add a build flag, LTO=1, so that perf is built with the -flto
> flag. Address some build errors this configuration throws up.

Hi Ian,
Thanks for the performance numbers. Any sense of what the build time
numbers might look like for building perf with LTO?

Does `-flto=thin` in clang's case make a meaningful difference versus
`-flto`? I'd recommend that over "full LTO" `-flto` when the
performance difference in the result isn't too significant.  ThinLTO
should be faster to build, but I don't know that I've ever built perf,
so IDK what to expect.
Arnaldo Carvalho de Melo July 24, 2023, 9:29 p.m. UTC | #2
On Mon, Jul 24, 2023 at 01:12:43PM -0700, Ian Rogers wrote:
> Add a build flag, LTO=1, so that perf is built with the -flto
> flag. Address some build errors this configuration throws up.
> 
> For me on my Debian derived OS, "CC=clang CXX=clang++ LD=ld.lld" works
> fine. With GCC LTO this fails with:
> ```
> lto-wrapper: warning: using serial compilation of 50 LTRANS jobs
> lto-wrapper: note: see the ‘-flto’ option documentation for more information
> /usr/bin/ld: /tmp/ccK8kXAu.ltrans10.ltrans.o:(.data.rel.ro+0x28): undefined reference to `memset_orig'
> /usr/bin/ld: /tmp/ccK8kXAu.ltrans10.ltrans.o:(.data.rel.ro+0x40): undefined reference to `__memset'
> /usr/bin/ld: /tmp/ccK8kXAu.ltrans10.ltrans.o:(.data.rel+0x28): undefined reference to `memcpy_orig'
> /usr/bin/ld: /tmp/ccK8kXAu.ltrans10.ltrans.o:(.data.rel+0x40): undefined reference to `__memcpy'
> /usr/bin/ld: /tmp/ccK8kXAu.ltrans44.ltrans.o: in function `test__arch_unwind_sample':
> /home/irogers/kernel.org/tools/perf/arch/x86/tests/dwarf-unwind.c:72: undefined reference to `perf_regs_load'
> collect2: error: ld returned 1 exit status
> ```
> 
> The issue is that we build multiple .o files in a directory and then
> link them into a .o with "ld -r" (cmd_ld_multi). This early link step
> appears to trigger GCC to remove the .S file definition of the symbol
> and break the later link step (the perf-in.o shows perf_regs_load, for
> example, going from the text section to being undefined at the link
> step which doesn't happen with clang or without LTO). It is possible
> to work around this by taking the final perf link command and adding
> the .o files generated from .S back into it, namely:
> arch/x86/tests/regs_load.o
> bench/mem-memset-x86-64-asm.o
> bench/mem-memcpy-x86-64-asm.o
> 
> A quick performance check and the performance improvements from LTO
> are noticeable:
> 
> Non-LTO
> ```
> $ perf bench internals synthesize
>  # Running 'internals/synthesize' benchmark:
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
>   Average synthesis took: 202.216 usec (+- 0.160 usec)
>   Average num. events: 51.000 (+- 0.000)
>   Average time per event 3.965 usec
>   Average data synthesis took: 230.875 usec (+- 0.285 usec)
>   Average num. events: 271.000 (+- 0.000)
>   Average time per event 0.852 usec
> ```
> 
> LTO
> ```
> $ perf bench internals synthesize
>  # Running 'internals/synthesize' benchmark:
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
>   Average synthesis took: 104.530 usec (+- 0.074 usec)
>   Average num. events: 51.000 (+- 0.000)
>   Average time per event 2.050 usec
>   Average data synthesis took: 112.660 usec (+- 0.114 usec)
>   Average num. events: 273.000 (+- 0.000)
>   Average time per event 0.413 usec


Cool stuff! Applied locally, test building now on the container suite.

- Arnaldo

> ```
> 
> Ian Rogers (4):
>   perf stat: Avoid uninitialized use of perf_stat_config
>   perf parse-events: Avoid use uninitialized warning
>   perf test: Avoid weak symbol for arch_tests
>   perf build: Add LTO build option
> 
>  tools/perf/Makefile.config      |  5 +++++
>  tools/perf/tests/builtin-test.c | 11 ++++++++++-
>  tools/perf/tests/stat.c         |  2 +-
>  tools/perf/util/parse-events.c  |  2 +-
>  tools/perf/util/stat.c          |  2 +-
>  5 files changed, 18 insertions(+), 4 deletions(-)
> 
> -- 
> 2.41.0.487.g6d72f3e995-goog
>
Ian Rogers July 24, 2023, 9:48 p.m. UTC | #3
On Mon, Jul 24, 2023 at 2:15 PM Nick Desaulniers
<ndesaulniers@google.com> wrote:
>
> On Mon, Jul 24, 2023 at 1:12 PM Ian Rogers <irogers@google.com> wrote:
> >
> > Add a build flag, LTO=1, so that perf is built with the -flto
> > flag. Address some build errors this configuration throws up.
>
> Hi Ian,
> Thanks for the performance numbers. Any sense of what the build time
> numbers might look like for building perf with LTO?
>
> Does `-flto=thin` in clang's case make a meaningful difference of
> `-flto`? I'd recommend that over "full LTO" `-flto` when the
> performance difference of the result isn't too meaningful.  ThinLTO
> should be faster to build, but I don't know that I've ever built perf,
> so IDK what to expect.

Hi Nick,

I'm not sure how much the perf build will benefit from LTO to say
whether thin is good enough or not. Things like "perf record" are
designed to spend the majority of their time blocking on a poll system
call. We have benchmarks at least :-)

I grabbed some clang build times in an unscientific way on my loaded laptop:

no LTO
real    0m48.846s
user    3m11.452s
sys     0m29.598s

-flto=thin
real    0m55.910s
user    4m2.342s
sys     0m30.120s

real    0m50.330s
user    3m36.986s
sys     0m28.519s

-flto
real    1m12.002s
user    3m27.676s
sys     0m30.305s

real    1m5.187s
user    3m19.348s
sys     0m29.031s

So perhaps thin LTO increases total build time by 10%, whilst full LTO
increases it by 50%.
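
For reference, the timings above were collected along the lines of the
sketch below (the exact flags and -j level are illustrative, not what
was literally run):
```
# Rough wall-clock timing of a from-scratch build on a loaded laptop.
# Repeat with LTO=1 (and with -flto swapped for -flto=thin) to compare
# the three configurations.
make -C tools/perf clean
time make -C tools/perf -j"$(nproc)" CC=clang CXX=clang++ LD=ld.lld
```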

Gathering some clang performance numbers:

no LTO
$ perf bench internals synthesize
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
 Average synthesis took: 178.694 usec (+- 0.171 usec)
 Average num. events: 52.000 (+- 0.000)
 Average time per event 3.436 usec
 Average data synthesis took: 194.545 usec (+- 0.088 usec)
 Average num. events: 277.000 (+- 0.000)
 Average time per event 0.702 usec
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
 Average synthesis took: 175.381 usec (+- 0.105 usec)
 Average num. events: 52.000 (+- 0.000)
 Average time per event 3.373 usec
 Average data synthesis took: 188.980 usec (+- 0.071 usec)
 Average num. events: 278.000 (+- 0.000)
 Average time per event 0.680 usec

-flto=thin
$ perf bench internals synthesize
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
 Average synthesis took: 183.122 usec (+- 0.082 usec)
 Average num. events: 52.000 (+- 0.000)
 Average time per event 3.522 usec
 Average data synthesis took: 196.468 usec (+- 0.102 usec)
 Average num. events: 277.000 (+- 0.000)
 Average time per event 0.709 usec
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
 Average synthesis took: 177.684 usec (+- 0.094 usec)
 Average num. events: 52.000 (+- 0.000)
 Average time per event 3.417 usec
 Average data synthesis took: 190.079 usec (+- 0.077 usec)
 Average num. events: 275.000 (+- 0.000)
 Average time per event 0.691 usec

-flto
$ perf bench internals synthesize
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
 Average synthesis took: 112.599 usec (+- 0.040 usec)
 Average num. events: 52.000 (+- 0.000)
 Average time per event 2.165 usec
 Average data synthesis took: 119.012 usec (+- 0.070 usec)
 Average num. events: 278.000 (+- 0.000)
 Average time per event 0.428 usec
# Running 'internals/synthesize' benchmark:
Computing performance of single threaded perf event synthesis by
synthesizing events on the perf process itself:
 Average synthesis took: 107.606 usec (+- 0.147 usec)
 Average num. events: 52.000 (+- 0.000)
 Average time per event 2.069 usec
 Average data synthesis took: 114.633 usec (+- 0.159 usec)
 Average num. events: 279.000 (+- 0.000)
 Average time per event 0.411 usec

The performance win from thin LTO doesn't look to be there. Full LTO
appears to be reducing event synthesis time down to 60% of what it
was. The clang numbers are looking better than the GCC ones. I think
from this it makes sense to use -flto.

Thanks,
Ian

> --
> Thanks,
> ~Nick Desaulniers
Nick Desaulniers July 24, 2023, 10:27 p.m. UTC | #4
On Mon, Jul 24, 2023 at 2:48 PM Ian Rogers <irogers@google.com> wrote:
>
> On Mon, Jul 24, 2023 at 2:15 PM Nick Desaulniers
> <ndesaulniers@google.com> wrote:
> >
> > On Mon, Jul 24, 2023 at 1:12 PM Ian Rogers <irogers@google.com> wrote:
> > >
> > > Add a build flag, LTO=1, so that perf is built with the -flto
> > > flag. Address some build errors this configuration throws up.
> >
> > Hi Ian,
> > Thanks for the performance numbers. Any sense of what the build time
> > numbers might look like for building perf with LTO?
> >
> > Does `-flto=thin` in clang's case make a meaningful difference of
> > `-flto`? I'd recommend that over "full LTO" `-flto` when the
> > performance difference of the result isn't too meaningful.  ThinLTO
> > should be faster to build, but I don't know that I've ever built perf,
> > so IDK what to expect.
>
> Hi Nick,
>
> I'm not sure how much the perf build will benefit from LTO to say
> whether thin is good enough or not. Things like "perf record" are
> designed to spend the majority of their time blocking on a poll system
> call. We have benchmarks at least :-)
>
> I grabbed some clang build times in an unscientific way on my loaded laptop:
>
> no LTO
> real    0m48.846s
> user    3m11.452s
> sys     0m29.598s
>
> -flto=thin
> real    0m55.910s
> user    4m2.342s
> sys     0m30.120s
>
> real    0m50.330s
> user    3m36.986s
> sys     0m28.519s
>
> -flto
> real    1m12.002s
> user    3m27.676s
> sys     0m30.305s
>
> real    1m5.187s
> user    3m19.348s
> sys     0m29.031s
>
> So perhaps thin LTO increases total build time by 10%, whilst full LTO
> increases it by 50%.
>
> Gathering some clang performance numbers:
>
> no LTO
> $ perf bench internals synthesize
> # Running 'internals/synthesize' benchmark:
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
>  Average synthesis took: 178.694 usec (+- 0.171 usec)
>  Average num. events: 52.000 (+- 0.000)
>  Average time per event 3.436 usec
>  Average data synthesis took: 194.545 usec (+- 0.088 usec)
>  Average num. events: 277.000 (+- 0.000)
>  Average time per event 0.702 usec
> # Running 'internals/synthesize' benchmark:
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
>  Average synthesis took: 175.381 usec (+- 0.105 usec)
>  Average num. events: 52.000 (+- 0.000)
>  Average time per event 3.373 usec
>  Average data synthesis took: 188.980 usec (+- 0.071 usec)
>  Average num. events: 278.000 (+- 0.000)
>  Average time per event 0.680 usec
>
> -flto=thin
> $ perf bench internals synthesize
> # Running 'internals/synthesize' benchmark:
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
>  Average synthesis took: 183.122 usec (+- 0.082 usec)
>  Average num. events: 52.000 (+- 0.000)
>  Average time per event 3.522 usec
>  Average data synthesis took: 196.468 usec (+- 0.102 usec)
>  Average num. events: 277.000 (+- 0.000)
>  Average time per event 0.709 usec
> # Running 'internals/synthesize' benchmark:
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
>  Average synthesis took: 177.684 usec (+- 0.094 usec)
>  Average num. events: 52.000 (+- 0.000)
>  Average time per event 3.417 usec
>  Average data synthesis took: 190.079 usec (+- 0.077 usec)
>  Average num. events: 275.000 (+- 0.000)
>  Average time per event 0.691 usec
>
> -flto
> $ perf bench internals synthesize
> # Running 'internals/synthesize' benchmark:
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
>  Average synthesis took: 112.599 usec (+- 0.040 usec)
>  Average num. events: 52.000 (+- 0.000)
>  Average time per event 2.165 usec
>  Average data synthesis took: 119.012 usec (+- 0.070 usec)
>  Average num. events: 278.000 (+- 0.000)
>  Average time per event 0.428 usec
> # Running 'internals/synthesize' benchmark:
> Computing performance of single threaded perf event synthesis by
> synthesizing events on the perf process itself:
>  Average synthesis took: 107.606 usec (+- 0.147 usec)
>  Average num. events: 52.000 (+- 0.000)
>  Average time per event 2.069 usec
>  Average data synthesis took: 114.633 usec (+- 0.159 usec)
>  Average num. events: 279.000 (+- 0.000)
>  Average time per event 0.411 usec
>
> The performance win from thin LTO doesn't look to be there. Full LTO
> appears to be reducing event synthesis time down to 60% of what it
> was. The clang numbers are looking better than the GCC ones. I think
> from this it makes sense to use -flto.

Without any context, I'm not really sure what numbers are good vs. bad
("is larger better?").  More so I was curious if thinLTO perhaps got
most of the win without significant performance regressions. If not,
oh well, and if the slower full LTO has numbers that make sense to
other reviewers, well then *Chuck Norris thumbs up*.  Thanks for the
stats.

>
> Thanks,
> Ian
>
> > --
> > Thanks,
> > ~Nick Desaulniers
Ian Rogers July 24, 2023, 10:38 p.m. UTC | #5
On Mon, Jul 24, 2023 at 3:27 PM Nick Desaulniers
<ndesaulniers@google.com> wrote:
>
> On Mon, Jul 24, 2023 at 2:48 PM Ian Rogers <irogers@google.com> wrote:
> >
> > On Mon, Jul 24, 2023 at 2:15 PM Nick Desaulniers
> > <ndesaulniers@google.com> wrote:
> > >
> > > On Mon, Jul 24, 2023 at 1:12 PM Ian Rogers <irogers@google.com> wrote:
> > > >
> > > > Add a build flag, LTO=1, so that perf is built with the -flto
> > > > flag. Address some build errors this configuration throws up.
> > >
> > > Hi Ian,
> > > Thanks for the performance numbers. Any sense of what the build time
> > > numbers might look like for building perf with LTO?
> > >
> > > Does `-flto=thin` in clang's case make a meaningful difference of
> > > `-flto`? I'd recommend that over "full LTO" `-flto` when the
> > > performance difference of the result isn't too meaningful.  ThinLTO
> > > should be faster to build, but I don't know that I've ever built perf,
> > > so IDK what to expect.
> >
> > Hi Nick,
> >
> > I'm not sure how much the perf build will benefit from LTO to say
> > whether thin is good enough or not. Things like "perf record" are
> > designed to spend the majority of their time blocking on a poll system
> > call. We have benchmarks at least :-)
> >
> > I grabbed some clang build times in an unscientific way on my loaded laptop:
> >
> > no LTO
> > real    0m48.846s
> > user    3m11.452s
> > sys     0m29.598s
> >
> > -flto=thin
> > real    0m55.910s
> > user    4m2.342s
> > sys     0m30.120s
> >
> > real    0m50.330s
> > user    3m36.986s
> > sys     0m28.519s
> >
> > -flto
> > real    1m12.002s
> > user    3m27.676s
> > sys     0m30.305s
> >
> > real    1m5.187s
> > user    3m19.348s
> > sys     0m29.031s
> >
> > So perhaps thin LTO increases total build time by 10%, whilst full LTO
> > increases it by 50%.
> >
> > Gathering some clang performance numbers:
> >
> > no LTO
> > $ perf bench internals synthesize
> > # Running 'internals/synthesize' benchmark:
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> >  Average synthesis took: 178.694 usec (+- 0.171 usec)
> >  Average num. events: 52.000 (+- 0.000)
> >  Average time per event 3.436 usec
> >  Average data synthesis took: 194.545 usec (+- 0.088 usec)
> >  Average num. events: 277.000 (+- 0.000)
> >  Average time per event 0.702 usec
> > # Running 'internals/synthesize' benchmark:
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> >  Average synthesis took: 175.381 usec (+- 0.105 usec)
> >  Average num. events: 52.000 (+- 0.000)
> >  Average time per event 3.373 usec
> >  Average data synthesis took: 188.980 usec (+- 0.071 usec)
> >  Average num. events: 278.000 (+- 0.000)
> >  Average time per event 0.680 usec
> >
> > -flto=thin
> > $ perf bench internals synthesize
> > # Running 'internals/synthesize' benchmark:
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> >  Average synthesis took: 183.122 usec (+- 0.082 usec)
> >  Average num. events: 52.000 (+- 0.000)
> >  Average time per event 3.522 usec
> >  Average data synthesis took: 196.468 usec (+- 0.102 usec)
> >  Average num. events: 277.000 (+- 0.000)
> >  Average time per event 0.709 usec
> > # Running 'internals/synthesize' benchmark:
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> >  Average synthesis took: 177.684 usec (+- 0.094 usec)
> >  Average num. events: 52.000 (+- 0.000)
> >  Average time per event 3.417 usec
> >  Average data synthesis took: 190.079 usec (+- 0.077 usec)
> >  Average num. events: 275.000 (+- 0.000)
> >  Average time per event 0.691 usec
> >
> > -flto
> > $ perf bench internals synthesize
> > # Running 'internals/synthesize' benchmark:
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> >  Average synthesis took: 112.599 usec (+- 0.040 usec)
> >  Average num. events: 52.000 (+- 0.000)
> >  Average time per event 2.165 usec
> >  Average data synthesis took: 119.012 usec (+- 0.070 usec)
> >  Average num. events: 278.000 (+- 0.000)
> >  Average time per event 0.428 usec
> > # Running 'internals/synthesize' benchmark:
> > Computing performance of single threaded perf event synthesis by
> > synthesizing events on the perf process itself:
> >  Average synthesis took: 107.606 usec (+- 0.147 usec)
> >  Average num. events: 52.000 (+- 0.000)
> >  Average time per event 2.069 usec
> >  Average data synthesis took: 114.633 usec (+- 0.159 usec)
> >  Average num. events: 279.000 (+- 0.000)
> >  Average time per event 0.411 usec
> >
> > The performance win from thin LTO doesn't look to be there. Full LTO
> > appears to be reducing event synthesis time down to 60% of what it
> > was. The clang numbers are looking better than the GCC ones. I think
> > from this it makes sense to use -flto.
>
> Without any context, I'm not really sure what numbers are good vs. bad
> ("is larger better?").  More so I was curious if thinLTO perhaps got
> most of the win without significant performance regressions. If not,
> oh well, and if the slower full LTO has numbers that make sense to
> other reviewers, well then *Chuck Norris thumbs up*.  Thanks for the
> stats.

I can at least explain the stats. When perf starts it has to
"synthesize" the state of the machine: it generates fake events to
describe the mmaps in each process by reading /proc. This is done most
typically so a virtual address can be turned into a filename and line
number. Generally this is done for the text part of a binary but it
may also be done for the data. Large systems may take a long time to
synthesize all the state for, hence the benchmark.
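
Roughly speaking, the raw material for those synthetic mmap events is
the per-process map data in /proc, e.g. (illustrative only):
```
# Each /proc/<pid>/maps line gives an address range, permissions, file
# offset and backing file; perf walks this to build its fake mmap events.
head -n 3 /proc/self/maps
```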

The result I normally look at above is the "Average synthesis took"
line, so without LTO or with thin LTO synthesis is taking approx. 180
microseconds. With full LTO it drops to around 110 microseconds, which
could be a noticeable start-up time win.

Thanks,
Ian

> >
> > Thanks,
> > Ian
> >
> > > --
> > > Thanks,
> > > ~Nick Desaulniers
>
>
>
> --
> Thanks,
> ~Nick Desaulniers