Message ID | 20200922020148.3261797-1-riel@surriel.com (mailing list archive) |
---|---|
Series | mm,swap: skip swap readahead for instant IO (like zswap) |
On Mon, 21 Sep 2020 22:01:46 -0400 Rik van Riel <riel@surriel.com> wrote:

> Both with frontswap/zswap, and with some extremely fast IO devices,
> swap IO will be done before the "asynchronous" swap_readpage() call
> has returned.
>
> In that case, doing swap readahead only wastes memory, increases
> latency, and increases the chances of needing to evict something more
> useful from memory. In that case, just skip swap readahead.

Any quantitative testing results?
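The idea under discussion, in sketch form: when the swap backend finishes the read before swap_readpage() returns, go straight for the faulting page instead of going through the readahead path. This is only an illustration, not the posted patch; `swap_io_is_instant()` is a made-up predicate standing in for whatever test the actual patch uses (frontswap/zswap present, or a device flagged for synchronous IO).

```c
/*
 * Illustrative sketch only, not the posted patch. When the backing
 * store completes reads "instantly" (zswap/frontswap, or a very fast
 * device), readahead just decompresses pages that may never be used,
 * so read only the faulting page.
 */
static struct page *swapin_page_for_fault(swp_entry_t entry, gfp_t gfp_mask,
					  struct vm_fault *vmf)
{
	/* swap_io_is_instant() is hypothetical; see note above. */
	if (swap_io_is_instant(entry))
		return read_swap_cache_async(entry, gfp_mask, vmf->vma,
					     vmf->address, true);

	/* Slow devices still benefit from clustered/VMA readahead. */
	return swapin_readahead(entry, gfp_mask, vmf);
}
```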
On Tue, 2020-09-22 at 10:12 -0700, Andrew Morton wrote:
> On Mon, 21 Sep 2020 22:01:46 -0400 Rik van Riel <riel@surriel.com>
> wrote:
>
> > Both with frontswap/zswap, and with some extremely fast IO devices,
> > swap IO will be done before the "asynchronous" swap_readpage() call
> > has returned.
> >
> > In that case, doing swap readahead only wastes memory, increases
> > latency, and increases the chances of needing to evict something
> > more useful from memory. In that case, just skip swap readahead.
>
> Any quantitative testing results?

I have test results with a real workload now.

Without this patch, enabling zswap results in about an 8% increase
in p99 request latency. With these patches, the latency penalty for
enabling zswap is under 1%.

Enabling zswap allows us to give the main workload a little more
memory, since the spikes in memory demand caused by things like
system management software no longer cause large latency issues.
On Mon, 2020-10-05 at 13:32 -0400, Rik van Riel wrote:
> On Tue, 2020-09-22 at 10:12 -0700, Andrew Morton wrote:
> > On Mon, 21 Sep 2020 22:01:46 -0400 Rik van Riel <riel@surriel.com>
> > wrote:
> > Any quantitative testing results?
>
> I have test results with a real workload now.
>
> Without this patch, enabling zswap results in about an
> 8% increase in p99 request latency. With these patches,
> the latency penalty for enabling zswap is under 1%.

Never mind that. On larger tests the effect seems to disappear,
probably because the logic in __swapin_nr_pages() already reduces
the number of pages read ahead to 2 on workloads with lots of
random access. That reduces the latency effects observed.

Now we might still see some memory waste due to decompressing pages
we don't need, but I have not seen any real effects from that yet,
either.

I think it may be time to focus on a larger memory waste with zswap:
leaving the compressed copy of memory around when we decompress the
memory at swapin time. More aggressively freeing the compressed
memory will probably buy us more than reducing readahead.
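For context on the point about __swapin_nr_pages(): the readahead window there adapts to recent readahead hits, so a random-access workload collapses it to one or two pages on its own. A simplified sketch of that heuristic follows (not a verbatim copy of mm/swap_state.c; details may differ):

```c
/*
 * Simplified sketch of the adaptive readahead window heuristic in
 * __swapin_nr_pages() (mm/swap_state.c); not the exact kernel code.
 * The window grows with recent readahead hits and shrinks back toward
 * a single page when hits stop coming.
 */
static unsigned int swapin_window_pages(unsigned long prev_offset,
					unsigned long offset,
					unsigned int hits,
					unsigned int max_pages,
					unsigned int prev_win)
{
	unsigned int pages = hits + 2;

	if (pages == 2) {
		/* No hits to judge by: only read ahead for adjacent faults. */
		if (offset != prev_offset + 1 && offset != prev_offset - 1)
			pages = 1;
	} else {
		/* Round the window up to a power of two. */
		unsigned int roundup = 4;

		while (roundup < pages)
			roundup <<= 1;
		pages = roundup;
	}

	if (pages > max_pages)
		pages = max_pages;

	/* Do not shrink the window faster than halving per fault. */
	if (pages < prev_win / 2)
		pages = prev_win / 2;

	return pages;
}
```

With no readahead hits and non-adjacent fault offsets this returns only one or two pages, which matches the observation above that the readahead cost mostly vanishes on random-access workloads.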