Message ID: 20200103210509.29237.18426.stgit@localhost.localdomain
Series: mm / virtio: Provide support for free page reporting
On 1/3/20 4:16 PM, Alexander Duyck wrote:
> This series provides an asynchronous means of reporting free guest pages
> to a hypervisor so that the memory associated with those pages can be
> dropped and reused by other processes and/or guests on the host. Using
> this it is possible to avoid unnecessary I/O to disk and greatly improve
> performance in the case of memory overcommit on the host.
>
> When enabled we will be performing a scan of free memory every 2 seconds
> while pages of sufficiently high order are being freed. In each pass at
> least one sixteenth of each free list will be reported. By doing this we
> avoid racing against other threads that may be causing a high amount of
> memory churn.
>
> The lowest page order currently scanned when reporting pages is
> pageblock_order so that this feature will not interfere with the use of
> Transparent Huge Pages in the case of virtualization.
>
> Currently this is only in use by virtio-balloon, however there is the hope
> that at some point in the future other hypervisors might be able to make
> use of it. In the virtio-balloon/QEMU implementation the hypervisor is
> currently using MADV_DONTNEED to indicate to the host kernel that the page
> is currently free. It will be zeroed and faulted back into the guest the
> next time the page is accessed.
>
> To track if a page is reported or not the Uptodate flag was repurposed and
> used as a Reported flag for Buddy pages. We walk through the free list
> isolating pages and adding them to the scatterlist until we either
> encounter the end of the list, have processed as many pages as were listed
> in nr_free prior to us starting, or have filled the scatterlist with pages
> to be reported. If we fill the scatterlist before we reach the end of the
> list we rotate the list so that the first unreported page we encounter is
> moved to the head of the list, as that is where we will resume after we
> have freed the reported pages back into the tail of the list.
>
> Below are the results from various benchmarks. I primarily focused on two
> tests. The first is the will-it-scale/page_fault2 test, and the other is
> a modified version of will-it-scale/page_fault1 that was enabled to use
> THP. I did this as it allows for better visibility into different parts
> of the memory subsystem. The guest is running with 32G of RAM on one
> node of an E5-2630 v3. The host has had some features such as CPU turbo
> disabled in the BIOS.
>
> Test                   page_fault1 (THP)     page_fault2
> Name            tasks  Process Iter  STDEV   Process Iter  STDEV
> Baseline            1   1012402.50   0.14%    361855.25    0.81%
>                    16   8827457.25   0.09%   3282347.00    0.34%
>
> Patches Applied     1   1007897.00   0.23%    361887.00    0.26%
>                    16   8784741.75   0.39%   3240669.25    0.48%
>
> Patches Enabled     1   1010227.50   0.39%    359749.25    0.56%
>                    16   8756219.00   0.24%   3226608.75    0.97%
>
> Patches Enabled     1   1050982.00   4.26%    357966.25    0.14%
>  page shuffle      16   8672601.25   0.49%   3223177.75    0.40%
>
> Patches Enabled     1   1003238.00   0.22%    360211.00    0.22%
>  shuffle w/ RFC    16   8767010.50   0.32%   3199874.00    0.71%

Just to be sure that I understand your test setup correctly:
- You have a 32GB guest with a single node affined to a single node of your
  host (E5-2630).
- You have THP enabled in both the host and the guest and set to 'madvise'.
- On top of the default x86_64 config and other virtio config options you
  have CONFIG_SLAB_FREELIST_RANDOM and CONFIG_SHUFFLE_PAGE_ALLOCATOR enabled
  for the third observation (Patches Enabled page shuffle).
Did I miss anything?
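As a rough illustration of the MADV_DONTNEED step described in the cover
letter above (this is not the actual QEMU code; the helper name and the
address range are placeholders), the host-side discard of a reported range
boils down to a single madvise() call, with MADV_FREE being the lazier
alternative used by the RFC patch mentioned later in the thread:

/* Hypothetical sketch of the host-side discard of a reported range.
 * host_addr/len would come from translating the reported guest PFNs;
 * they are placeholders here. */
#include <sys/mman.h>
#include <stdio.h>

static int discard_reported_range(void *host_addr, size_t len)
{
        /* Drop the backing pages now; the next guest access faults in a
         * fresh zeroed page. An MADV_FREE variant would instead let the
         * host reclaim the pages lazily under memory pressure. */
        if (madvise(host_addr, len, MADV_DONTNEED) < 0) {
                perror("madvise(MADV_DONTNEED)");
                return -1;
        }
        return 0;
}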
Can you also remind me why you have skipped reporting the threads count as
part of the page_fault tests? Was it because you were observing different
values with every fresh boot?

> The results above are for a baseline with a linux-next-20191219 kernel,
> that kernel with this patch set applied but page reporting disabled in
> virtio-balloon, the patches applied and page reporting fully enabled, the
> patches enabled with page shuffling enabled, and the patches applied with
> page shuffling enabled and an RFC patch that makes use of MADV_FREE in
> QEMU. These results include the deviation seen between the average value
> reported here versus the high and/or low value. I observed that during the
> test memory usage for the first three tests never dropped whereas with the
> patches fully enabled the VM would drop to using only a few GB of the
> host's memory when switching from memhog to page fault tests.

Do you mean that in the latter case you run the page fault tests after
memhog? If so, how much memory do you pass to memhog?
On Wed, 2020-01-08 at 02:57 -0500, Nitesh Narayan Lal wrote:
> On 1/3/20 4:16 PM, Alexander Duyck wrote:

<snip>

> > Below are the results from various benchmarks. I primarily focused on two
> > tests. The first is the will-it-scale/page_fault2 test, and the other is
> > a modified version of will-it-scale/page_fault1 that was enabled to use
> > THP. I did this as it allows for better visibility into different parts
> > of the memory subsystem. The guest is running with 32G of RAM on one
> > node of an E5-2630 v3. The host has had some features such as CPU turbo
> > disabled in the BIOS.
> >
> > Test                   page_fault1 (THP)     page_fault2
> > Name            tasks  Process Iter  STDEV   Process Iter  STDEV
> > Baseline            1   1012402.50   0.14%    361855.25    0.81%
> >                    16   8827457.25   0.09%   3282347.00    0.34%
> >
> > Patches Applied     1   1007897.00   0.23%    361887.00    0.26%
> >                    16   8784741.75   0.39%   3240669.25    0.48%
> >
> > Patches Enabled     1   1010227.50   0.39%    359749.25    0.56%
> >                    16   8756219.00   0.24%   3226608.75    0.97%
> >
> > Patches Enabled     1   1050982.00   4.26%    357966.25    0.14%
> >  page shuffle      16   8672601.25   0.49%   3223177.75    0.40%
> >
> > Patches Enabled     1   1003238.00   0.22%    360211.00    0.22%
> >  shuffle w/ RFC    16   8767010.50   0.32%   3199874.00    0.71%
>
> Just to be sure that I understand your test setup correctly:
> - You have a 32GB guest with a single node affined to a single node of your
>   host (E5-2630).
> - You have THP enabled in both the host and the guest and set to 'madvise'.
> - On top of the default x86_64 config and other virtio config options you
>   have CONFIG_SLAB_FREELIST_RANDOM and CONFIG_SHUFFLE_PAGE_ALLOCATOR enabled
>   for the third observation (Patches Enabled page shuffle).
> Did I miss anything?

So the only thing I think you overlooked was that CPU turbo was disabled in
the BIOS. Without that my numbers were much more unpredictable, as the CPUs
were turboing up and down and giving me inconsistent results.

Also, one thing I forgot to mention is that I had to modify the grub kernel
command line to include page_alloc.shuffle=Y so that the page shuffling was
actually active.

> Can you also remind me why you have skipped reporting the threads count as
> part of the page_fault tests? Was it because you were observing different
> values with every fresh boot?

Mainly because the threads test gave me data that was all over the place at
higher task counts and because it doesn't scale as well as the processes
test case. The averages between the two worked out to be about the same, but
the standard deviation was maxing out at 7% for the baseline and 8% for the
patches enabled case. However, the difference in the averages is still less
than 1%. So, for example, the same data using the threads values for
Baseline vs Patches Enabled comes out as follows:

Baseline            1   1133900.25   0.24%    358395.25    0.30%
                   16   5848684.75   6.96%   2181989.00    1.69%

Patches Enabled     1   1132748.50   0.20%    356615.00    0.11%
                   16   5796647.00   8.38%   2160475.50    1.84%

> > The results above are for a baseline with a linux-next-20191219 kernel,
> > that kernel with this patch set applied but page reporting disabled in
> > virtio-balloon, the patches applied and page reporting fully enabled, the
> > patches enabled with page shuffling enabled, and the patches applied with
> > page shuffling enabled and an RFC patch that makes use of MADV_FREE in
> > QEMU. These results include the deviation seen between the average value
> > reported here versus the high and/or low value.
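For anyone trying to reproduce the page shuffling configuration described
above, the pieces named in the thread would look roughly like the fragment
below (a sketch only; how the command line gets updated depends on the
distribution's grub setup):

# Kernel config options named in the thread
CONFIG_SLAB_FREELIST_RANDOM=y
CONFIG_SHUFFLE_PAGE_ALLOCATOR=y

# Added to the kernel command line (e.g. via GRUB_CMDLINE_LINUX in
# /etc/default/grub) so that page shuffling is actually active at boot
page_alloc.shuffle=Y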
> > I observed that during the test memory usage for the first three tests
> > never dropped whereas with the patches fully enabled the VM would drop to
> > using only a few GB of the host's memory when switching from memhog to
> > page fault tests.
>
> Do you mean that in the latter case you run the page fault tests after
> memhog? If so, how much memory do you pass to memhog?

For every test I would run memhog 32g in the guest to make sure all memory
was allocated at least once before running the page fault tests. I was using
that to make certain that the page reporting was working before running the
test. That way the baseline gives more consistent results, as we don't have
to worry about there being any memory the guest has yet to fault in.
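The memhog 32g warm-up described here amounts to allocating and touching the
full guest RAM once so that every page has been faulted in before the
benchmarks run (and, once freed, becomes a candidate for reporting). A
minimal C sketch of that behaviour, with the size hard-coded to match the
32G guest rather than taken from the real memhog source, might be:

#include <stdlib.h>
#include <string.h>

int main(void)
{
        size_t size = 32UL << 30;        /* 32 GiB, matching the guest RAM */
        char *buf = malloc(size);

        if (!buf)
                return 1;

        /* Write every page so it is actually faulted in, not just reserved. */
        memset(buf, 1, size);

        /* Freed pages can then be reported back to the hypervisor. */
        free(buf);
        return 0;
}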
On Fri, 2020-01-03 at 13:16 -0800, Alexander Duyck wrote:
> This series provides an asynchronous means of reporting free guest pages
> to a hypervisor so that the memory associated with those pages can be
> dropped and reused by other processes and/or guests on the host. Using
> this it is possible to avoid unnecessary I/O to disk and greatly improve
> performance in the case of memory overcommit on the host.

<snip>

> Changes from v15:
> https://lore.kernel.org/lkml/20191205161928.19548.41654.stgit@localhost.localdomain/
> Rebased on linux-next-20191219
> Split out patches for budget and moving head to last page processed
> Updated budget code to reduce how much memory is reported per pass
> Added logic to also rotate the list if we exit due to a page isolation failure
> Added migratetype as argument in __putback_isolated_page

It's been about a week and a half since I posted the set and I haven't really
gotten much feedback other than a suggestion of a slight tweak to the titles
for patches 7 & 8 to mention page_reporting. I'm mainly looking for input on
patches 3, 4, 7, and 8 since those are the ones that contain most of the
changes based on recent feedback. I'm wondering if there are any remaining
concerns, or if these patches are in a state where they are ready to be
pulled into the MM tree?

Thanks.

- Alex