Message ID | 20190306155048.12868-1-nitesh@redhat.com (mailing list archive) |
---|---|
Series | KVM: Guest Free Page Hinting |
On 3/6/19 11:09 AM, Michael S. Tsirkin wrote: > On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote: >> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >> >> Benefit: >> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >> >> Changelog in v9: >> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >> * Dynamically allocated space is used to hold the isolated guest free pages. >> * All the pages are reported asynchronously to the host via virtio driver. >> * Pages are returned back to the guest buddy free list only when the host response is received. >> >> Pending items: >> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >> * Compare reporting free pages via vring with vhost. >> * Decide between MADV_DONTNEED and MADV_FREE. >> * Analyze overall performance impact due to guest free page hinting. >> * Come up with proper/traceable error-message/logs. >> >> Tests: >> 1. Use-case - Number of guests we can launch >> >> NUMA Nodes = 1 with 15 GB memory >> Guest Memory = 5 GB >> Number of cores in guest = 1 >> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. >> Procedure = >> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. >> >> Results: >> Without hinting = 3 >> With hinting = 5 >> >> 2. Hackbench >> Guest Memory = 5 GB >> Number of cores = 4 >> Number of tasks Time with Hinting Time without Hinting >> 4000 19.540 17.818 >> > How about memhog btw? > Alex reported: > > My testing up till now has consisted of setting up 4 8GB VMs on a system > with 32GB of memory and 4GB of swap. To stress the memory on the system I > would run "memhog 8G" sequentially on each of the guests and observe how > long it took to complete the run. The observed behavior is that on the > systems with these patches applied in both the guest and on the host I was > able to complete the test with a time of 5 to 7 seconds per guest. On a > system without these patches the time ranged from 7 to 49 seconds per > guest. I am assuming the variability is due to time being spent writing > pages out to disk in order to free up space for the guest. 
Here are the results:

Procedure: 3 guests of size 5GB are launched on a single NUMA node with
total memory of 15GB and no swap. In each of the guests, memhog is run
with 5GB. Post-execution of memhog, host memory usage is monitored by
using the free command.

Without Hinting:
             Time of execution     Host used memory
Guest 1:     45 seconds            5.4 GB
Guest 2:     45 seconds            10 GB
Guest 3:     1 minute              15 GB

With Hinting:
             Time of execution     Host used memory
Guest 1:     49 seconds            2.4 GB
Guest 2:     40 seconds            4.3 GB
Guest 3:     50 seconds            6.3 GB
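To make the flow in the v9 changelog above concrete: a hook runs after a freed page has merged in the buddy, chunks of order FREE_PAGE_HINTING_MIN_ORDER (currently MAX_ORDER - 1) are captured into a per-CPU array, the captured chunks are sorted by zone, isolated, and reported asynchronously to the host over virtio, and they go back to the buddy free list only once the host responds. The plain-C model below only sketches that capture path; every name in it (capture_free_page, CAPTURE_CAPACITY, the chosen order value) is invented for illustration and is not taken from the actual patches.

/*
 * Userspace model of the capture path described in the v9 changelog.
 * This is not the patch code; names and sizes are illustrative only.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define HINT_MIN_ORDER   10   /* stands in for FREE_PAGE_HINTING_MIN_ORDER */
#define CAPTURE_CAPACITY 16   /* per-CPU array size, hypothetical */

struct captured_chunk {
    unsigned long pfn;    /* first page frame of the free chunk */
    unsigned int order;   /* buddy order, >= HINT_MIN_ORDER */
    int zone;             /* zone id, used to sort before isolation */
};

/* One array per CPU in the real series; a single array suffices here. */
static struct captured_chunk capture[CAPTURE_CAPACITY];
static size_t captured;

/* Conceptually invoked after a freed page has merged in the buddy. */
static bool capture_free_page(unsigned long pfn, unsigned int order, int zone)
{
    if (order < HINT_MIN_ORDER)
        return false;  /* lower orders are never captured */

    capture[captured++] = (struct captured_chunk){ pfn, order, zone };

    if (captured == CAPTURE_CAPACITY) {
        /*
         * Real flow: sort by zone (one zone lock per zone), isolate the
         * chunks, report them asynchronously via the virtio driver, and
         * return them to the buddy only when the host responds.
         */
        printf("reporting %zu chunks to the host\n", captured);
        captured = 0;
    }
    return true;
}

int main(void)
{
    for (unsigned long pfn = 0; pfn < 64 * 1024; pfn += 1024)
        capture_free_page(pfn, HINT_MIN_ORDER, 0);
    return 0;
}

The point of the order threshold is visible even in this toy: only large, already-merged chunks ever enter the array, which is what keeps the per-CPU window from filling up with low-order pages as in the earlier series.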
On 3/6/19 1:12 PM, Michael S. Tsirkin wrote: > On Wed, Mar 06, 2019 at 01:07:50PM -0500, Nitesh Narayan Lal wrote: >> On 3/6/19 11:09 AM, Michael S. Tsirkin wrote: >>> On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote: >>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>> >>>> Benefit: >>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>> >>>> Changelog in v9: >>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>>> * All the pages are reported asynchronously to the host via virtio driver. >>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>>> >>>> Pending items: >>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >>>> * Compare reporting free pages via vring with vhost. >>>> * Decide between MADV_DONTNEED and MADV_FREE. >>>> * Analyze overall performance impact due to guest free page hinting. >>>> * Come up with proper/traceable error-message/logs. >>>> >>>> Tests: >>>> 1. Use-case - Number of guests we can launch >>>> >>>> NUMA Nodes = 1 with 15 GB memory >>>> Guest Memory = 5 GB >>>> Number of cores in guest = 1 >>>> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. >>>> Procedure = >>>> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. >>>> >>>> Results: >>>> Without hinting = 3 >>>> With hinting = 5 >>>> >>>> 2. Hackbench >>>> Guest Memory = 5 GB >>>> Number of cores = 4 >>>> Number of tasks Time with Hinting Time without Hinting >>>> 4000 19.540 17.818 >>>> >>> How about memhog btw? >>> Alex reported: >>> >>> My testing up till now has consisted of setting up 4 8GB VMs on a system >>> with 32GB of memory and 4GB of swap. To stress the memory on the system I >>> would run "memhog 8G" sequentially on each of the guests and observe how >>> long it took to complete the run. The observed behavior is that on the >>> systems with these patches applied in both the guest and on the host I was >>> able to complete the test with a time of 5 to 7 seconds per guest. 
On a >>> system without these patches the time ranged from 7 to 49 seconds per >>> guest. I am assuming the variability is due to time being spent writing >>> pages out to disk in order to free up space for the guest. >>> >> Here are the results: >> >> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >> total memory of 15GB and no swap. In each of the guest, memhog is run >> with 5GB. Post-execution of memhog, Host memory usage is monitored by >> using Free command. >> >> Without Hinting: >> Time of execution Host used memory >> Guest 1: 45 seconds 5.4 GB >> Guest 2: 45 seconds 10 GB >> Guest 3: 1 minute 15 GB >> >> With Hinting: >> Time of execution Host used memory >> Guest 1: 49 seconds 2.4 GB >> Guest 2: 40 seconds 4.3 GB >> Guest 3: 50 seconds 6.3 GB > OK so no improvement. If we are looking in terms of memory we are getting back from the guest, then there is an improvement. However, if we are looking at the improvement in terms of time of execution of memhog then yes there is none. > OTOH Alex's patches cut time down to 5-7 seconds > which seems better. I haven't investigated memhog as such so cannot comment on what exactly it does and why there was a time difference. I can take a look at it. > Want to try testing Alex's patches for comparison? Somehow I am not in a favor of doing a hypercall on every page (with huge TLB order/MAX_ORDER -1) as I think it will be costly. I can try using Alex's host side logic instead of virtio. Let me know what you think? >
On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: > > Want to try testing Alex's patches for comparison? > Somehow I am not in a favor of doing a hypercall on every page (with > huge TLB order/MAX_ORDER -1) as I think it will be costly. > I can try using Alex's host side logic instead of virtio. > Let me know what you think? I am just saying maybe your setup is misconfigured that's why you see no speedup. If you try Alex's patches and *don't* see speedup like he does, then he might be able to help you figure out why. OTOH if you do then *you* can try figuring out why don't your patches help.
On 3/6/19 1:38 PM, Michael S. Tsirkin wrote: > On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: >>> Want to try testing Alex's patches for comparison? >> Somehow I am not in a favor of doing a hypercall on every page (with >> huge TLB order/MAX_ORDER -1) as I think it will be costly. >> I can try using Alex's host side logic instead of virtio. >> Let me know what you think? > I am just saying maybe your setup is misconfigured > that's why you see no speedup. Got it. > If you try Alex's > patches and *don't* see speedup like he does, then > he might be able to help you figure out why. > OTOH if you do then *you* can try figuring out why > don't your patches help. Yeap, I can do that. Thanks. >
On Wed, Mar 6, 2019 at 10:41 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > > On 3/6/19 1:38 PM, Michael S. Tsirkin wrote: > > On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: > >>> Want to try testing Alex's patches for comparison? > >> Somehow I am not in a favor of doing a hypercall on every page (with > >> huge TLB order/MAX_ORDER -1) as I think it will be costly. > >> I can try using Alex's host side logic instead of virtio. > >> Let me know what you think? > > I am just saying maybe your setup is misconfigured > > that's why you see no speedup. > Got it. > > If you try Alex's > > patches and *don't* see speedup like he does, then > > he might be able to help you figure out why. > > OTOH if you do then *you* can try figuring out why > > don't your patches help. > Yeap, I can do that. > Thanks. If I can get your patches up and running I can probably try the same test I did to see if I am able to reproduce the behavior. It may take a bit though as I am running into several merge conflicts that I am having to sort out. Thanks. - Alex
On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: > >> Here are the results: > >> > >> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with > >> total memory of 15GB and no swap. In each of the guest, memhog is run > >> with 5GB. Post-execution of memhog, Host memory usage is monitored by > >> using Free command. > >> > >> Without Hinting: > >> Time of execution Host used memory > >> Guest 1: 45 seconds 5.4 GB > >> Guest 2: 45 seconds 10 GB > >> Guest 3: 1 minute 15 GB > >> > >> With Hinting: > >> Time of execution Host used memory > >> Guest 1: 49 seconds 2.4 GB > >> Guest 2: 40 seconds 4.3 GB > >> Guest 3: 50 seconds 6.3 GB > > OK so no improvement. > If we are looking in terms of memory we are getting back from the guest, > then there is an improvement. However, if we are looking at the > improvement in terms of time of execution of memhog then yes there is none. Yes but the way I see it you can't overcommit this unused memory since guests can start using it at any time. You timed it carefully such that this does not happen, but what will cause this timing on real guests? So the real reason to want this is to avoid need for writeback on free pages. Right? > > OTOH Alex's patches cut time down to 5-7 seconds > > which seems better. > I haven't investigated memhog as such so cannot comment on what exactly > it does and why there was a time difference. I can take a look at it. > > Want to try testing Alex's patches for comparison? > Somehow I am not in a favor of doing a hypercall on every page (with > huge TLB order/MAX_ORDER -1) as I think it will be costly. > I can try using Alex's host side logic instead of virtio. > Let me know what you think? > > > -- > Regards > Nitesh >
On 06.03.19 19:43, Michael S. Tsirkin wrote: > On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: >>>> Here are the results: >>>> >>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >>>> total memory of 15GB and no swap. In each of the guest, memhog is run >>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by >>>> using Free command. >>>> >>>> Without Hinting: >>>> Time of execution Host used memory >>>> Guest 1: 45 seconds 5.4 GB >>>> Guest 2: 45 seconds 10 GB >>>> Guest 3: 1 minute 15 GB >>>> >>>> With Hinting: >>>> Time of execution Host used memory >>>> Guest 1: 49 seconds 2.4 GB >>>> Guest 2: 40 seconds 4.3 GB >>>> Guest 3: 50 seconds 6.3 GB >>> OK so no improvement. >> If we are looking in terms of memory we are getting back from the guest, >> then there is an improvement. However, if we are looking at the >> improvement in terms of time of execution of memhog then yes there is none. > > Yes but the way I see it you can't overcommit this unused memory > since guests can start using it at any time. You timed it carefully > such that this does not happen, but what will cause this timing on real > guests? Whenever you overcommit you will need backup swap. There is no way around it. It just makes the probability of you having to go to disk less likely. If you assume that all of your guests will be using all of their memory all the time, you don't have to think about overcommiting memory in the first place. But this is not what we usually have. > > So the real reason to want this is to avoid need for writeback on free > pages. > > Right?
On 3/6/19 1:00 PM, Alexander Duyck wrote: > On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >> >> Benefit: >> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >> >> Changelog in v9: >> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. > Without a kthread this has the potential to get really ugly really > fast. If we are going to run asynchronously we should probably be > truly asynchonous and just place a few pieces of data in the page that > a worker thread can use to identify which pages have been hinted and > which pages have not. Can you please explain what do you mean by truly asynchronous? With this implementation also I am not reporting the pages synchronously. > Then we can have that one thread just walking > through the zone memory pulling out fixed size pieces at a time and > providing hints on that. By doing that we avoid the potential of > creating a batch of pages that eat up most of the system memory. > >> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >> * Dynamically allocated space is used to hold the isolated guest free pages. > I have concerns that doing this per CPU and allocating memory > dynamically can result in you losing a significant amount of memory as > it sits waiting to be hinted. It should not as the buddy will keep merging the pages and we are only capturing MAX_ORDER - 1. This was the issue with the last patch-series when I was capturing all order pages resulting in the per-cpu array to be filled with lower order pages. > >> * All the pages are reported asynchronously to the host via virtio driver. >> * Pages are returned back to the guest buddy free list only when the host response is received. > I have been thinking about this. Instead of stealing the page couldn't > you simply flag it that there is a hint in progress and simply wait in > arch_alloc_page until the hint has been processed? With the flag, I am assuming you mean to block the allocation until hinting is going on, which is an issue. That was one of the issues discussed earlier which I wanted to solve with this implementation. > The problem is in > stealing pages you are going to introduce false OOM issues when the > memory isn't available because it is being hinted on. I think this situation will arise when the guest is under memory pressure. In such situations any attempt to perform isolation will anyways fail and we may not be reporting anything at that time. > >> Pending items: >> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >> * Compare reporting free pages via vring with vhost. 
>> * Decide between MADV_DONTNEED and MADV_FREE. >> * Analyze overall performance impact due to guest free page hinting. >> * Come up with proper/traceable error-message/logs. > I'll try applying these patches and see if I can reproduce the results > you reported. Thanks. Let me know if you run into any issues. > With the last patch set I couldn't reproduce the results > as you reported them. If I remember correctly then the last time you only tried with multiple vcpus and not with 1 vcpu. > It has me wondering if you were somehow seeing > the effects of a balloon instead of the actual memory hints as I > couldn't find any evidence of the memory ever actually being freed > back by the hints functionality. Can you please elaborate what kind of evidence you are looking for? I did trace the hints on the QEMU/host side. > > Also do you have any idea if this patch set will work with an SMP > setup or is it still racy? I might try enabling SMP in my environment > to see if I can test the scalability of the VM with something like a > will-it-scale test. I did try running page_fault1_threads in will-it-scale with 4 vcpus. It didn't give me any issue. > >> Tests: >> 1. Use-case - Number of guests we can launch >> >> NUMA Nodes = 1 with 15 GB memory >> Guest Memory = 5 GB >> Number of cores in guest = 1 >> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. >> Procedure = >> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. >> >> Results: >> Without hinting = 3 >> With hinting = 5 >> >> 2. Hackbench >> Guest Memory = 5 GB >> Number of cores = 4 >> Number of tasks Time with Hinting Time without Hinting >> 4000 19.540 17.818 >> >>
On Wed, Mar 6, 2019 at 11:00 AM David Hildenbrand <david@redhat.com> wrote: > > On 06.03.19 19:43, Michael S. Tsirkin wrote: > > On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: > >>>> Here are the results: > >>>> > >>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with > >>>> total memory of 15GB and no swap. In each of the guest, memhog is run > >>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by > >>>> using Free command. > >>>> > >>>> Without Hinting: > >>>> Time of execution Host used memory > >>>> Guest 1: 45 seconds 5.4 GB > >>>> Guest 2: 45 seconds 10 GB > >>>> Guest 3: 1 minute 15 GB > >>>> > >>>> With Hinting: > >>>> Time of execution Host used memory > >>>> Guest 1: 49 seconds 2.4 GB > >>>> Guest 2: 40 seconds 4.3 GB > >>>> Guest 3: 50 seconds 6.3 GB > >>> OK so no improvement. > >> If we are looking in terms of memory we are getting back from the guest, > >> then there is an improvement. However, if we are looking at the > >> improvement in terms of time of execution of memhog then yes there is none. > > > > Yes but the way I see it you can't overcommit this unused memory > > since guests can start using it at any time. You timed it carefully > > such that this does not happen, but what will cause this timing on real > > guests? > > Whenever you overcommit you will need backup swap. There is no way > around it. It just makes the probability of you having to go to disk > less likely. > > If you assume that all of your guests will be using all of their memory > all the time, you don't have to think about overcommiting memory in the > first place. But this is not what we usually have. Right, but the general idea is that free page hinting allows us to avoid having to use the swap if we are hinting the pages as unused. The general assumption we are working with is that some percentage of the VMs are unused most of the time so you can share those resources between multiple VMs and have them free those up normally. If we can reduce swap usage we can improve overall performance and that was what I was pointing out with my test. I had also done something similar to what Nitesh was doing with his original test where I had launched 8 VMs with 8GB of memory per VM on a system with 32G of RAM and only 4G of swap. In that setup I could keep a couple VMs busy at a time without issues, and obviously without the patch I just started to OOM qemu instances and could only have 4 VMs at a time running at maximum.
On 06.03.19 20:08, Alexander Duyck wrote: > On Wed, Mar 6, 2019 at 11:00 AM David Hildenbrand <david@redhat.com> wrote: >> >> On 06.03.19 19:43, Michael S. Tsirkin wrote: >>> On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: >>>>>> Here are the results: >>>>>> >>>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >>>>>> total memory of 15GB and no swap. In each of the guest, memhog is run >>>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by >>>>>> using Free command. >>>>>> >>>>>> Without Hinting: >>>>>> Time of execution Host used memory >>>>>> Guest 1: 45 seconds 5.4 GB >>>>>> Guest 2: 45 seconds 10 GB >>>>>> Guest 3: 1 minute 15 GB >>>>>> >>>>>> With Hinting: >>>>>> Time of execution Host used memory >>>>>> Guest 1: 49 seconds 2.4 GB >>>>>> Guest 2: 40 seconds 4.3 GB >>>>>> Guest 3: 50 seconds 6.3 GB >>>>> OK so no improvement. >>>> If we are looking in terms of memory we are getting back from the guest, >>>> then there is an improvement. However, if we are looking at the >>>> improvement in terms of time of execution of memhog then yes there is none. >>> >>> Yes but the way I see it you can't overcommit this unused memory >>> since guests can start using it at any time. You timed it carefully >>> such that this does not happen, but what will cause this timing on real >>> guests? >> >> Whenever you overcommit you will need backup swap. There is no way >> around it. It just makes the probability of you having to go to disk >> less likely. >> >> If you assume that all of your guests will be using all of their memory >> all the time, you don't have to think about overcommiting memory in the >> first place. But this is not what we usually have. > > Right, but the general idea is that free page hinting allows us to > avoid having to use the swap if we are hinting the pages as unused. > The general assumption we are working with is that some percentage of > the VMs are unused most of the time so you can share those resources > between multiple VMs and have them free those up normally. Yes, similar to VCPU yielding or playin scheduling when the VCPU is spleeping. Instead of busy looping, hand over the resource to somebody who can actually make use of it. > > If we can reduce swap usage we can improve overall performance and > that was what I was pointing out with my test. I had also done > something similar to what Nitesh was doing with his original test > where I had launched 8 VMs with 8GB of memory per VM on a system with > 32G of RAM and only 4G of swap. In that setup I could keep a couple > VMs busy at a time without issues, and obviously without the patch I > just started to OOM qemu instances and could only have 4 VMs at a > time running at maximum. While these are nice experiments (especially to showcase reduced swap usage!), I would not suggest to use 4GB of swap on a x2 overcomited system (32GB overcommited). Disks are so cheap nowadays that one does not have to play with fire. But yes, reducing swap usage implies overall system performance (unless the hinting is terribly slow :) ). Reducing swap usage, not swap space :)
On Wed, Mar 6, 2019 at 11:18 AM David Hildenbrand <david@redhat.com> wrote: > > On 06.03.19 20:08, Alexander Duyck wrote: > > On Wed, Mar 6, 2019 at 11:00 AM David Hildenbrand <david@redhat.com> wrote: > >> > >> On 06.03.19 19:43, Michael S. Tsirkin wrote: > >>> On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: > >>>>>> Here are the results: > >>>>>> > >>>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with > >>>>>> total memory of 15GB and no swap. In each of the guest, memhog is run > >>>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by > >>>>>> using Free command. > >>>>>> > >>>>>> Without Hinting: > >>>>>> Time of execution Host used memory > >>>>>> Guest 1: 45 seconds 5.4 GB > >>>>>> Guest 2: 45 seconds 10 GB > >>>>>> Guest 3: 1 minute 15 GB > >>>>>> > >>>>>> With Hinting: > >>>>>> Time of execution Host used memory > >>>>>> Guest 1: 49 seconds 2.4 GB > >>>>>> Guest 2: 40 seconds 4.3 GB > >>>>>> Guest 3: 50 seconds 6.3 GB > >>>>> OK so no improvement. > >>>> If we are looking in terms of memory we are getting back from the guest, > >>>> then there is an improvement. However, if we are looking at the > >>>> improvement in terms of time of execution of memhog then yes there is none. > >>> > >>> Yes but the way I see it you can't overcommit this unused memory > >>> since guests can start using it at any time. You timed it carefully > >>> such that this does not happen, but what will cause this timing on real > >>> guests? > >> > >> Whenever you overcommit you will need backup swap. There is no way > >> around it. It just makes the probability of you having to go to disk > >> less likely. > >> > >> If you assume that all of your guests will be using all of their memory > >> all the time, you don't have to think about overcommiting memory in the > >> first place. But this is not what we usually have. > > > > Right, but the general idea is that free page hinting allows us to > > avoid having to use the swap if we are hinting the pages as unused. > > The general assumption we are working with is that some percentage of > > the VMs are unused most of the time so you can share those resources > > between multiple VMs and have them free those up normally. > > Yes, similar to VCPU yielding or playin scheduling when the VCPU is > spleeping. Instead of busy looping, hand over the resource to somebody > who can actually make use of it. > > > > > If we can reduce swap usage we can improve overall performance and > > that was what I was pointing out with my test. I had also done > > something similar to what Nitesh was doing with his original test > > where I had launched 8 VMs with 8GB of memory per VM on a system with > > 32G of RAM and only 4G of swap. In that setup I could keep a couple > > VMs busy at a time without issues, and obviously without the patch I > > just started to OOM qemu instances and could only have 4 VMs at a > > time running at maximum. > > While these are nice experiments (especially to showcase reduced swap > usage!), I would not suggest to use 4GB of swap on a x2 overcomited > system (32GB overcommited). Disks are so cheap nowadays that one does > not have to play with fire. Right. The only reason for using 4G is because the system normally has 128G of RAM available and I didn't really think I would need swap for the system when I originally configured it. > But yes, reducing swap usage implies overall system performance (unless > the hinting is terribly slow :) ). Reducing swap usage, not swap space :) Right. 
Also, swap is really a necessity if we are going to look at things like MADV_FREE, as I have not seen us actually start to free up resources until we put some pressure on swap.
On 3/6/19 2:24 PM, Alexander Duyck wrote: > On Wed, Mar 6, 2019 at 11:18 AM David Hildenbrand <david@redhat.com> wrote: >> On 06.03.19 20:08, Alexander Duyck wrote: >>> On Wed, Mar 6, 2019 at 11:00 AM David Hildenbrand <david@redhat.com> wrote: >>>> On 06.03.19 19:43, Michael S. Tsirkin wrote: >>>>> On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: >>>>>>>> Here are the results: >>>>>>>> >>>>>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >>>>>>>> total memory of 15GB and no swap. In each of the guest, memhog is run >>>>>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by >>>>>>>> using Free command. >>>>>>>> >>>>>>>> Without Hinting: >>>>>>>> Time of execution Host used memory >>>>>>>> Guest 1: 45 seconds 5.4 GB >>>>>>>> Guest 2: 45 seconds 10 GB >>>>>>>> Guest 3: 1 minute 15 GB >>>>>>>> >>>>>>>> With Hinting: >>>>>>>> Time of execution Host used memory >>>>>>>> Guest 1: 49 seconds 2.4 GB >>>>>>>> Guest 2: 40 seconds 4.3 GB >>>>>>>> Guest 3: 50 seconds 6.3 GB >>>>>>> OK so no improvement. >>>>>> If we are looking in terms of memory we are getting back from the guest, >>>>>> then there is an improvement. However, if we are looking at the >>>>>> improvement in terms of time of execution of memhog then yes there is none. >>>>> Yes but the way I see it you can't overcommit this unused memory >>>>> since guests can start using it at any time. You timed it carefully >>>>> such that this does not happen, but what will cause this timing on real >>>>> guests? >>>> Whenever you overcommit you will need backup swap. There is no way >>>> around it. It just makes the probability of you having to go to disk >>>> less likely. >>>> >>>> If you assume that all of your guests will be using all of their memory >>>> all the time, you don't have to think about overcommiting memory in the >>>> first place. But this is not what we usually have. >>> Right, but the general idea is that free page hinting allows us to >>> avoid having to use the swap if we are hinting the pages as unused. >>> The general assumption we are working with is that some percentage of >>> the VMs are unused most of the time so you can share those resources >>> between multiple VMs and have them free those up normally. >> Yes, similar to VCPU yielding or playin scheduling when the VCPU is >> spleeping. Instead of busy looping, hand over the resource to somebody >> who can actually make use of it. >> >>> If we can reduce swap usage we can improve overall performance and >>> that was what I was pointing out with my test. I had also done >>> something similar to what Nitesh was doing with his original test >>> where I had launched 8 VMs with 8GB of memory per VM on a system with >>> 32G of RAM and only 4G of swap. In that setup I could keep a couple >>> VMs busy at a time without issues, and obviously without the patch I >>> just started to OOM qemu instances and could only have 4 VMs at a >>> time running at maximum. >> While these are nice experiments (especially to showcase reduced swap >> usage!), I would not suggest to use 4GB of swap on a x2 overcomited >> system (32GB overcommited). Disks are so cheap nowadays that one does >> not have to play with fire. > Right. The only reason for using 4G is because the system normally has > 128G of RAM available and I didn't really think I would need swap for > the system when I originally configured it. > >> But yes, reducing swap usage implies overall system performance (unless >> the hinting is terribly slow :) ). 
>> Reducing swap usage, not swap space :)
> Right. Also, swap is really a necessity if we are going to look at
> things like MADV_FREE, as I have not seen us actually start to free up
> resources until we put some pressure on swap.

I agree, in order to see the effect of MADV_FREE we may have to use swap
(it doesn't have to be huge). About Michael's comment, if the guest is
consistently under memory pressure then we may not get anything back in
the host at all during this time.
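Since MADV_DONTNEED vs. MADV_FREE keeps coming up here (it is also a pending item in the cover letter): MADV_DONTNEED discards the backing pages immediately, while MADV_FREE only marks them lazily reclaimable, so nothing is actually handed back until the host itself comes under memory pressure, which is why swap pressure is needed to observe its effect. The standalone snippet below just demonstrates the two advice values on an anonymous mapping; it is an illustration, not the QEMU-side implementation.

/* Demonstrates the two madvise() advice values discussed above. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 256UL << 20;   /* 256 MB, arbitrary */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(buf, 0xa5, len);     /* fault everything in; RSS grows */

    /*
     * MADV_DONTNEED: the range is dropped right away and the next touch
     * faults in fresh zero pages.  MADV_FREE (Linux >= 4.5): the pages
     * are only reclaimed when the kernel needs the memory, so RSS stays
     * high until there is memory pressure.
     */
    if (madvise(buf, len, MADV_DONTNEED))   /* or MADV_FREE */
        perror("madvise");

    munmap(buf, len);
    return 0;
}

Watching RSS of this process (e.g. with free or /proc/self/status) after each advice value shows exactly the trade-off being debated: immediate reclaim versus cheaper, pressure-driven reclaim.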
On Wed, Mar 06, 2019 at 07:59:57PM +0100, David Hildenbrand wrote: > On 06.03.19 19:43, Michael S. Tsirkin wrote: > > On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: > >>>> Here are the results: > >>>> > >>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with > >>>> total memory of 15GB and no swap. In each of the guest, memhog is run > >>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by > >>>> using Free command. > >>>> > >>>> Without Hinting: > >>>> Time of execution Host used memory > >>>> Guest 1: 45 seconds 5.4 GB > >>>> Guest 2: 45 seconds 10 GB > >>>> Guest 3: 1 minute 15 GB > >>>> > >>>> With Hinting: > >>>> Time of execution Host used memory > >>>> Guest 1: 49 seconds 2.4 GB > >>>> Guest 2: 40 seconds 4.3 GB > >>>> Guest 3: 50 seconds 6.3 GB > >>> OK so no improvement. > >> If we are looking in terms of memory we are getting back from the guest, > >> then there is an improvement. However, if we are looking at the > >> improvement in terms of time of execution of memhog then yes there is none. > > > > Yes but the way I see it you can't overcommit this unused memory > > since guests can start using it at any time. You timed it carefully > > such that this does not happen, but what will cause this timing on real > > guests? > > Whenever you overcommit you will need backup swap. Right and the point of hinting is that pages can just be discarded and not end up in swap. Point is you should be able to see the gain. Hinting patches cost some CPU so we need to know whether they cost too much. How much is too much? When the cost is bigger than benefit. But we can't compare CPU cycles to bytes. So we need to benchmark everything in terms of cycles. > There is no way > around it. It just makes the probability of you having to go to disk > less likely. Right and let's quantify this. Does this result in net gain or loss? > If you assume that all of your guests will be using all of their memory > all the time, you don't have to think about overcommiting memory in the > first place. But this is not what we usually have. Right and swap is there to support overcommit. However it was felt that hinting can be faster since it avoids IO involved in swap. > > > > So the real reason to want this is to avoid need for writeback on free > > pages. > > > > Right? > > -- > > Thanks, > > David / dhildenb
On 06.03.19 21:32, Michael S. Tsirkin wrote: > On Wed, Mar 06, 2019 at 07:59:57PM +0100, David Hildenbrand wrote: >> On 06.03.19 19:43, Michael S. Tsirkin wrote: >>> On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: >>>>>> Here are the results: >>>>>> >>>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >>>>>> total memory of 15GB and no swap. In each of the guest, memhog is run >>>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by >>>>>> using Free command. >>>>>> >>>>>> Without Hinting: >>>>>> Time of execution Host used memory >>>>>> Guest 1: 45 seconds 5.4 GB >>>>>> Guest 2: 45 seconds 10 GB >>>>>> Guest 3: 1 minute 15 GB >>>>>> >>>>>> With Hinting: >>>>>> Time of execution Host used memory >>>>>> Guest 1: 49 seconds 2.4 GB >>>>>> Guest 2: 40 seconds 4.3 GB >>>>>> Guest 3: 50 seconds 6.3 GB >>>>> OK so no improvement. >>>> If we are looking in terms of memory we are getting back from the guest, >>>> then there is an improvement. However, if we are looking at the >>>> improvement in terms of time of execution of memhog then yes there is none. >>> >>> Yes but the way I see it you can't overcommit this unused memory >>> since guests can start using it at any time. You timed it carefully >>> such that this does not happen, but what will cause this timing on real >>> guests? >> >> Whenever you overcommit you will need backup swap. > > Right and the point of hinting is that pages can just be > discarded and not end up in swap. > > > Point is you should be able to see the gain. > > Hinting patches cost some CPU so we need to know whether > they cost too much. How much is too much? When the cost > is bigger than benefit. But we can't compare CPU cycles > to bytes. So we need to benchmark everything in terms of > cycles. > >> There is no way >> around it. It just makes the probability of you having to go to disk >> less likely. > > > Right and let's quantify this. Does this result in net gain or loss? Yes, I am totally with you. But if it is a net benefit heavily depends on the setup. E.g. what kind of storage used for the swap, how fast, is the same disk also used for other I/O ... Also, CPU is a totally different resource than I/O. While you might have plenty of CPU cycles to spare, your I/O throughput might already be limited. Same goes into the other direction. So it might not be as easy as comparing two numbers. It really depends on the setup. Well, not completely true, with 0% CPU overhead we would have a clear winner with hinting ;) > > >> If you assume that all of your guests will be using all of their memory >> all the time, you don't have to think about overcommiting memory in the >> first place. But this is not what we usually have. > > Right and swap is there to support overcommit. However it > was felt that hinting can be faster since it avoids IO > involved in swap. Feels like it, I/O is prone to be slow.
On Wed, Mar 6, 2019 at 11:07 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > > > On 3/6/19 1:00 PM, Alexander Duyck wrote: > > On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. > >> > >> Benefit: > >> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). > >> > >> Changelog in v9: > >> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. > >> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. > >> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. > > Without a kthread this has the potential to get really ugly really > > fast. If we are going to run asynchronously we should probably be > > truly asynchonous and just place a few pieces of data in the page that > > a worker thread can use to identify which pages have been hinted and > > which pages have not. > > Can you please explain what do you mean by truly asynchronous? > > With this implementation also I am not reporting the pages synchronously. The problem is you are making it pseudo synchronous by having to push pages off to a side buffer aren't you? In my mind we should be able to have the page hinting go on with little to no interference with existing page allocation and freeing. > > Then we can have that one thread just walking > > through the zone memory pulling out fixed size pieces at a time and > > providing hints on that. By doing that we avoid the potential of > > creating a batch of pages that eat up most of the system memory. > > > >> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. > >> * Dynamically allocated space is used to hold the isolated guest free pages. > > I have concerns that doing this per CPU and allocating memory > > dynamically can result in you losing a significant amount of memory as > > it sits waiting to be hinted. > It should not as the buddy will keep merging the pages and we are only > capturing MAX_ORDER - 1. > This was the issue with the last patch-series when I was capturing all > order pages resulting in the per-cpu array to be filled with lower order > pages. > > > >> * All the pages are reported asynchronously to the host via virtio driver. > >> * Pages are returned back to the guest buddy free list only when the host response is received. > > I have been thinking about this. Instead of stealing the page couldn't > > you simply flag it that there is a hint in progress and simply wait in > > arch_alloc_page until the hint has been processed? > With the flag, I am assuming you mean to block the allocation until > hinting is going on, which is an issue. That was one of the issues > discussed earlier which I wanted to solve with this implementation. With the flag we would allow the allocation, but would have to synchronize with the hinting at that point. I got the idea from the way the s390 code works. They have both an arch_free_page and an arch_alloc_page. 
If I understand correctly the arch_alloc_page is what is meant to handle the case of a page that has been marked for hinting, but may not have been hinted on yet. My thought for now is to keep it simple and use a page flag to indicate that a page is currently pending a hint. We should be able to spin in such a case and it would probably still perform better than a solution where we would not have the memory available and possibly be under memory pressure. > > The problem is in > > stealing pages you are going to introduce false OOM issues when the > > memory isn't available because it is being hinted on. > I think this situation will arise when the guest is under memory > pressure. In such situations any attempt to perform isolation will > anyways fail and we may not be reporting anything at that time. What I want to avoid is the scenario where an application grabs a large amount of memory, then frees said memory, and we are sitting on it for some time because we decide to try and hint on the large chunk. By processing this sometime after the pages are sent to the buddy allocator in a separate thread, and by processing a small fixed window of memory at a time we can avoid making freeing memory expensive, and still provide the hints in a reasonable time frame. > > > >> Pending items: > >> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. > >> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) > >> * Compare reporting free pages via vring with vhost. > >> * Decide between MADV_DONTNEED and MADV_FREE. > >> * Analyze overall performance impact due to guest free page hinting. > >> * Come up with proper/traceable error-message/logs. > > I'll try applying these patches and see if I can reproduce the results > > you reported. > Thanks. Let me know if you run into any issues. > > With the last patch set I couldn't reproduce the results > > as you reported them. > If I remember correctly then the last time you only tried with multiple > vcpus and not with 1 vcpu. I had tried 1 vcpu, however I ended up running into some other issues that made it difficult to even boot the system last week. > > It has me wondering if you were somehow seeing > > the effects of a balloon instead of the actual memory hints as I > > couldn't find any evidence of the memory ever actually being freed > > back by the hints functionality. > > Can you please elaborate what kind of evidence you are looking for? > > I did trace the hints on the QEMU/host side. It looks like the new patches are working as I am seeing the memory freeing occurring this time around. Although it looks like this is still generating traces from free_pcpages_bulk if I enable multiple VCPUs: [ 175.823539] list_add corruption. next->prev should be prev (ffff947c7ffd61e0), but was ffffc7a29f9e0008. (next=ffffc7a29f4c0008). [ 175.825978] ------------[ cut here ]------------ [ 175.826889] kernel BUG at lib/list_debug.c:25! 
[  175.827766] invalid opcode: 0000 [#1] SMP PTI
[  175.828621] CPU: 5 PID: 1344 Comm: page_fault1_thr Not tainted 5.0.0-next-20190306-baseline+ #76
[  175.830312] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
[  175.831885] RIP: 0010:__list_add_valid+0x35/0x70
[  175.832784] Code: 18 48 8b 32 48 39 f0 75 39 48 39 c7 74 1e 48 39 fa 74 19 b8 01 00 00 00 c3 48 89 c1 48 c7 c7 80 b5 0f a9 31 c0 e8 8f aa c8 ff <0f> 0b 48 89 c1 48 89 fe 31 c0 48 c7 c7 30 b6 0f a9 e8 79 aa c8 ff
[  175.836379] RSP: 0018:ffffa717c40839b0 EFLAGS: 00010046
[  175.837394] RAX: 0000000000000075 RBX: ffff947c7ffd61e0 RCX: 0000000000000000
[  175.838779] RDX: 0000000000000000 RSI: ffff947c5f957188 RDI: ffff947c5f957188
[  175.840162] RBP: ffff947c7ffd61d0 R08: 000000000000026f R09: 0000000000000005
[  175.841539] R10: 0000000000000000 R11: ffffa717c4083730 R12: ffffc7a29f260008
[  175.842932] R13: ffff947c7ffd5d00 R14: ffffc7a29f4c0008 R15: ffffc7a29f260000
[  175.844319] FS:  0000000000000000(0000) GS:ffff947c5f940000(0000) knlGS:0000000000000000
[  175.845896] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  175.847009] CR2: 00007fffe3421000 CR3: 000000051220e006 CR4: 0000000000160ee0
[  175.848390] Call Trace:
[  175.848896]  free_pcppages_bulk+0x4bc/0x6a0
[  175.849723]  free_unref_page_list+0x10d/0x190
[  175.850567]  release_pages+0x103/0x4a0
[  175.851313]  tlb_flush_mmu_free+0x36/0x50
[  175.852105]  unmap_page_range+0x963/0xd50
[  175.852897]  unmap_vmas+0x62/0xc0
[  175.853549]  exit_mmap+0xb5/0x1a0
[  175.854205]  mmput+0x5b/0x120
[  175.854794]  do_exit+0x273/0xc30
[  175.855426]  ? free_unref_page_commit+0x85/0xf0
[  175.856312]  do_group_exit+0x39/0xa0
[  175.857018]  get_signal+0x172/0x7c0
[  175.857703]  do_signal+0x36/0x620
[  175.858355]  ? percpu_counter_add_batch+0x4b/0x60
[  175.859280]  ? __do_munmap+0x288/0x390
[  175.860020]  exit_to_usermode_loop+0x4c/0xa8
[  175.860859]  do_syscall_64+0x152/0x170
[  175.861595]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  175.862586] RIP: 0033:0x7ffff76a8ec7
[  175.863292] Code: Bad RIP value.
[  175.863928] RSP: 002b:00007ffff4422eb8 EFLAGS: 00000212 ORIG_RAX: 000000000000000b
[  175.865396] RAX: 0000000000000000 RBX: 00007ffff7ff7280 RCX: 00007ffff76a8ec7
[  175.866799] RDX: 00007fffe3422000 RSI: 0000000008000000 RDI: 00007fffdb422000
[  175.868194] RBP: 0000000000001000 R08: ffffffffffffffff R09: 0000000000000000
[  175.869582] R10: 0000000000000022 R11: 0000000000000212 R12: 00007ffff4422fc0
[  175.870984] R13: 0000000000000001 R14: 00007fffffffc1b0 R15: 00007ffff44239c0
[  175.872350] Modules linked in: ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat ip6table_mangle ip6table_raw ip6table_security iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 iptable_mangle iptable_raw iptable_security ebtable_filter ebtables ip6table_filter ip6_tables sunrpc sb_edac crct10dif_pclmul crc32_pclmul ghash_clmulni_intel kvm_intel kvm ppdev irqbypass parport_pc parport virtio_balloon pcspkr i2c_piix4 joydev xfs libcrc32c cirrus drm_kms_helper ttm drm e1000 crc32c_intel virtio_blk serio_raw ata_generic floppy pata_acpi qemu_fw_cfg
[  175.883153] ---[ end trace 5b67f12a67d1f373 ]---

I should be able to rebuild the kernels/qemu and test this patch set
over the next day or two.

Thanks.

- Alex
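For reference, arch_free_page() and arch_alloc_page() are existing hooks (normally empty stubs unless the architecture defines HAVE_ARCH_FREE_PAGE / HAVE_ARCH_ALLOC_PAGE) that the page allocator calls as pages are freed and allocated, which is what makes the "mark the page and wait in arch_alloc_page()" idea possible. The toy model below shows only the synchronization shape of that idea using a single atomic flag; all names are invented, and this is neither the s390 code nor anything from the posted series.

/*
 * Toy model of the "flag the page and spin in arch_alloc_page()" idea:
 * the free path marks a page as having a hint in flight instead of
 * removing it from the buddy, and the allocation path waits the hint
 * out.  Names are invented for illustration.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
    atomic_bool hint_pending;   /* stands in for a page flag bit */
};

/* arch_free_page()-like step: mark the page and queue it for hinting. */
static void hint_begin(struct fake_page *page)
{
    atomic_store(&page->hint_pending, true);
    /* ...hand the page to the hinting/virtio machinery here... */
}

/* Completion path (e.g. the virtio callback): the page is usable again. */
static void hint_end(struct fake_page *page)
{
    atomic_store(&page->hint_pending, false);
}

/* arch_alloc_page()-like step: wait out a hint still in progress. */
static void wait_for_hint(struct fake_page *page)
{
    while (atomic_load(&page->hint_pending))
        ;                       /* spin; expected to be rare and short */
}

int main(void)
{
    struct fake_page page;

    atomic_init(&page.hint_pending, false);
    hint_begin(&page);
    hint_end(&page);            /* would normally complete asynchronously */
    wait_for_hint(&page);
    printf("allocation proceeds once the hint has completed\n");
    return 0;
}

Whether spinning there is acceptable depends entirely on how long a hint stays in flight, which is the same cost/benefit question raised elsewhere in this thread.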
On Wed, Mar 06, 2019 at 10:40:57PM +0100, David Hildenbrand wrote: > On 06.03.19 21:32, Michael S. Tsirkin wrote: > > On Wed, Mar 06, 2019 at 07:59:57PM +0100, David Hildenbrand wrote: > >> On 06.03.19 19:43, Michael S. Tsirkin wrote: > >>> On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: > >>>>>> Here are the results: > >>>>>> > >>>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with > >>>>>> total memory of 15GB and no swap. In each of the guest, memhog is run > >>>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by > >>>>>> using Free command. > >>>>>> > >>>>>> Without Hinting: > >>>>>> Time of execution Host used memory > >>>>>> Guest 1: 45 seconds 5.4 GB > >>>>>> Guest 2: 45 seconds 10 GB > >>>>>> Guest 3: 1 minute 15 GB > >>>>>> > >>>>>> With Hinting: > >>>>>> Time of execution Host used memory > >>>>>> Guest 1: 49 seconds 2.4 GB > >>>>>> Guest 2: 40 seconds 4.3 GB > >>>>>> Guest 3: 50 seconds 6.3 GB > >>>>> OK so no improvement. > >>>> If we are looking in terms of memory we are getting back from the guest, > >>>> then there is an improvement. However, if we are looking at the > >>>> improvement in terms of time of execution of memhog then yes there is none. > >>> > >>> Yes but the way I see it you can't overcommit this unused memory > >>> since guests can start using it at any time. You timed it carefully > >>> such that this does not happen, but what will cause this timing on real > >>> guests? > >> > >> Whenever you overcommit you will need backup swap. > > > > Right and the point of hinting is that pages can just be > > discarded and not end up in swap. > > > > > > Point is you should be able to see the gain. > > > > Hinting patches cost some CPU so we need to know whether > > they cost too much. How much is too much? When the cost > > is bigger than benefit. But we can't compare CPU cycles > > to bytes. So we need to benchmark everything in terms of > > cycles. > > > >> There is no way > >> around it. It just makes the probability of you having to go to disk > >> less likely. > > > > > > Right and let's quantify this. Does this result in net gain or loss? > > Yes, I am totally with you. But if it is a net benefit heavily depends > on the setup. E.g. what kind of storage used for the swap, how fast, is > the same disk also used for other I/O ... > > Also, CPU is a totally different resource than I/O. While you might have > plenty of CPU cycles to spare, your I/O throughput might already be > limited. Same goes into the other direction. > > So it might not be as easy as comparing two numbers. It really depends > on the setup. Well, not completely true, with 0% CPU overhead we would > have a clear winner with hinting ;) I mean users need to know about this too. Are these hinting patches a gain: - on zram - on ssd - on a rotating disk - none of the above ? If users don't know when would they enable hinting? Close to one is going to try all possible configurations, test exhaustively and find an optimal default for their workload. So it's our job to figure it out and provide guidance. > > > > > > >> If you assume that all of your guests will be using all of their memory > >> all the time, you don't have to think about overcommiting memory in the > >> first place. But this is not what we usually have. > > > > Right and swap is there to support overcommit. However it > > was felt that hinting can be faster since it avoids IO > > involved in swap. > > Feels like it, I/O is prone to be slow. 
> > > --
> > Thanks,
> > David / dhildenb

OK so it should be measurable.
On Wed, Mar 6, 2019 at 2:18 PM Michael S. Tsirkin <mst@redhat.com> wrote: > > On Wed, Mar 06, 2019 at 10:40:57PM +0100, David Hildenbrand wrote: > > On 06.03.19 21:32, Michael S. Tsirkin wrote: > > > On Wed, Mar 06, 2019 at 07:59:57PM +0100, David Hildenbrand wrote: > > >> On 06.03.19 19:43, Michael S. Tsirkin wrote: > > >>> On Wed, Mar 06, 2019 at 01:30:14PM -0500, Nitesh Narayan Lal wrote: > > >>>>>> Here are the results: > > >>>>>> > > >>>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with > > >>>>>> total memory of 15GB and no swap. In each of the guest, memhog is run > > >>>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by > > >>>>>> using Free command. > > >>>>>> > > >>>>>> Without Hinting: > > >>>>>> Time of execution Host used memory > > >>>>>> Guest 1: 45 seconds 5.4 GB > > >>>>>> Guest 2: 45 seconds 10 GB > > >>>>>> Guest 3: 1 minute 15 GB > > >>>>>> > > >>>>>> With Hinting: > > >>>>>> Time of execution Host used memory > > >>>>>> Guest 1: 49 seconds 2.4 GB > > >>>>>> Guest 2: 40 seconds 4.3 GB > > >>>>>> Guest 3: 50 seconds 6.3 GB > > >>>>> OK so no improvement. > > >>>> If we are looking in terms of memory we are getting back from the guest, > > >>>> then there is an improvement. However, if we are looking at the > > >>>> improvement in terms of time of execution of memhog then yes there is none. > > >>> > > >>> Yes but the way I see it you can't overcommit this unused memory > > >>> since guests can start using it at any time. You timed it carefully > > >>> such that this does not happen, but what will cause this timing on real > > >>> guests? > > >> > > >> Whenever you overcommit you will need backup swap. > > > > > > Right and the point of hinting is that pages can just be > > > discarded and not end up in swap. > > > > > > > > > Point is you should be able to see the gain. > > > > > > Hinting patches cost some CPU so we need to know whether > > > they cost too much. How much is too much? When the cost > > > is bigger than benefit. But we can't compare CPU cycles > > > to bytes. So we need to benchmark everything in terms of > > > cycles. > > > > > >> There is no way > > >> around it. It just makes the probability of you having to go to disk > > >> less likely. > > > > > > > > > Right and let's quantify this. Does this result in net gain or loss? > > > > Yes, I am totally with you. But if it is a net benefit heavily depends > > on the setup. E.g. what kind of storage used for the swap, how fast, is > > the same disk also used for other I/O ... > > > > Also, CPU is a totally different resource than I/O. While you might have > > plenty of CPU cycles to spare, your I/O throughput might already be > > limited. Same goes into the other direction. > > > > So it might not be as easy as comparing two numbers. It really depends > > on the setup. Well, not completely true, with 0% CPU overhead we would > > have a clear winner with hinting ;) > > I mean users need to know about this too. > > Are these hinting patches a gain: > - on zram > - on ssd > - on a rotating disk > - none of the above > ? > > If users don't know when would they enable hinting? > > Close to one is going to try all possible configurations, test > exhaustively and find an optimal default for their workload. > So it's our job to figure it out and provide guidance. Right. I think for now I will stick to testing on what I have which is a SSD for swap, and no-overcommit for the "non of the above" case. 
BTW it looks like this patch set introduced a pretty heavy penalty for the no-overcommit case. For a 32G VM with no overcommit a 32G memhog test is now taking over 50 seconds whereas without the patch set I can complete the test in around 20 seconds. > > > > > > > > > > >> If you assume that all of your guests will be using all of their memory > > >> all the time, you don't have to think about overcommiting memory in the > > >> first place. But this is not what we usually have. > > > > > > Right and swap is there to support overcommit. However it > > > was felt that hinting can be faster since it avoids IO > > > involved in swap. > > > > Feels like it, I/O is prone to be slow. > > > > > > -- > > > > Thanks, > > > > David / dhildenb > > OK so should be measureable. > > -- > MST
On 3/6/19 5:05 PM, Alexander Duyck wrote: > On Wed, Mar 6, 2019 at 11:07 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> >> On 3/6/19 1:00 PM, Alexander Duyck wrote: >>> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>> >>>> Benefit: >>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>> >>>> Changelog in v9: >>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>> Without a kthread this has the potential to get really ugly really >>> fast. If we are going to run asynchronously we should probably be >>> truly asynchonous and just place a few pieces of data in the page that >>> a worker thread can use to identify which pages have been hinted and >>> which pages have not. >> Can you please explain what do you mean by truly asynchronous? >> >> With this implementation also I am not reporting the pages synchronously. > The problem is you are making it pseudo synchronous by having to push > pages off to a side buffer aren't you? In my mind we should be able to > have the page hinting go on with little to no interference with > existing page allocation and freeing. We have to opt one of the two options: 1. Block allocation by using a flag or acquire a lock to prevent the usage of pages we are hinting. 2. Remove the page set entirely from the buddy. (This is what I am doing right now) The reason I would prefer the second approach is that we are not blocking the allocation in any way and as we are only working with a smaller set of pages we should be fine. However, with the current approach as we are reporting asynchronously there is a chance that we end up hinting more than 2-3 times for a single workload run. In situation where this could lead to low memory condition in the guest, the hinting will anyways fail as the guest will not allow page isolation. I can possibly try and test the same to ensure that we don't get OOM due to hinting when the guest is under memory pressure. > >>> Then we can have that one thread just walking >>> through the zone memory pulling out fixed size pieces at a time and >>> providing hints on that. By doing that we avoid the potential of >>> creating a batch of pages that eat up most of the system memory. >>> >>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>> I have concerns that doing this per CPU and allocating memory >>> dynamically can result in you losing a significant amount of memory as >>> it sits waiting to be hinted. >> It should not as the buddy will keep merging the pages and we are only >> capturing MAX_ORDER - 1. 
>> This was the issue with the last patch-series when I was capturing all >> order pages resulting in the per-cpu array to be filled with lower order >> pages. >>>> * All the pages are reported asynchronously to the host via virtio driver. >>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>> I have been thinking about this. Instead of stealing the page couldn't >>> you simply flag it that there is a hint in progress and simply wait in >>> arch_alloc_page until the hint has been processed? >> With the flag, I am assuming you mean to block the allocation until >> hinting is going on, which is an issue. That was one of the issues >> discussed earlier which I wanted to solve with this implementation. > With the flag we would allow the allocation, but would have to > synchronize with the hinting at that point. I got the idea from the > way the s390 code works. They have both an arch_free_page and an > arch_alloc_page. If I understand correctly the arch_alloc_page is what > is meant to handle the case of a page that has been marked for > hinting, but may not have been hinted on yet. My thought for now is to > keep it simple and use a page flag to indicate that a page is > currently pending a hint. I am assuming this page flag will be located in the page structure. > We should be able to spin in such a case and > it would probably still perform better than a solution where we would > not have the memory available and possibly be under memory pressure. I had this same idea earlier. However, the thing about which I was not sure is if adding a flag in the page structure will be acceptable upstream. > >>> The problem is in >>> stealing pages you are going to introduce false OOM issues when the >>> memory isn't available because it is being hinted on. >> I think this situation will arise when the guest is under memory >> pressure. In such situations any attempt to perform isolation will >> anyways fail and we may not be reporting anything at that time. > What I want to avoid is the scenario where an application grabs a > large amount of memory, then frees said memory, and we are sitting on > it for some time because we decide to try and hint on the large chunk. I agree. > By processing this sometime after the pages are sent to the buddy > allocator in a separate thread, and by processing a small fixed window > of memory at a time we can avoid making freeing memory expensive, and > still provide the hints in a reasonable time frame. My impression is that the current window on which I am working may give issues for smaller size guests. But otherwise, we are already working with a smaller fixed window of memory. I can further restrict this to just 128 entries and test which would bring down the window of memory. Let me know what you think. > >>>> Pending items: >>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >>>> * Compare reporting free pages via vring with vhost. >>>> * Decide between MADV_DONTNEED and MADV_FREE. >>>> * Analyze overall performance impact due to guest free page hinting. >>>> * Come up with proper/traceable error-message/logs. >>> I'll try applying these patches and see if I can reproduce the results >>> you reported. >> Thanks. Let me know if you run into any issues. >>> With the last patch set I couldn't reproduce the results >>> as you reported them. 
>> If I remember correctly then the last time you only tried with multiple >> vcpus and not with 1 vcpu. > I had tried 1 vcpu, however I ended up running into some other issues > that made it difficult to even boot the system last week. > >>> It has me wondering if you were somehow seeing >>> the effects of a balloon instead of the actual memory hints as I >>> couldn't find any evidence of the memory ever actually being freed >>> back by the hints functionality. >> Can you please elaborate what kind of evidence you are looking for? >> >> I did trace the hints on the QEMU/host side. > It looks like the new patches are working as I am seeing the memory > freeing occurring this time around. Although it looks like this is > still generating traces from free_pcpages_bulk if I enable multiple > VCPUs: I am assuming with the changes you suggested you were able to run this patch-series. Is that correct? > > [ 175.823539] list_add corruption. next->prev should be prev > (ffff947c7ffd61e0), but was ffffc7a29f9e0008. (next=ffffc7a29f4c0008). > [ 175.825978] ------------[ cut here ]------------ > [ 175.826889] kernel BUG at lib/list_debug.c:25! > [ 175.827766] invalid opcode: 0000 [#1] SMP PTI > [ 175.828621] CPU: 5 PID: 1344 Comm: page_fault1_thr Not tainted > 5.0.0-next-20190306-baseline+ #76 > [ 175.830312] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), > BIOS Bochs 01/01/2011 > [ 175.831885] RIP: 0010:__list_add_valid+0x35/0x70 > [ 175.832784] Code: 18 48 8b 32 48 39 f0 75 39 48 39 c7 74 1e 48 39 > fa 74 19 b8 01 00 00 00 c3 48 89 c1 48 c7 c7 80 b5 0f a9 31 c0 e8 8f > aa c8 ff <0f> 0b 48 89 c1 48 89 fe 31 c0 48 c7 c7 30 b6 0f a9 e8 79 aa > c8 ff > [ 175.836379] RSP: 0018:ffffa717c40839b0 EFLAGS: 00010046 > [ 175.837394] RAX: 0000000000000075 RBX: ffff947c7ffd61e0 RCX: 0000000000000000 > [ 175.838779] RDX: 0000000000000000 RSI: ffff947c5f957188 RDI: ffff947c5f957188 > [ 175.840162] RBP: ffff947c7ffd61d0 R08: 000000000000026f R09: 0000000000000005 > [ 175.841539] R10: 0000000000000000 R11: ffffa717c4083730 R12: ffffc7a29f260008 > [ 175.842932] R13: ffff947c7ffd5d00 R14: ffffc7a29f4c0008 R15: ffffc7a29f260000 > [ 175.844319] FS: 0000000000000000(0000) GS:ffff947c5f940000(0000) > knlGS:0000000000000000 > [ 175.845896] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [ 175.847009] CR2: 00007fffe3421000 CR3: 000000051220e006 CR4: 0000000000160ee0 > [ 175.848390] Call Trace: > [ 175.848896] free_pcppages_bulk+0x4bc/0x6a0 > [ 175.849723] free_unref_page_list+0x10d/0x190 > [ 175.850567] release_pages+0x103/0x4a0 > [ 175.851313] tlb_flush_mmu_free+0x36/0x50 > [ 175.852105] unmap_page_range+0x963/0xd50 > [ 175.852897] unmap_vmas+0x62/0xc0 > [ 175.853549] exit_mmap+0xb5/0x1a0 > [ 175.854205] mmput+0x5b/0x120 > [ 175.854794] do_exit+0x273/0xc30 > [ 175.855426] ? free_unref_page_commit+0x85/0xf0 > [ 175.856312] do_group_exit+0x39/0xa0 > [ 175.857018] get_signal+0x172/0x7c0 > [ 175.857703] do_signal+0x36/0x620 > [ 175.858355] ? percpu_counter_add_batch+0x4b/0x60 > [ 175.859280] ? __do_munmap+0x288/0x390 > [ 175.860020] exit_to_usermode_loop+0x4c/0xa8 > [ 175.860859] do_syscall_64+0x152/0x170 > [ 175.861595] entry_SYSCALL_64_after_hwframe+0x44/0xa9 > [ 175.862586] RIP: 0033:0x7ffff76a8ec7 > [ 175.863292] Code: Bad RIP value. 
> [ 175.863928] RSP: 002b:00007ffff4422eb8 EFLAGS: 00000212 ORIG_RAX: > 000000000000000b > [ 175.865396] RAX: 0000000000000000 RBX: 00007ffff7ff7280 RCX: 00007ffff76a8ec7 > [ 175.866799] RDX: 00007fffe3422000 RSI: 0000000008000000 RDI: 00007fffdb422000 > [ 175.868194] RBP: 0000000000001000 R08: ffffffffffffffff R09: 0000000000000000 > [ 175.869582] R10: 0000000000000022 R11: 0000000000000212 R12: 00007ffff4422fc0 > [ 175.870984] R13: 0000000000000001 R14: 00007fffffffc1b0 R15: 00007ffff44239c0 > [ 175.872350] Modules linked in: ip6t_rpfilter ip6t_REJECT > nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat > ebtable_broute bridge stp llc ip6table_nat ip6table_mangle > ip6table_raw ip6table_security iptable_nat nf_nat nf_conntrack > nf_defrag_ipv6 nf_defrag_ipv4 iptable_mangle iptable_raw > iptable_security ebtable_filter ebtables ip6table_filter ip6_tables > sunrpc sb_edac crct10dif_pclmul crc32_pclmul ghash_clmulni_intel > kvm_intel kvm ppdev irqbypass parport_pc parport virtio_balloon pcspkr > i2c_piix4 joydev xfs libcrc32c cirrus drm_kms_helper ttm drm e1000 > crc32c_intel virtio_blk serio_raw ata_generic floppy pata_acpi > qemu_fw_cfg > [ 175.883153] ---[ end trace 5b67f12a67d1f373 ]--- > > I should be able to rebuild the kernels/qemu and test this patch set > over the next day or two. Thanks. > > Thanks. > > - Alex
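For readers following along, a rough sketch of the capture path Nitesh describes above (option 2: pull MAX_ORDER - 1 blocks out of the buddy and report them asynchronously). All identifiers here -- HINTING_CAPACITY, struct hint_cpu_state, guest_free_page_hint() -- are made up for illustration and are not the names used in the series:

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/percpu.h>

/* Illustrative capture structure; names and sizes are not from the series. */
#define HINTING_CAPACITY 16   /* entries of order MAX_ORDER - 1 */

struct hint_entry {
    unsigned long pfn;
    unsigned int order;
    int zone_idx;             /* kept so entries can be sorted per zone */
};

struct hint_cpu_state {
    struct hint_entry entry[HINTING_CAPACITY];
    unsigned int count;
};

static DEFINE_PER_CPU(struct hint_cpu_state, hint_state);

/*
 * Sketch of a hook called after a page of order MAX_ORDER - 1 has been
 * merged in the buddy.  Once the per-CPU array is full, the entries would
 * be sorted by zone, isolated under the zone lock and handed to the
 * virtio driver for asynchronous reporting.
 */
static void guest_free_page_hint(struct page *page, unsigned int order)
{
    struct hint_cpu_state *st;

    if (order < MAX_ORDER - 1)
        return;

    st = &get_cpu_var(hint_state);
    if (st->count < HINTING_CAPACITY) {
        st->entry[st->count].pfn = page_to_pfn(page);
        st->entry[st->count].order = order;
        st->entry[st->count].zone_idx = page_zonenum(page);
        st->count++;
    }
    /* when full: sort by zone, isolate, and kick the reporting path */
    put_cpu_var(hint_state);
}

The policy knobs debated in the thread live exactly here: how big the per-CPU array may grow and what happens to allocations while the captured blocks sit outside the buddy.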
On Thu, Mar 7, 2019 at 5:09 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > > > On 3/6/19 5:05 PM, Alexander Duyck wrote: > > On Wed, Mar 6, 2019 at 11:07 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >> > >> On 3/6/19 1:00 PM, Alexander Duyck wrote: > >>> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. > >>>> > >>>> Benefit: > >>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). > >>>> > >>>> Changelog in v9: > >>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. > >>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. > >>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. > >>> Without a kthread this has the potential to get really ugly really > >>> fast. If we are going to run asynchronously we should probably be > >>> truly asynchonous and just place a few pieces of data in the page that > >>> a worker thread can use to identify which pages have been hinted and > >>> which pages have not. > >> Can you please explain what do you mean by truly asynchronous? > >> > >> With this implementation also I am not reporting the pages synchronously. > > The problem is you are making it pseudo synchronous by having to push > > pages off to a side buffer aren't you? In my mind we should be able to > > have the page hinting go on with little to no interference with > > existing page allocation and freeing. > We have to opt one of the two options: > 1. Block allocation by using a flag or acquire a lock to prevent the > usage of pages we are hinting. > 2. Remove the page set entirely from the buddy. (This is what I am doing > right now) > > The reason I would prefer the second approach is that we are not > blocking the allocation in any way and as we are only working with a > smaller set of pages we should be fine. > However, with the current approach as we are reporting asynchronously > there is a chance that we end up hinting more than 2-3 times for a > single workload run. In situation where this could lead to low memory > condition in the guest, the hinting will anyways fail as the guest will > not allow page isolation. > I can possibly try and test the same to ensure that we don't get OOM due > to hinting when the guest is under memory pressure. So in either case you are essentially blocking allocation since the memory cannot be used. My concern is more with guaranteeing forward progress for as many CPUs as possible. With your current design you have one minor issue in that you aren't taking the lock to re-insert the pages back into the buddy allocator. When you add that step in it means you are going to be blocking allocation on that zone while you are reinserting the pages. Also right now you are using the calls to free_one_page to generate a list of hints where to search. 
I'm thinking that may not be the best approach since what we want to do is provide hints on idle free pages, not just pages that will be free for a short period of time. To that end what I think w may want to do is instead just walk the LRU list for a given zone/order in reverse order so that we can try to identify the pages that are most likely to be cold and unused and those are the first ones we want to be hinting on rather than the ones that were just freed. If we can look at doing something like adding a jiffies value to the page indicating when it was last freed we could even have a good point for determining when we should stop processing pages in a given zone/order list. In reality the approach wouldn't be too different from what you are doing now, the only real difference would be that we would just want to walk the LRU list for the given zone/order rather then pulling hints on what to free from the calls to free_one_page. In addition we would need to add a couple bits to indicate if the page has been hinted on, is in the middle of getting hinted on, and something such as the jiffies value I mentioned which we could use to determine how old the page is. > > > >>> Then we can have that one thread just walking > >>> through the zone memory pulling out fixed size pieces at a time and > >>> providing hints on that. By doing that we avoid the potential of > >>> creating a batch of pages that eat up most of the system memory. > >>> > >>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. > >>>> * Dynamically allocated space is used to hold the isolated guest free pages. > >>> I have concerns that doing this per CPU and allocating memory > >>> dynamically can result in you losing a significant amount of memory as > >>> it sits waiting to be hinted. > >> It should not as the buddy will keep merging the pages and we are only > >> capturing MAX_ORDER - 1. > >> This was the issue with the last patch-series when I was capturing all > >> order pages resulting in the per-cpu array to be filled with lower order > >> pages. > >>>> * All the pages are reported asynchronously to the host via virtio driver. > >>>> * Pages are returned back to the guest buddy free list only when the host response is received. > >>> I have been thinking about this. Instead of stealing the page couldn't > >>> you simply flag it that there is a hint in progress and simply wait in > >>> arch_alloc_page until the hint has been processed? > >> With the flag, I am assuming you mean to block the allocation until > >> hinting is going on, which is an issue. That was one of the issues > >> discussed earlier which I wanted to solve with this implementation. > > With the flag we would allow the allocation, but would have to > > synchronize with the hinting at that point. I got the idea from the > > way the s390 code works. They have both an arch_free_page and an > > arch_alloc_page. If I understand correctly the arch_alloc_page is what > > is meant to handle the case of a page that has been marked for > > hinting, but may not have been hinted on yet. My thought for now is to > > keep it simple and use a page flag to indicate that a page is > > currently pending a hint. > I am assuming this page flag will be located in the page structure. > > We should be able to spin in such a case and > > it would probably still perform better than a solution where we would > > not have the memory available and possibly be under memory pressure. 
> I had this same idea earlier. However, the thing about which I was not > sure is if adding a flag in the page structure will be acceptable upstream. > > > >>> The problem is in > >>> stealing pages you are going to introduce false OOM issues when the > >>> memory isn't available because it is being hinted on. > >> I think this situation will arise when the guest is under memory > >> pressure. In such situations any attempt to perform isolation will > >> anyways fail and we may not be reporting anything at that time. > > What I want to avoid is the scenario where an application grabs a > > large amount of memory, then frees said memory, and we are sitting on > > it for some time because we decide to try and hint on the large chunk. > I agree. > > By processing this sometime after the pages are sent to the buddy > > allocator in a separate thread, and by processing a small fixed window > > of memory at a time we can avoid making freeing memory expensive, and > > still provide the hints in a reasonable time frame. > > My impression is that the current window on which I am working may give > issues for smaller size guests. But otherwise, we are already working > with a smaller fixed window of memory. > > I can further restrict this to just 128 entries and test which would > bring down the window of memory. Let me know what you think. The problem is 128 entries is still pretty big when you consider you are working with 4M pages. If I am not mistaken that is a half gigabyte of memory. For lower order pages 128 would probably be fine, but with the higher order pages we may want to contain things to something smaller like 16MB to 64MB worth of memory. > > > >>>> Pending items: > >>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. > >>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) > >>>> * Compare reporting free pages via vring with vhost. > >>>> * Decide between MADV_DONTNEED and MADV_FREE. > >>>> * Analyze overall performance impact due to guest free page hinting. > >>>> * Come up with proper/traceable error-message/logs. > >>> I'll try applying these patches and see if I can reproduce the results > >>> you reported. > >> Thanks. Let me know if you run into any issues. > >>> With the last patch set I couldn't reproduce the results > >>> as you reported them. > >> If I remember correctly then the last time you only tried with multiple > >> vcpus and not with 1 vcpu. > > I had tried 1 vcpu, however I ended up running into some other issues > > that made it difficult to even boot the system last week. > > > >>> It has me wondering if you were somehow seeing > >>> the effects of a balloon instead of the actual memory hints as I > >>> couldn't find any evidence of the memory ever actually being freed > >>> back by the hints functionality. > >> Can you please elaborate what kind of evidence you are looking for? > >> > >> I did trace the hints on the QEMU/host side. > > It looks like the new patches are working as I am seeing the memory > > freeing occurring this time around. Although it looks like this is > > still generating traces from free_pcpages_bulk if I enable multiple > > VCPUs: > I am assuming with the changes you suggested you were able to run this > patch-series. Is that correct? Yes, I got it working by disabling SMP. I think I found and pointed out the issue in your other patch where you were using __free_one_page without holding the zone lock. 
> > > > [ 175.823539] list_add corruption. next->prev should be prev > > (ffff947c7ffd61e0), but was ffffc7a29f9e0008. (next=ffffc7a29f4c0008). > > [ 175.825978] ------------[ cut here ]------------ > > [ 175.826889] kernel BUG at lib/list_debug.c:25! > > [ 175.827766] invalid opcode: 0000 [#1] SMP PTI > > [ 175.828621] CPU: 5 PID: 1344 Comm: page_fault1_thr Not tainted > > 5.0.0-next-20190306-baseline+ #76 > > [ 175.830312] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), > > BIOS Bochs 01/01/2011 > > [ 175.831885] RIP: 0010:__list_add_valid+0x35/0x70 > > [ 175.832784] Code: 18 48 8b 32 48 39 f0 75 39 48 39 c7 74 1e 48 39 > > fa 74 19 b8 01 00 00 00 c3 48 89 c1 48 c7 c7 80 b5 0f a9 31 c0 e8 8f > > aa c8 ff <0f> 0b 48 89 c1 48 89 fe 31 c0 48 c7 c7 30 b6 0f a9 e8 79 aa > > c8 ff > > [ 175.836379] RSP: 0018:ffffa717c40839b0 EFLAGS: 00010046 > > [ 175.837394] RAX: 0000000000000075 RBX: ffff947c7ffd61e0 RCX: 0000000000000000 > > [ 175.838779] RDX: 0000000000000000 RSI: ffff947c5f957188 RDI: ffff947c5f957188 > > [ 175.840162] RBP: ffff947c7ffd61d0 R08: 000000000000026f R09: 0000000000000005 > > [ 175.841539] R10: 0000000000000000 R11: ffffa717c4083730 R12: ffffc7a29f260008 > > [ 175.842932] R13: ffff947c7ffd5d00 R14: ffffc7a29f4c0008 R15: ffffc7a29f260000 > > [ 175.844319] FS: 0000000000000000(0000) GS:ffff947c5f940000(0000) > > knlGS:0000000000000000 > > [ 175.845896] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > > [ 175.847009] CR2: 00007fffe3421000 CR3: 000000051220e006 CR4: 0000000000160ee0 > > [ 175.848390] Call Trace: > > [ 175.848896] free_pcppages_bulk+0x4bc/0x6a0 > > [ 175.849723] free_unref_page_list+0x10d/0x190 > > [ 175.850567] release_pages+0x103/0x4a0 > > [ 175.851313] tlb_flush_mmu_free+0x36/0x50 > > [ 175.852105] unmap_page_range+0x963/0xd50 > > [ 175.852897] unmap_vmas+0x62/0xc0 > > [ 175.853549] exit_mmap+0xb5/0x1a0 > > [ 175.854205] mmput+0x5b/0x120 > > [ 175.854794] do_exit+0x273/0xc30 > > [ 175.855426] ? free_unref_page_commit+0x85/0xf0 > > [ 175.856312] do_group_exit+0x39/0xa0 > > [ 175.857018] get_signal+0x172/0x7c0 > > [ 175.857703] do_signal+0x36/0x620 > > [ 175.858355] ? percpu_counter_add_batch+0x4b/0x60 > > [ 175.859280] ? __do_munmap+0x288/0x390 > > [ 175.860020] exit_to_usermode_loop+0x4c/0xa8 > > [ 175.860859] do_syscall_64+0x152/0x170 > > [ 175.861595] entry_SYSCALL_64_after_hwframe+0x44/0xa9 > > [ 175.862586] RIP: 0033:0x7ffff76a8ec7 > > [ 175.863292] Code: Bad RIP value. 
> > [ 175.863928] RSP: 002b:00007ffff4422eb8 EFLAGS: 00000212 ORIG_RAX: > > 000000000000000b > > [ 175.865396] RAX: 0000000000000000 RBX: 00007ffff7ff7280 RCX: 00007ffff76a8ec7 > > [ 175.866799] RDX: 00007fffe3422000 RSI: 0000000008000000 RDI: 00007fffdb422000 > > [ 175.868194] RBP: 0000000000001000 R08: ffffffffffffffff R09: 0000000000000000 > > [ 175.869582] R10: 0000000000000022 R11: 0000000000000212 R12: 00007ffff4422fc0 > > [ 175.870984] R13: 0000000000000001 R14: 00007fffffffc1b0 R15: 00007ffff44239c0 > > [ 175.872350] Modules linked in: ip6t_rpfilter ip6t_REJECT > > nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat > > ebtable_broute bridge stp llc ip6table_nat ip6table_mangle > > ip6table_raw ip6table_security iptable_nat nf_nat nf_conntrack > > nf_defrag_ipv6 nf_defrag_ipv4 iptable_mangle iptable_raw > > iptable_security ebtable_filter ebtables ip6table_filter ip6_tables > > sunrpc sb_edac crct10dif_pclmul crc32_pclmul ghash_clmulni_intel > > kvm_intel kvm ppdev irqbypass parport_pc parport virtio_balloon pcspkr > > i2c_piix4 joydev xfs libcrc32c cirrus drm_kms_helper ttm drm e1000 > > crc32c_intel virtio_blk serio_raw ata_generic floppy pata_acpi > > qemu_fw_cfg > > [ 175.883153] ---[ end trace 5b67f12a67d1f373 ]--- > > > > I should be able to rebuild the kernels/qemu and test this patch set > > over the next day or two. > Thanks. > > > > Thanks. > > > > - Alex > -- > Regards > Nitesh >
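A sketch of the alternative Alexander outlines above: instead of reacting to free_one_page(), periodically walk a zone/order free list from its tail (roughly the least recently freed entries) and only hint on blocks that have stayed free for a while. The two extern helpers are assumptions -- where the timestamp and the "already hinted" state would live is precisely the open question:

#include <linux/jiffies.h>
#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Assumed helpers, not existing kernel API: where the "last freed"
 * timestamp lives (an unused word in struct page for free pages) and how
 * a page is marked as already hinted are exactly the open points above.
 */
extern unsigned long page_free_timestamp(struct page *page);
extern bool page_already_hinted(struct page *page);

#define HINT_MIN_AGE (5 * HZ)   /* arbitrary "cold enough" threshold */

/*
 * Sketch: walk one zone/order free list from the tail and collect up to
 * 'max' pages that have been sitting free for a while.  Would have to run
 * with zone->lock held by the caller.
 */
static unsigned int collect_cold_pages(struct zone *zone, unsigned int order,
                                       int migratetype,
                                       struct page **out, unsigned int max)
{
    struct list_head *list = &zone->free_area[order].free_list[migratetype];
    struct page *page;
    unsigned int n = 0;

    list_for_each_entry_reverse(page, list, lru) {
        if (page_already_hinted(page))
            continue;
        /* stop once we reach pages that were freed too recently */
        if (!time_after(jiffies, page_free_timestamp(page) + HINT_MIN_AGE))
            break;
        out[n++] = page;
        if (n == max)
            break;
    }
    return n;
}

Stopping at the first "too young" entry keeps the walk short when the workload is actively churning memory.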
On Wed, Mar 06, 2019 at 10:00:05AM -0800, Alexander Duyck wrote: > I have been thinking about this. Instead of stealing the page couldn't > you simply flag it that there is a hint in progress and simply wait in > arch_alloc_page until the hint has been processed? The problem is in > stealing pages you are going to introduce false OOM issues when the > memory isn't available because it is being hinted on. Can we not give them back in an OOM notifier?
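A minimal sketch of what that could look like, modeled on the OOM notifier virtio-balloon used to register; hinting_return_isolated_pages() is a hypothetical helper that would put all currently isolated blocks back into the buddy and report how many base pages that released:

#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/oom.h>

/* Hypothetical: give back everything currently isolated for hinting and
 * return the number of pages released. */
extern unsigned long hinting_return_isolated_pages(void);

static int hinting_oom_notify(struct notifier_block *nb,
                              unsigned long dummy, void *freed)
{
    /* Tell the OOM path how many pages were just handed back. */
    *(unsigned long *)freed += hinting_return_isolated_pages();
    return NOTIFY_OK;
}

static struct notifier_block hinting_oom_nb = {
    .notifier_call = hinting_oom_notify,
};

static int __init hinting_oom_init(void)
{
    return register_oom_notifier(&hinting_oom_nb);
}

Whether an OOM notifier is the right hook, rather than having the allocator itself wait for outstanding hints, is exactly what the rest of the thread goes on to debate.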
On Thu, Mar 07, 2019 at 10:45:58AM -0800, Alexander Duyck wrote: > To that end what I think w may want to do is instead just walk the LRU > list for a given zone/order in reverse order so that we can try to > identify the pages that are most likely to be cold and unused and > those are the first ones we want to be hinting on rather than the ones > that were just freed. If we can look at doing something like adding a > jiffies value to the page indicating when it was last freed we could > even have a good point for determining when we should stop processing > pages in a given zone/order list. > > In reality the approach wouldn't be too different from what you are > doing now, the only real difference would be that we would just want > to walk the LRU list for the given zone/order rather then pulling > hints on what to free from the calls to free_one_page. In addition we > would need to add a couple bits to indicate if the page has been > hinted on, is in the middle of getting hinted on, and something such > as the jiffies value I mentioned which we could use to determine how > old the page is. Do we really need bits in the page? Would it be bad to just have a separate hint list? If you run out of free memory you can check the hint list, if you find stuff there you can spin or kick the hypervisor to hurry up. Core mm/ changes, so nothing's easy, I know.
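On the "kick the hypervisor to hurry up" side, the series already reports over virtio, so the kick itself is cheap. A schematic of queuing one isolated range and notifying the device -- the request layout and the dedicated virtqueue are assumptions, not the ABI the patches define:

#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <linux/types.h>
#include <linux/virtio.h>

/* Illustrative request layout; the real ABI is whatever the series defines. */
struct hint_req {
    u64 gpa;      /* guest-physical address of the free range */
    u32 len;      /* length in bytes */
};

/*
 * Queue one hinted range on an (assumed) dedicated virtqueue and notify
 * the host.  req must stay allocated until the host completes the buffer;
 * only then are the corresponding pages returned to the buddy.
 */
static int report_hint(struct virtqueue *vq, struct hint_req *req)
{
    struct scatterlist sg;
    int err;

    sg_init_one(&sg, req, sizeof(*req));
    err = virtqueue_add_outbuf(vq, &sg, 1, req, GFP_KERNEL);
    if (!err)
        virtqueue_kick(vq);   /* the "hurry up" part */
    return err;
}

A guest that finds itself short on memory could kick again and wait for the completion rather than spin blindly, which is roughly what the suggestion above amounts to.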
On 07.03.19 19:53, Michael S. Tsirkin wrote: > On Thu, Mar 07, 2019 at 10:45:58AM -0800, Alexander Duyck wrote: >> To that end what I think w may want to do is instead just walk the LRU >> list for a given zone/order in reverse order so that we can try to >> identify the pages that are most likely to be cold and unused and >> those are the first ones we want to be hinting on rather than the ones >> that were just freed. If we can look at doing something like adding a >> jiffies value to the page indicating when it was last freed we could >> even have a good point for determining when we should stop processing >> pages in a given zone/order list. >> >> In reality the approach wouldn't be too different from what you are >> doing now, the only real difference would be that we would just want >> to walk the LRU list for the given zone/order rather then pulling >> hints on what to free from the calls to free_one_page. In addition we >> would need to add a couple bits to indicate if the page has been >> hinted on, is in the middle of getting hinted on, and something such >> as the jiffies value I mentioned which we could use to determine how >> old the page is. > > Do we really need bits in the page? > Would it be bad to just have a separate hint list? > > If you run out of free memory you can check the hint > list, if you find stuff there you can spin > or kick the hypervisor to hurry up. > > Core mm/ changes, so nothing's easy, I know. We evaluated the idea of busy spinning on some bit/list entry a while ago. While it sounds interesting, it is usually not what we want and has other negative performance impacts. Talking about "marking" pages, what we actually would want is to rework the buddy to skip over these "marked" pages and only really spin in case there are no other pages left. Allocation paths should only ever be blocked if OOM, not if just some hinting activity is going on on another VCPU. However, as you correctly say: "core mm changes". New page flag? Basically impossible. Reuse another one? Can easily get horribly confusing and can easily get rejected upstream. What about the buddy wanting to merge pages that are marked (assuming we also want something < MAX_ORDER - 1)? This smells like possibly heavy core mm changes. Lesson learned: Avoid such heavy changes. Especially in the first shot. The interesting thing about Nitesh's approach right now is that we can easily rework these details later on. The host->guest interface will stay the same. Instead of temporarily taking pages out of the buddy, we could e.g. mark them and make the buddy or other users skip over them.
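To make the "skip, don't spin" idea concrete: the allocation path would prefer free-list entries that are not currently being hinted on and only wait if nothing else is left. There is no such hook in the buddy today; page_hint_in_progress() and the helper below are illustrative only, and wiring something like this in is exactly the core-mm surgery being warned about:

#include <linux/mm.h>
#include <linux/mmzone.h>

/* Assumed predicate: true while a page is handed to the host for hinting. */
extern bool page_hint_in_progress(struct page *page);

/*
 * Sketch of the "skip, don't spin" idea: when taking a page from a free
 * list, prefer any entry that is not currently being hinted on and only
 * fall back to waiting on the marked ones if nothing else is left.
 */
static struct page *pick_free_page_skip_hinted(struct free_area *area,
                                               int migratetype)
{
    struct page *page;

    list_for_each_entry(page, &area->free_list[migratetype], lru) {
        if (!page_hint_in_progress(page))
            return page;
    }
    /* everything here is being hinted on; caller decides to wait or fail */
    return NULL;
}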
On 3/7/19 1:45 PM, Alexander Duyck wrote: > On Thu, Mar 7, 2019 at 5:09 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> >> On 3/6/19 5:05 PM, Alexander Duyck wrote: >>> On Wed, Mar 6, 2019 at 11:07 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>> On 3/6/19 1:00 PM, Alexander Duyck wrote: >>>>> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>>>> >>>>>> Benefit: >>>>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>>>> >>>>>> Changelog in v9: >>>>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>>>> Without a kthread this has the potential to get really ugly really >>>>> fast. If we are going to run asynchronously we should probably be >>>>> truly asynchonous and just place a few pieces of data in the page that >>>>> a worker thread can use to identify which pages have been hinted and >>>>> which pages have not. >>>> Can you please explain what do you mean by truly asynchronous? >>>> >>>> With this implementation also I am not reporting the pages synchronously. >>> The problem is you are making it pseudo synchronous by having to push >>> pages off to a side buffer aren't you? In my mind we should be able to >>> have the page hinting go on with little to no interference with >>> existing page allocation and freeing. >> We have to opt one of the two options: >> 1. Block allocation by using a flag or acquire a lock to prevent the >> usage of pages we are hinting. >> 2. Remove the page set entirely from the buddy. (This is what I am doing >> right now) >> >> The reason I would prefer the second approach is that we are not >> blocking the allocation in any way and as we are only working with a >> smaller set of pages we should be fine. >> However, with the current approach as we are reporting asynchronously >> there is a chance that we end up hinting more than 2-3 times for a >> single workload run. In situation where this could lead to low memory >> condition in the guest, the hinting will anyways fail as the guest will >> not allow page isolation. >> I can possibly try and test the same to ensure that we don't get OOM due >> to hinting when the guest is under memory pressure. > So in either case you are essentially blocking allocation since the > memory cannot be used. My concern is more with guaranteeing forward > progress for as many CPUs as possible. > > With your current design you have one minor issue in that you aren't > taking the lock to re-insert the pages back into the buddy allocator. > When you add that step in it means you are going to be blocking > allocation on that zone while you are reinserting the pages. > > Also right now you are using the calls to free_one_page to generate a > list of hints where to search. 
I'm thinking that may not be the best > approach since what we want to do is provide hints on idle free pages, > not just pages that will be free for a short period of time. > > To that end what I think w may want to do is instead just walk the LRU > list for a given zone/order in reverse order so that we can try to > identify the pages that are most likely to be cold and unused and > those are the first ones we want to be hinting on rather than the ones > that were just freed. If we can look at doing something like adding a > jiffies value to the page indicating when it was last freed we could > even have a good point for determining when we should stop processing > pages in a given zone/order list. > > In reality the approach wouldn't be too different from what you are > doing now, the only real difference would be that we would just want > to walk the LRU list for the given zone/order rather then pulling > hints on what to free from the calls to free_one_page. In addition we > would need to add a couple bits to indicate if the page has been > hinted on, is in the middle of getting hinted on, and something such > as the jiffies value I mentioned which we could use to determine how > old the page is. > >>>>> Then we can have that one thread just walking >>>>> through the zone memory pulling out fixed size pieces at a time and >>>>> providing hints on that. By doing that we avoid the potential of >>>>> creating a batch of pages that eat up most of the system memory. >>>>> >>>>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>>>> I have concerns that doing this per CPU and allocating memory >>>>> dynamically can result in you losing a significant amount of memory as >>>>> it sits waiting to be hinted. >>>> It should not as the buddy will keep merging the pages and we are only >>>> capturing MAX_ORDER - 1. >>>> This was the issue with the last patch-series when I was capturing all >>>> order pages resulting in the per-cpu array to be filled with lower order >>>> pages. >>>>>> * All the pages are reported asynchronously to the host via virtio driver. >>>>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>>>> I have been thinking about this. Instead of stealing the page couldn't >>>>> you simply flag it that there is a hint in progress and simply wait in >>>>> arch_alloc_page until the hint has been processed? >>>> With the flag, I am assuming you mean to block the allocation until >>>> hinting is going on, which is an issue. That was one of the issues >>>> discussed earlier which I wanted to solve with this implementation. >>> With the flag we would allow the allocation, but would have to >>> synchronize with the hinting at that point. I got the idea from the >>> way the s390 code works. They have both an arch_free_page and an >>> arch_alloc_page. If I understand correctly the arch_alloc_page is what >>> is meant to handle the case of a page that has been marked for >>> hinting, but may not have been hinted on yet. My thought for now is to >>> keep it simple and use a page flag to indicate that a page is >>> currently pending a hint. >> I am assuming this page flag will be located in the page structure. 
>>> We should be able to spin in such a case and >>> it would probably still perform better than a solution where we would >>> not have the memory available and possibly be under memory pressure. >> I had this same idea earlier. However, the thing about which I was not >> sure is if adding a flag in the page structure will be acceptable upstream. >>>>> The problem is in >>>>> stealing pages you are going to introduce false OOM issues when the >>>>> memory isn't available because it is being hinted on. >>>> I think this situation will arise when the guest is under memory >>>> pressure. In such situations any attempt to perform isolation will >>>> anyways fail and we may not be reporting anything at that time. >>> What I want to avoid is the scenario where an application grabs a >>> large amount of memory, then frees said memory, and we are sitting on >>> it for some time because we decide to try and hint on the large chunk. >> I agree. >>> By processing this sometime after the pages are sent to the buddy >>> allocator in a separate thread, and by processing a small fixed window >>> of memory at a time we can avoid making freeing memory expensive, and >>> still provide the hints in a reasonable time frame. >> My impression is that the current window on which I am working may give >> issues for smaller size guests. But otherwise, we are already working >> with a smaller fixed window of memory. >> >> I can further restrict this to just 128 entries and test which would >> bring down the window of memory. Let me know what you think. > The problem is 128 entries is still pretty big when you consider you > are working with 4M pages. If I am not mistaken that is a half > gigabyte of memory. For lower order pages 128 would probably be fine, > but with the higher order pages we may want to contain things to > something smaller like 16MB to 64MB worth of memory. This is something with which we can certainly play around or may even make configurable. For now, I think I will continue testing with 128. > >>>>>> Pending items: >>>>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >>>>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >>>>>> * Compare reporting free pages via vring with vhost. >>>>>> * Decide between MADV_DONTNEED and MADV_FREE. >>>>>> * Analyze overall performance impact due to guest free page hinting. >>>>>> * Come up with proper/traceable error-message/logs. >>>>> I'll try applying these patches and see if I can reproduce the results >>>>> you reported. >>>> Thanks. Let me know if you run into any issues. >>>>> With the last patch set I couldn't reproduce the results >>>>> as you reported them. >>>> If I remember correctly then the last time you only tried with multiple >>>> vcpus and not with 1 vcpu. >>> I had tried 1 vcpu, however I ended up running into some other issues >>> that made it difficult to even boot the system last week. >>> >>>>> It has me wondering if you were somehow seeing >>>>> the effects of a balloon instead of the actual memory hints as I >>>>> couldn't find any evidence of the memory ever actually being freed >>>>> back by the hints functionality. >>>> Can you please elaborate what kind of evidence you are looking for? >>>> >>>> I did trace the hints on the QEMU/host side. >>> It looks like the new patches are working as I am seeing the memory >>> freeing occurring this time around. 
Although it looks like this is >>> still generating traces from free_pcpages_bulk if I enable multiple >>> VCPUs: >> I am assuming with the changes you suggested you were able to run this >> patch-series. Is that correct? > Yes, I got it working by disabling SMP. I think I found and pointed > out the issue in your other patch where you were using __free_one_page > without holding the zone lock. Yeah. Thanks. > >>> [ 175.823539] list_add corruption. next->prev should be prev >>> (ffff947c7ffd61e0), but was ffffc7a29f9e0008. (next=ffffc7a29f4c0008). >>> [ 175.825978] ------------[ cut here ]------------ >>> [ 175.826889] kernel BUG at lib/list_debug.c:25! >>> [ 175.827766] invalid opcode: 0000 [#1] SMP PTI >>> [ 175.828621] CPU: 5 PID: 1344 Comm: page_fault1_thr Not tainted >>> 5.0.0-next-20190306-baseline+ #76 >>> [ 175.830312] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), >>> BIOS Bochs 01/01/2011 >>> [ 175.831885] RIP: 0010:__list_add_valid+0x35/0x70 >>> [ 175.832784] Code: 18 48 8b 32 48 39 f0 75 39 48 39 c7 74 1e 48 39 >>> fa 74 19 b8 01 00 00 00 c3 48 89 c1 48 c7 c7 80 b5 0f a9 31 c0 e8 8f >>> aa c8 ff <0f> 0b 48 89 c1 48 89 fe 31 c0 48 c7 c7 30 b6 0f a9 e8 79 aa >>> c8 ff >>> [ 175.836379] RSP: 0018:ffffa717c40839b0 EFLAGS: 00010046 >>> [ 175.837394] RAX: 0000000000000075 RBX: ffff947c7ffd61e0 RCX: 0000000000000000 >>> [ 175.838779] RDX: 0000000000000000 RSI: ffff947c5f957188 RDI: ffff947c5f957188 >>> [ 175.840162] RBP: ffff947c7ffd61d0 R08: 000000000000026f R09: 0000000000000005 >>> [ 175.841539] R10: 0000000000000000 R11: ffffa717c4083730 R12: ffffc7a29f260008 >>> [ 175.842932] R13: ffff947c7ffd5d00 R14: ffffc7a29f4c0008 R15: ffffc7a29f260000 >>> [ 175.844319] FS: 0000000000000000(0000) GS:ffff947c5f940000(0000) >>> knlGS:0000000000000000 >>> [ 175.845896] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 >>> [ 175.847009] CR2: 00007fffe3421000 CR3: 000000051220e006 CR4: 0000000000160ee0 >>> [ 175.848390] Call Trace: >>> [ 175.848896] free_pcppages_bulk+0x4bc/0x6a0 >>> [ 175.849723] free_unref_page_list+0x10d/0x190 >>> [ 175.850567] release_pages+0x103/0x4a0 >>> [ 175.851313] tlb_flush_mmu_free+0x36/0x50 >>> [ 175.852105] unmap_page_range+0x963/0xd50 >>> [ 175.852897] unmap_vmas+0x62/0xc0 >>> [ 175.853549] exit_mmap+0xb5/0x1a0 >>> [ 175.854205] mmput+0x5b/0x120 >>> [ 175.854794] do_exit+0x273/0xc30 >>> [ 175.855426] ? free_unref_page_commit+0x85/0xf0 >>> [ 175.856312] do_group_exit+0x39/0xa0 >>> [ 175.857018] get_signal+0x172/0x7c0 >>> [ 175.857703] do_signal+0x36/0x620 >>> [ 175.858355] ? percpu_counter_add_batch+0x4b/0x60 >>> [ 175.859280] ? __do_munmap+0x288/0x390 >>> [ 175.860020] exit_to_usermode_loop+0x4c/0xa8 >>> [ 175.860859] do_syscall_64+0x152/0x170 >>> [ 175.861595] entry_SYSCALL_64_after_hwframe+0x44/0xa9 >>> [ 175.862586] RIP: 0033:0x7ffff76a8ec7 >>> [ 175.863292] Code: Bad RIP value. 
>>> [ 175.863928] RSP: 002b:00007ffff4422eb8 EFLAGS: 00000212 ORIG_RAX: >>> 000000000000000b >>> [ 175.865396] RAX: 0000000000000000 RBX: 00007ffff7ff7280 RCX: 00007ffff76a8ec7 >>> [ 175.866799] RDX: 00007fffe3422000 RSI: 0000000008000000 RDI: 00007fffdb422000 >>> [ 175.868194] RBP: 0000000000001000 R08: ffffffffffffffff R09: 0000000000000000 >>> [ 175.869582] R10: 0000000000000022 R11: 0000000000000212 R12: 00007ffff4422fc0 >>> [ 175.870984] R13: 0000000000000001 R14: 00007fffffffc1b0 R15: 00007ffff44239c0 >>> [ 175.872350] Modules linked in: ip6t_rpfilter ip6t_REJECT >>> nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat >>> ebtable_broute bridge stp llc ip6table_nat ip6table_mangle >>> ip6table_raw ip6table_security iptable_nat nf_nat nf_conntrack >>> nf_defrag_ipv6 nf_defrag_ipv4 iptable_mangle iptable_raw >>> iptable_security ebtable_filter ebtables ip6table_filter ip6_tables >>> sunrpc sb_edac crct10dif_pclmul crc32_pclmul ghash_clmulni_intel >>> kvm_intel kvm ppdev irqbypass parport_pc parport virtio_balloon pcspkr >>> i2c_piix4 joydev xfs libcrc32c cirrus drm_kms_helper ttm drm e1000 >>> crc32c_intel virtio_blk serio_raw ata_generic floppy pata_acpi >>> qemu_fw_cfg >>> [ 175.883153] ---[ end trace 5b67f12a67d1f373 ]--- >>> >>> I should be able to rebuild the kernels/qemu and test this patch set >>> over the next day or two. >> Thanks. >>> Thanks. >>> >>> - Alex >> -- >> Regards >> Nitesh >>
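For reference, the list corruption quoted above was tracked down to putting hinted pages back without the zone lock; a minimal sketch of the locking the return path needs is below. __free_one_page() is static to mm/page_alloc.c, so return_page_to_buddy() stands in for whatever wrapper the series ends up using:

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>

/*
 * Assumed wrapper around the buddy insertion (__free_one_page() itself is
 * static to mm/page_alloc.c); shown only to make the locking explicit.
 */
extern void return_page_to_buddy(struct page *page, struct zone *zone,
                                 unsigned int order);

/* Return a batch of previously isolated pages, all from the same zone. */
static void release_hinted_pages(struct zone *zone, struct page **pages,
                                 unsigned int count, unsigned int order)
{
    unsigned long flags;
    unsigned int i;

    /* the free lists are also touched with IRQs off, so be conservative */
    spin_lock_irqsave(&zone->lock, flags);
    for (i = 0; i < count; i++)
        return_page_to_buddy(pages[i], zone, order);
    spin_unlock_irqrestore(&zone->lock, flags);
}

This is also the spot Alexander points to when he notes that re-inserting pages will briefly block allocations on that zone: the lock hold time scales with the batch size.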
On 07.03.19 19:45, Alexander Duyck wrote: > On Thu, Mar 7, 2019 at 5:09 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> >> >> On 3/6/19 5:05 PM, Alexander Duyck wrote: >>> On Wed, Mar 6, 2019 at 11:07 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>> >>>> On 3/6/19 1:00 PM, Alexander Duyck wrote: >>>>> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>>>> >>>>>> Benefit: >>>>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>>>> >>>>>> Changelog in v9: >>>>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>>>> Without a kthread this has the potential to get really ugly really >>>>> fast. If we are going to run asynchronously we should probably be >>>>> truly asynchonous and just place a few pieces of data in the page that >>>>> a worker thread can use to identify which pages have been hinted and >>>>> which pages have not. >>>> Can you please explain what do you mean by truly asynchronous? >>>> >>>> With this implementation also I am not reporting the pages synchronously. >>> The problem is you are making it pseudo synchronous by having to push >>> pages off to a side buffer aren't you? In my mind we should be able to >>> have the page hinting go on with little to no interference with >>> existing page allocation and freeing. >> We have to opt one of the two options: >> 1. Block allocation by using a flag or acquire a lock to prevent the >> usage of pages we are hinting. >> 2. Remove the page set entirely from the buddy. (This is what I am doing >> right now) >> >> The reason I would prefer the second approach is that we are not >> blocking the allocation in any way and as we are only working with a >> smaller set of pages we should be fine. >> However, with the current approach as we are reporting asynchronously >> there is a chance that we end up hinting more than 2-3 times for a >> single workload run. In situation where this could lead to low memory >> condition in the guest, the hinting will anyways fail as the guest will >> not allow page isolation. >> I can possibly try and test the same to ensure that we don't get OOM due >> to hinting when the guest is under memory pressure. > > So in either case you are essentially blocking allocation since the > memory cannot be used. My concern is more with guaranteeing forward > progress for as many CPUs as possible. > > With your current design you have one minor issue in that you aren't > taking the lock to re-insert the pages back into the buddy allocator. > When you add that step in it means you are going to be blocking > allocation on that zone while you are reinserting the pages. > > Also right now you are using the calls to free_one_page to generate a > list of hints where to search. 
I'm thinking that may not be the best > approach since what we want to do is provide hints on idle free pages, > not just pages that will be free for a short period of time. > > To that end what I think w may want to do is instead just walk the LRU > list for a given zone/order in reverse order so that we can try to > identify the pages that are most likely to be cold and unused and > those are the first ones we want to be hinting on rather than the ones > that were just freed. If we can look at doing something like adding a > jiffies value to the page indicating when it was last freed we could > even have a good point for determining when we should stop processing > pages in a given zone/order list. > > In reality the approach wouldn't be too different from what you are > doing now, the only real difference would be that we would just want > to walk the LRU list for the given zone/order rather then pulling > hints on what to free from the calls to free_one_page. In addition we > would need to add a couple bits to indicate if the page has been > hinted on, is in the middle of getting hinted on, and something such > as the jiffies value I mentioned which we could use to determine how > old the page is. > >>> >>>>> Then we can have that one thread just walking >>>>> through the zone memory pulling out fixed size pieces at a time and >>>>> providing hints on that. By doing that we avoid the potential of >>>>> creating a batch of pages that eat up most of the system memory. >>>>> >>>>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>>>> I have concerns that doing this per CPU and allocating memory >>>>> dynamically can result in you losing a significant amount of memory as >>>>> it sits waiting to be hinted. >>>> It should not as the buddy will keep merging the pages and we are only >>>> capturing MAX_ORDER - 1. >>>> This was the issue with the last patch-series when I was capturing all >>>> order pages resulting in the per-cpu array to be filled with lower order >>>> pages. >>>>>> * All the pages are reported asynchronously to the host via virtio driver. >>>>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>>>> I have been thinking about this. Instead of stealing the page couldn't >>>>> you simply flag it that there is a hint in progress and simply wait in >>>>> arch_alloc_page until the hint has been processed? >>>> With the flag, I am assuming you mean to block the allocation until >>>> hinting is going on, which is an issue. That was one of the issues >>>> discussed earlier which I wanted to solve with this implementation. >>> With the flag we would allow the allocation, but would have to >>> synchronize with the hinting at that point. I got the idea from the >>> way the s390 code works. They have both an arch_free_page and an >>> arch_alloc_page. If I understand correctly the arch_alloc_page is what >>> is meant to handle the case of a page that has been marked for >>> hinting, but may not have been hinted on yet. My thought for now is to >>> keep it simple and use a page flag to indicate that a page is >>> currently pending a hint. >> I am assuming this page flag will be located in the page structure. 
>>> We should be able to spin in such a case and >>> it would probably still perform better than a solution where we would >>> not have the memory available and possibly be under memory pressure. >> I had this same idea earlier. However, the thing about which I was not >> sure is if adding a flag in the page structure will be acceptable upstream. >>> >>>>> The problem is in >>>>> stealing pages you are going to introduce false OOM issues when the >>>>> memory isn't available because it is being hinted on. >>>> I think this situation will arise when the guest is under memory >>>> pressure. In such situations any attempt to perform isolation will >>>> anyways fail and we may not be reporting anything at that time. >>> What I want to avoid is the scenario where an application grabs a >>> large amount of memory, then frees said memory, and we are sitting on >>> it for some time because we decide to try and hint on the large chunk. >> I agree. >>> By processing this sometime after the pages are sent to the buddy >>> allocator in a separate thread, and by processing a small fixed window >>> of memory at a time we can avoid making freeing memory expensive, and >>> still provide the hints in a reasonable time frame. >> >> My impression is that the current window on which I am working may give >> issues for smaller size guests. But otherwise, we are already working >> with a smaller fixed window of memory. >> >> I can further restrict this to just 128 entries and test which would >> bring down the window of memory. Let me know what you think. > > The problem is 128 entries is still pretty big when you consider you > are working with 4M pages. If I am not mistaken that is a half > gigabyte of memory. For lower order pages 128 would probably be fine, > but with the higher order pages we may want to contain things to > something smaller like 16MB to 64MB worth of memory. > I agree, I also still consider it too big for 4MB pages. It would be different e.g. for 128KB pages.
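The arithmetic behind that concern, for reference (x86-64 with 4 KB base pages, so one MAX_ORDER - 1 block is 4 MB); the 16 MB and 64 MB budgets are the example targets mentioned above, not agreed numbers:

#include <stdio.h>

int main(void)
{
    const unsigned long block_bytes = 4UL << 20;   /* one MAX_ORDER - 1 block */
    const unsigned long budgets_mb[] = { 16, 64, 512 };
    unsigned int i;

    for (i = 0; i < 3; i++) {
        unsigned long budget = budgets_mb[i] << 20;
        printf("budget %3lu MB -> %3lu capture entries\n",
               budgets_mb[i], budget / block_bytes);
    }
    return 0;   /* 16 MB -> 4, 64 MB -> 16, 512 MB -> 128 (the current size) */
}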
On Thu, Mar 7, 2019 at 10:53 AM Michael S. Tsirkin <mst@redhat.com> wrote: > > On Thu, Mar 07, 2019 at 10:45:58AM -0800, Alexander Duyck wrote: > > To that end what I think w may want to do is instead just walk the LRU > > list for a given zone/order in reverse order so that we can try to > > identify the pages that are most likely to be cold and unused and > > those are the first ones we want to be hinting on rather than the ones > > that were just freed. If we can look at doing something like adding a > > jiffies value to the page indicating when it was last freed we could > > even have a good point for determining when we should stop processing > > pages in a given zone/order list. > > > > In reality the approach wouldn't be too different from what you are > > doing now, the only real difference would be that we would just want > > to walk the LRU list for the given zone/order rather then pulling > > hints on what to free from the calls to free_one_page. In addition we > > would need to add a couple bits to indicate if the page has been > > hinted on, is in the middle of getting hinted on, and something such > > as the jiffies value I mentioned which we could use to determine how > > old the page is. > > Do we really need bits in the page? > Would it be bad to just have a separate hint list? The issue is lists are expensive to search. If we have a single bit in the page we can check it as soon as we have the page. > If you run out of free memory you can check the hint > list, if you find stuff there you can spin > or kick the hypervisor to hurry up. This implies you are keeping a separate list of pages for what has been hinted on. If we are pulling pages out of the LRU list for that it will require the zone lock to move the pages back and forth and for higher core counts that isn't going to scale very well, and if you are trying to pull out a page that is currently being hinted on you will run into the same issue of having to wait for the hint to be completed before proceeding. > Core mm/ changes, so nothing's easy, I know. We might be able to reuse some existing page flags. For example, there is the PG_young and PG_idle flags that would actually be a pretty good fit in terms of what we are looking for in behavior. We could set PG_young when the page is initially freed, then clear it when we start to perform the hint, and set PG_idle once the hint has been completed. The check for if we could use a page would be pretty fast as a result as well since if PG_young or PG_idle are set it means the page is free to use so the check in arch_alloc_page would be pretty cheap since we could probably test for both bits in one read.
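A sketch of the lifecycle Alexander describes, using the helpers from include/linux/page_idle.h (real functions when CONFIG_IDLE_PAGE_TRACKING is enabled, no-op stubs otherwise). Treating "young or idle set" as "safe to allocate" is his proposal here, not an existing kernel convention, and the hook names are made up:

#include <linux/mm.h>
#include <linux/page_idle.h>

/* Page enters the buddy: young = "free, hint not started yet". */
static void hint_mark_freed(struct page *page)
{
    set_page_young(page);
    clear_page_idle(page);
}

/* Hint starts: neither bit set means "in flight, do not use yet". */
static void hint_mark_in_progress(struct page *page)
{
    /* page_idle.h only offers a test-and-clear variant for "young" */
    test_and_clear_page_young(page);
}

/* Hint completed: idle = "hinted and free to use again". */
static void hint_mark_done(struct page *page)
{
    set_page_idle(page);
}

/* The arch_alloc_page()-style check: either bit set means usable now. */
static bool hint_page_usable(struct page *page)
{
    return page_is_young(page) || page_is_idle(page);
}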
On 07.03.19 22:14, Alexander Duyck wrote: > On Thu, Mar 7, 2019 at 10:53 AM Michael S. Tsirkin <mst@redhat.com> wrote: >> >> On Thu, Mar 07, 2019 at 10:45:58AM -0800, Alexander Duyck wrote: >>> To that end what I think w may want to do is instead just walk the LRU >>> list for a given zone/order in reverse order so that we can try to >>> identify the pages that are most likely to be cold and unused and >>> those are the first ones we want to be hinting on rather than the ones >>> that were just freed. If we can look at doing something like adding a >>> jiffies value to the page indicating when it was last freed we could >>> even have a good point for determining when we should stop processing >>> pages in a given zone/order list. >>> >>> In reality the approach wouldn't be too different from what you are >>> doing now, the only real difference would be that we would just want >>> to walk the LRU list for the given zone/order rather then pulling >>> hints on what to free from the calls to free_one_page. In addition we >>> would need to add a couple bits to indicate if the page has been >>> hinted on, is in the middle of getting hinted on, and something such >>> as the jiffies value I mentioned which we could use to determine how >>> old the page is. >> >> Do we really need bits in the page? >> Would it be bad to just have a separate hint list? > > The issue is lists are expensive to search. If we have a single bit in > the page we can check it as soon as we have the page. > >> If you run out of free memory you can check the hint >> list, if you find stuff there you can spin >> or kick the hypervisor to hurry up. > > This implies you are keeping a separate list of pages for what has > been hinted on. If we are pulling pages out of the LRU list for that > it will require the zone lock to move the pages back and forth and for > higher core counts that isn't going to scale very well, and if you are > trying to pull out a page that is currently being hinted on you will > run into the same issue of having to wait for the hint to be completed > before proceeding. > >> Core mm/ changes, so nothing's easy, I know. > > We might be able to reuse some existing page flags. For example, there > is the PG_young and PG_idle flags that would actually be a pretty good > fit in terms of what we are looking for in behavior. We could set > PG_young when the page is initially freed, then clear it when we start > to perform the hint, and set PG_idle once the hint has been completed. Just noting that when hinting, we have to set all affected sub-page bits as far as I see. > > The check for if we could use a page would be pretty fast as a result > as well since if PG_young or PG_idle are set it means the page is free > to use so the check in arch_alloc_page would be pretty cheap since we > could probably test for both bits in one read. > I still dislike spinning on ordinary allocation paths. If we want to go that way, core mm has to consider these bits and try other pages first.
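On the sub-page point: if a MAX_ORDER - 1 block is hinted as one unit but can later be split by the buddy, every constituent page needs the state, not just the head page. A small sketch, again with the idle-tracking helpers standing in for whatever bit is finally chosen:

#include <linux/mm.h>
#include <linux/page_idle.h>

/*
 * Mark an entire high-order block as "hint completed".  If only the head
 * page carried the bit, a later split of the block would leave the tail
 * pages without any usable state.
 */
static void hint_mark_block_done(struct page *page, unsigned int order)
{
    unsigned long i, nr = 1UL << order;

    for (i = 0; i < nr; i++)
        set_page_idle(page + i);
}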
On Thu, Mar 7, 2019 at 1:28 PM David Hildenbrand <david@redhat.com> wrote: > > On 07.03.19 22:14, Alexander Duyck wrote: > > On Thu, Mar 7, 2019 at 10:53 AM Michael S. Tsirkin <mst@redhat.com> wrote: > >> > >> On Thu, Mar 07, 2019 at 10:45:58AM -0800, Alexander Duyck wrote: > >>> To that end what I think w may want to do is instead just walk the LRU > >>> list for a given zone/order in reverse order so that we can try to > >>> identify the pages that are most likely to be cold and unused and > >>> those are the first ones we want to be hinting on rather than the ones > >>> that were just freed. If we can look at doing something like adding a > >>> jiffies value to the page indicating when it was last freed we could > >>> even have a good point for determining when we should stop processing > >>> pages in a given zone/order list. > >>> > >>> In reality the approach wouldn't be too different from what you are > >>> doing now, the only real difference would be that we would just want > >>> to walk the LRU list for the given zone/order rather then pulling > >>> hints on what to free from the calls to free_one_page. In addition we > >>> would need to add a couple bits to indicate if the page has been > >>> hinted on, is in the middle of getting hinted on, and something such > >>> as the jiffies value I mentioned which we could use to determine how > >>> old the page is. > >> > >> Do we really need bits in the page? > >> Would it be bad to just have a separate hint list? > > > > The issue is lists are expensive to search. If we have a single bit in > > the page we can check it as soon as we have the page. > > > >> If you run out of free memory you can check the hint > >> list, if you find stuff there you can spin > >> or kick the hypervisor to hurry up. > > > > This implies you are keeping a separate list of pages for what has > > been hinted on. If we are pulling pages out of the LRU list for that > > it will require the zone lock to move the pages back and forth and for > > higher core counts that isn't going to scale very well, and if you are > > trying to pull out a page that is currently being hinted on you will > > run into the same issue of having to wait for the hint to be completed > > before proceeding. > > > >> Core mm/ changes, so nothing's easy, I know. > > > > We might be able to reuse some existing page flags. For example, there > > is the PG_young and PG_idle flags that would actually be a pretty good > > fit in terms of what we are looking for in behavior. We could set > > PG_young when the page is initially freed, then clear it when we start > > to perform the hint, and set PG_idle once the hint has been completed. > > Just noting that when hinting, we have to set all affected sub-page bits > as far as I see. You may be correct there. One thing I hadn't thought about is what happens if the page is split or merged up to a higher order. I guess I could be talked into being okay with a side list that we maintain a few pages in that are isolated from the rest. > > > > The check for if we could use a page would be pretty fast as a result > > as well since if PG_young or PG_idle are set it means the page is free > > to use so the check in arch_alloc_page would be pretty cheap since we > > could probably test for both bits in one read. > > > > I still dislike spinning on ordinary allocation paths. If we want to go > that way, core mm has to consider these bits and try other pages first. Agreed. 
I was just thinking that would be follow-on work since in my mind the collision rate for these should be low.
On Thu, Mar 07, 2019 at 08:27:32PM +0100, David Hildenbrand wrote: > On 07.03.19 19:53, Michael S. Tsirkin wrote: > > On Thu, Mar 07, 2019 at 10:45:58AM -0800, Alexander Duyck wrote: > >> To that end what I think w may want to do is instead just walk the LRU > >> list for a given zone/order in reverse order so that we can try to > >> identify the pages that are most likely to be cold and unused and > >> those are the first ones we want to be hinting on rather than the ones > >> that were just freed. If we can look at doing something like adding a > >> jiffies value to the page indicating when it was last freed we could > >> even have a good point for determining when we should stop processing > >> pages in a given zone/order list. > >> > >> In reality the approach wouldn't be too different from what you are > >> doing now, the only real difference would be that we would just want > >> to walk the LRU list for the given zone/order rather then pulling > >> hints on what to free from the calls to free_one_page. In addition we > >> would need to add a couple bits to indicate if the page has been > >> hinted on, is in the middle of getting hinted on, and something such > >> as the jiffies value I mentioned which we could use to determine how > >> old the page is. > > > > Do we really need bits in the page? > > Would it be bad to just have a separate hint list? > > > > If you run out of free memory you can check the hint > > list, if you find stuff there you can spin > > or kick the hypervisor to hurry up. > > > > Core mm/ changes, so nothing's easy, I know. > > We evaluated the idea of busy spinning on some bit/list entry a while > ago. While it sounds interesting, it is usually not what we want and has > other negative performance impacts. > > Talking about "marking" pages, what we actually would want is to rework > the buddy to skip over these "marked" pages and only really spin in case > there are no other pages left. Allocation paths should only ever be > blocked if OOM, not if just some hinting activity is going on on another > VCPU. > > However as you correctly say: "core mm changes". New page flag? > Basically impossible. Well not exactly. page bits are at a premium but only for *allocated* pages. pages in the buddy are free and there are some unused bits for these. > Reuse another one? Can easily get horrbily > confusing and can easily get rejected upstream. What about the buddy > wanting to merge pages that are marked (assuming we also want something > < MAX_ORDER - 1)? This smells like possibly heavy core mm changes. > > Lesson learned: Avoid such heavy changes. Especially in the first shot. > > The interesting thing about Nitesh's aproach right now is that we can > easily rework these details later on. The host->guest interface will > stay the same. Instead of temporarily taking pages out of the buddy, we > could e.g. mark them and make the buddy or other users skip over them. > > -- > > Thanks, > > David / dhildenb
On 08.03.19 03:24, Michael S. Tsirkin wrote: > On Thu, Mar 07, 2019 at 08:27:32PM +0100, David Hildenbrand wrote: >> On 07.03.19 19:53, Michael S. Tsirkin wrote: >>> On Thu, Mar 07, 2019 at 10:45:58AM -0800, Alexander Duyck wrote: >>>> To that end what I think w may want to do is instead just walk the LRU >>>> list for a given zone/order in reverse order so that we can try to >>>> identify the pages that are most likely to be cold and unused and >>>> those are the first ones we want to be hinting on rather than the ones >>>> that were just freed. If we can look at doing something like adding a >>>> jiffies value to the page indicating when it was last freed we could >>>> even have a good point for determining when we should stop processing >>>> pages in a given zone/order list. >>>> >>>> In reality the approach wouldn't be too different from what you are >>>> doing now, the only real difference would be that we would just want >>>> to walk the LRU list for the given zone/order rather then pulling >>>> hints on what to free from the calls to free_one_page. In addition we >>>> would need to add a couple bits to indicate if the page has been >>>> hinted on, is in the middle of getting hinted on, and something such >>>> as the jiffies value I mentioned which we could use to determine how >>>> old the page is. >>> >>> Do we really need bits in the page? >>> Would it be bad to just have a separate hint list? >>> >>> If you run out of free memory you can check the hint >>> list, if you find stuff there you can spin >>> or kick the hypervisor to hurry up. >>> >>> Core mm/ changes, so nothing's easy, I know. >> >> We evaluated the idea of busy spinning on some bit/list entry a while >> ago. While it sounds interesting, it is usually not what we want and has >> other negative performance impacts. >> >> Talking about "marking" pages, what we actually would want is to rework >> the buddy to skip over these "marked" pages and only really spin in case >> there are no other pages left. Allocation paths should only ever be >> blocked if OOM, not if just some hinting activity is going on on another >> VCPU. >> >> However as you correctly say: "core mm changes". New page flag? >> Basically impossible. > > Well not exactly. page bits are at a premium but only for > *allocated* pages. pages in the buddy are free and there are > some unused bits for these. > As I said, we have to be very careful here. Most parts of struct page can me modified by *the owner* of the page. In case the page is online but not allocated, buddy is the owner. Not some kvm/virtio thingy that hooks into some callback. Manipulating random page bits of buddy pages in *some* kernel module I consider problematic and will most probably not be accepted upstream. What could work is, factoring out these parts e.g. into mm/page_hinting.c, then it gets part of the core mm in some way. Which would actually be a nice thing to do either way we go.
On 07.03.19 19:46, Michael S. Tsirkin wrote: > On Wed, Mar 06, 2019 at 10:00:05AM -0800, Alexander Duyck wrote: >> I have been thinking about this. Instead of stealing the page couldn't >> you simply flag it that there is a hint in progress and simply wait in >> arch_alloc_page until the hint has been processed? The problem is in >> stealing pages you are going to introduce false OOM issues when the >> memory isn't available because it is being hinted on. > > Can we not give them back in an OOM notifier? > In the OOM notifier we might simply return "pages made available" as long as some pages are currently being hinted. We can use an atomic_t to track the number of requests that are still being processed by the hypervisor. The larger the page granularity we have, the less likely we are to run into this issue. But yes, it might happen if the stars align.
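As a rough illustration (not code from this series), an OOM notifier along those lines could look like the sketch below; the counter name and the amount reported back are invented, and only register_oom_notifier() and the notifier callback convention are existing kernel interfaces:

#include <linux/oom.h>
#include <linux/notifier.h>
#include <linux/atomic.h>
#include <linux/mmzone.h>

/* Hypothetical counter: incremented when a batch of isolated pages is
 * reported to the host, decremented when the host response arrives and
 * the pages go back to the buddy. */
static atomic_t hints_in_flight = ATOMIC_INIT(0);

static int hinting_oom_notify(struct notifier_block *self,
			      unsigned long dummy, void *parm)
{
	unsigned long *freed = parm;

	/*
	 * While the hypervisor is still processing hint requests, report
	 * "pages made available" so the OOM path retries the allocation
	 * instead of killing a task right away. The amount reported here
	 * is a guess (one MAX_ORDER - 1 block per outstanding request),
	 * not exact accounting.
	 */
	*freed += atomic_read(&hints_in_flight) * (1UL << (MAX_ORDER - 1));

	return NOTIFY_OK;
}

static struct notifier_block hinting_oom_nb = {
	.notifier_call = hinting_oom_notify,
};

/* Somewhere in the hinting init path: register_oom_notifier(&hinting_oom_nb); */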
On 3/6/19 1:12 PM, Michael S. Tsirkin wrote: > On Wed, Mar 06, 2019 at 01:07:50PM -0500, Nitesh Narayan Lal wrote: >> On 3/6/19 11:09 AM, Michael S. Tsirkin wrote: >>> On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote: >>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>> >>>> Benefit: >>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>> >>>> Changelog in v9: >>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>>> * All the pages are reported asynchronously to the host via virtio driver. >>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>>> >>>> Pending items: >>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >>>> * Compare reporting free pages via vring with vhost. >>>> * Decide between MADV_DONTNEED and MADV_FREE. >>>> * Analyze overall performance impact due to guest free page hinting. >>>> * Come up with proper/traceable error-message/logs. >>>> >>>> Tests: >>>> 1. Use-case - Number of guests we can launch >>>> >>>> NUMA Nodes = 1 with 15 GB memory >>>> Guest Memory = 5 GB >>>> Number of cores in guest = 1 >>>> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. >>>> Procedure = >>>> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. >>>> >>>> Results: >>>> Without hinting = 3 >>>> With hinting = 5 >>>> >>>> 2. Hackbench >>>> Guest Memory = 5 GB >>>> Number of cores = 4 >>>> Number of tasks Time with Hinting Time without Hinting >>>> 4000 19.540 17.818 >>>> >>> How about memhog btw? >>> Alex reported: >>> >>> My testing up till now has consisted of setting up 4 8GB VMs on a system >>> with 32GB of memory and 4GB of swap. To stress the memory on the system I >>> would run "memhog 8G" sequentially on each of the guests and observe how >>> long it took to complete the run. The observed behavior is that on the >>> systems with these patches applied in both the guest and on the host I was >>> able to complete the test with a time of 5 to 7 seconds per guest. 
On a >>> system without these patches the time ranged from 7 to 49 seconds per >>> guest. I am assuming the variability is due to time being spent writing >>> pages out to disk in order to free up space for the guest. >>> >> Here are the results: >> >> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >> total memory of 15GB and no swap. In each of the guest, memhog is run >> with 5GB. Post-execution of memhog, Host memory usage is monitored by >> using Free command. >> >> Without Hinting: >> Time of execution Host used memory >> Guest 1: 45 seconds 5.4 GB >> Guest 2: 45 seconds 10 GB >> Guest 3: 1 minute 15 GB >> >> With Hinting: >> Time of execution Host used memory >> Guest 1: 49 seconds 2.4 GB >> Guest 2: 40 seconds 4.3 GB >> Guest 3: 50 seconds 6.3 GB > OK so no improvement. OTOH Alex's patches cut time down to 5-7 seconds > which seems better. Want to try testing Alex's patches for comparison? > I realized that the last time I reported the memhog numbers, I didn't enable the swap due to which the actual benefits of the series were not shown. I have re-run the test by including some of the changes suggested by Alexander and David: * Reduced the size of the per-cpu array to 32 and minimum hinting threshold to 16. * Reported length of isolated pages along with start pfn, instead of the order from the guest. * Used the reported length to madvise the entire length of address instead of a single 4K page. * Replaced MADV_DONTNEED with MADV_FREE. Setup for the test: NUMA node:1 Memory: 15GB Swap: 4GB Guest memory: 6GB Number of core: 1 Process: A guest is launched and memhog is run with 6GB. As its execution is over next guest is launched. Everytime memhog execution time is monitored. Results: Without Hinting: Time of execution Guest1: 22s Guest2: 24s Guest3: 1m29s With Hinting: Time of execution Guest1: 24s Guest2: 25s Guest3: 28s When hinting is enabled swap space is not used until memhog with 6GB is ran in 6th guest.
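For reference, the host-side half of the "report pfn plus length, madvise the whole range with MADV_FREE" change boils down to something like the sketch below; the helper name is invented, and it assumes the reported guest range has already been translated to a page-aligned host virtual address:

#include <sys/mman.h>
#include <errno.h>
#include <stdio.h>

/* Discard one reported free range with a single madvise() call instead
 * of one call per 4K page. */
static int hint_discard_range(void *hva, size_t len)
{
	int ret = madvise(hva, len, MADV_FREE);

	/* MADV_FREE needs a reasonably recent kernel; fall back if absent. */
	if (ret < 0 && errno == EINVAL)
		ret = madvise(hva, len, MADV_DONTNEED);

	if (ret < 0)
		perror("madvise");
	return ret;
}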
On Thu, Mar 14, 2019 at 9:43 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > > > On 3/6/19 1:12 PM, Michael S. Tsirkin wrote: > > On Wed, Mar 06, 2019 at 01:07:50PM -0500, Nitesh Narayan Lal wrote: > >> On 3/6/19 11:09 AM, Michael S. Tsirkin wrote: > >>> On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote: > >>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. > >>>> > >>>> Benefit: > >>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). > >>>> > >>>> Changelog in v9: > >>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. > >>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. > >>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. > >>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. > >>>> * Dynamically allocated space is used to hold the isolated guest free pages. > >>>> * All the pages are reported asynchronously to the host via virtio driver. > >>>> * Pages are returned back to the guest buddy free list only when the host response is received. > >>>> > >>>> Pending items: > >>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. > >>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) > >>>> * Compare reporting free pages via vring with vhost. > >>>> * Decide between MADV_DONTNEED and MADV_FREE. > >>>> * Analyze overall performance impact due to guest free page hinting. > >>>> * Come up with proper/traceable error-message/logs. > >>>> > >>>> Tests: > >>>> 1. Use-case - Number of guests we can launch > >>>> > >>>> NUMA Nodes = 1 with 15 GB memory > >>>> Guest Memory = 5 GB > >>>> Number of cores in guest = 1 > >>>> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. > >>>> Procedure = > >>>> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. > >>>> > >>>> Results: > >>>> Without hinting = 3 > >>>> With hinting = 5 > >>>> > >>>> 2. Hackbench > >>>> Guest Memory = 5 GB > >>>> Number of cores = 4 > >>>> Number of tasks Time with Hinting Time without Hinting > >>>> 4000 19.540 17.818 > >>>> > >>> How about memhog btw? > >>> Alex reported: > >>> > >>> My testing up till now has consisted of setting up 4 8GB VMs on a system > >>> with 32GB of memory and 4GB of swap. To stress the memory on the system I > >>> would run "memhog 8G" sequentially on each of the guests and observe how > >>> long it took to complete the run. 
The observed behavior is that on the > >>> systems with these patches applied in both the guest and on the host I was > >>> able to complete the test with a time of 5 to 7 seconds per guest. On a > >>> system without these patches the time ranged from 7 to 49 seconds per > >>> guest. I am assuming the variability is due to time being spent writing > >>> pages out to disk in order to free up space for the guest. > >>> > >> Here are the results: > >> > >> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with > >> total memory of 15GB and no swap. In each of the guest, memhog is run > >> with 5GB. Post-execution of memhog, Host memory usage is monitored by > >> using Free command. > >> > >> Without Hinting: > >> Time of execution Host used memory > >> Guest 1: 45 seconds 5.4 GB > >> Guest 2: 45 seconds 10 GB > >> Guest 3: 1 minute 15 GB > >> > >> With Hinting: > >> Time of execution Host used memory > >> Guest 1: 49 seconds 2.4 GB > >> Guest 2: 40 seconds 4.3 GB > >> Guest 3: 50 seconds 6.3 GB > > OK so no improvement. OTOH Alex's patches cut time down to 5-7 seconds > > which seems better. Want to try testing Alex's patches for comparison? > > > I realized that the last time I reported the memhog numbers, I didn't > enable the swap due to which the actual benefits of the series were not > shown. > I have re-run the test by including some of the changes suggested by > Alexander and David: > * Reduced the size of the per-cpu array to 32 and minimum hinting > threshold to 16. > * Reported length of isolated pages along with start pfn, instead of > the order from the guest. > * Used the reported length to madvise the entire length of address > instead of a single 4K page. > * Replaced MADV_DONTNEED with MADV_FREE. > > Setup for the test: > NUMA node:1 > Memory: 15GB > Swap: 4GB > Guest memory: 6GB > Number of core: 1 > > Process: A guest is launched and memhog is run with 6GB. As its > execution is over next guest is launched. Everytime memhog execution > time is monitored. > Results: > Without Hinting: > Time of execution > Guest1: 22s > Guest2: 24s > Guest3: 1m29s > > With Hinting: > Time of execution > Guest1: 24s > Guest2: 25s > Guest3: 28s > > When hinting is enabled swap space is not used until memhog with 6GB is > ran in 6th guest. So one change you may want to make to your test setup would be to launch the tests sequentially after all the guests all up, instead of combining the test and guest bring-up. In addition you could run through the guests more than once to determine a more-or-less steady state in terms of the performance as you move between the guests after they have hit the point of having to either swap or pull MADV_FREE pages.
On 3/14/19 12:58 PM, Alexander Duyck wrote: > On Thu, Mar 14, 2019 at 9:43 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> >> On 3/6/19 1:12 PM, Michael S. Tsirkin wrote: >>> On Wed, Mar 06, 2019 at 01:07:50PM -0500, Nitesh Narayan Lal wrote: >>>> On 3/6/19 11:09 AM, Michael S. Tsirkin wrote: >>>>> On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote: >>>>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>>>> >>>>>> Benefit: >>>>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>>>> >>>>>> Changelog in v9: >>>>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>>>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>>>>> * All the pages are reported asynchronously to the host via virtio driver. >>>>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>>>>> >>>>>> Pending items: >>>>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >>>>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >>>>>> * Compare reporting free pages via vring with vhost. >>>>>> * Decide between MADV_DONTNEED and MADV_FREE. >>>>>> * Analyze overall performance impact due to guest free page hinting. >>>>>> * Come up with proper/traceable error-message/logs. >>>>>> >>>>>> Tests: >>>>>> 1. Use-case - Number of guests we can launch >>>>>> >>>>>> NUMA Nodes = 1 with 15 GB memory >>>>>> Guest Memory = 5 GB >>>>>> Number of cores in guest = 1 >>>>>> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. >>>>>> Procedure = >>>>>> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. >>>>>> >>>>>> Results: >>>>>> Without hinting = 3 >>>>>> With hinting = 5 >>>>>> >>>>>> 2. Hackbench >>>>>> Guest Memory = 5 GB >>>>>> Number of cores = 4 >>>>>> Number of tasks Time with Hinting Time without Hinting >>>>>> 4000 19.540 17.818 >>>>>> >>>>> How about memhog btw? >>>>> Alex reported: >>>>> >>>>> My testing up till now has consisted of setting up 4 8GB VMs on a system >>>>> with 32GB of memory and 4GB of swap. To stress the memory on the system I >>>>> would run "memhog 8G" sequentially on each of the guests and observe how >>>>> long it took to complete the run. 
The observed behavior is that on the >>>>> systems with these patches applied in both the guest and on the host I was >>>>> able to complete the test with a time of 5 to 7 seconds per guest. On a >>>>> system without these patches the time ranged from 7 to 49 seconds per >>>>> guest. I am assuming the variability is due to time being spent writing >>>>> pages out to disk in order to free up space for the guest. >>>>> >>>> Here are the results: >>>> >>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >>>> total memory of 15GB and no swap. In each of the guest, memhog is run >>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by >>>> using Free command. >>>> >>>> Without Hinting: >>>> Time of execution Host used memory >>>> Guest 1: 45 seconds 5.4 GB >>>> Guest 2: 45 seconds 10 GB >>>> Guest 3: 1 minute 15 GB >>>> >>>> With Hinting: >>>> Time of execution Host used memory >>>> Guest 1: 49 seconds 2.4 GB >>>> Guest 2: 40 seconds 4.3 GB >>>> Guest 3: 50 seconds 6.3 GB >>> OK so no improvement. OTOH Alex's patches cut time down to 5-7 seconds >>> which seems better. Want to try testing Alex's patches for comparison? >>> >> I realized that the last time I reported the memhog numbers, I didn't >> enable the swap due to which the actual benefits of the series were not >> shown. >> I have re-run the test by including some of the changes suggested by >> Alexander and David: >> * Reduced the size of the per-cpu array to 32 and minimum hinting >> threshold to 16. >> * Reported length of isolated pages along with start pfn, instead of >> the order from the guest. >> * Used the reported length to madvise the entire length of address >> instead of a single 4K page. >> * Replaced MADV_DONTNEED with MADV_FREE. >> >> Setup for the test: >> NUMA node:1 >> Memory: 15GB >> Swap: 4GB >> Guest memory: 6GB >> Number of core: 1 >> >> Process: A guest is launched and memhog is run with 6GB. As its >> execution is over next guest is launched. Everytime memhog execution >> time is monitored. >> Results: >> Without Hinting: >> Time of execution >> Guest1: 22s >> Guest2: 24s >> Guest3: 1m29s >> >> With Hinting: >> Time of execution >> Guest1: 24s >> Guest2: 25s >> Guest3: 28s >> >> When hinting is enabled swap space is not used until memhog with 6GB is >> ran in 6th guest. > So one change you may want to make to your test setup would be to > launch the tests sequentially after all the guests all up, instead of > combining the test and guest bring-up. In addition you could run > through the guests more than once to determine a more-or-less steady > state in terms of the performance as you move between the guests after > they have hit the point of having to either swap or pull MADV_FREE > pages. I tried running memhog as you suggested, here are the results: Setup for the test: NUMA node:1 Memory: 15GB Swap: 4GB Guest memory: 6GB Number of core: 1 Process: 3 guests are launched and memhog is run with 6GB. Results are monitored after 1st-time execution of memhog. Memhog is launched sequentially in each of the guests and time is observed after the execution of all 3 memhog is over. Results: Without Hinting Time of Execution 1. 6m48s 2. 6m9s With Hinting Array size:16 Minimum Threshold:8 1. 2m57s 2. 2m20s The memhog execution time in the case of hinting is still not that low as we would have expected. This is due to the usage of swap space. Although wrt to non-hinting when swap used space is around 3.5G, with hinting it remains to around 1.1-1.5G. 
I did try using a zone free page barrier which prevented hinting when the number of free pages of order HINTING_ORDER goes below 256. This further brings down the swap usage to 100-150 MB. The tricky part of this approach is to configure this barrier condition for different guests. Array size:16 Minimum Threshold:8 1. 1m16s 2. 1m41s Note: Memhog time does seem to vary a little bit on every boot, with or without hinting.
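The free page barrier mentioned above is roughly the check below (a sketch, not the actual patch); the function name is invented, 256 is the threshold used in this experiment, and the hinting order is written using the series' FREE_PAGE_HINTING_MIN_ORDER:

#include <linux/mmzone.h>

#define FREE_PAGE_HINTING_MIN_ORDER	(MAX_ORDER - 1)	/* as in this series */
#define HINTING_MIN_FREE_PAGES		256		/* barrier used in this test */

/* Skip capturing freed pages for hinting unless the zone still has a
 * healthy reserve of free pages at the hinting order. Reading nr_free
 * without the zone lock is racy, but good enough for a heuristic. */
static bool zone_allows_hinting(struct zone *zone)
{
	return zone->free_area[FREE_PAGE_HINTING_MIN_ORDER].nr_free >
	       HINTING_MIN_FREE_PAGES;
}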
On 18.03.19 16:57, Nitesh Narayan Lal wrote: > On 3/14/19 12:58 PM, Alexander Duyck wrote: >> On Thu, Mar 14, 2019 at 9:43 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>> >>> On 3/6/19 1:12 PM, Michael S. Tsirkin wrote: >>>> On Wed, Mar 06, 2019 at 01:07:50PM -0500, Nitesh Narayan Lal wrote: >>>>> On 3/6/19 11:09 AM, Michael S. Tsirkin wrote: >>>>>> On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote: >>>>>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>>>>> >>>>>>> Benefit: >>>>>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>>>>> >>>>>>> Changelog in v9: >>>>>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>>>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>>>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>>>>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>>>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>>>>>> * All the pages are reported asynchronously to the host via virtio driver. >>>>>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>>>>>> >>>>>>> Pending items: >>>>>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >>>>>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >>>>>>> * Compare reporting free pages via vring with vhost. >>>>>>> * Decide between MADV_DONTNEED and MADV_FREE. >>>>>>> * Analyze overall performance impact due to guest free page hinting. >>>>>>> * Come up with proper/traceable error-message/logs. >>>>>>> >>>>>>> Tests: >>>>>>> 1. Use-case - Number of guests we can launch >>>>>>> >>>>>>> NUMA Nodes = 1 with 15 GB memory >>>>>>> Guest Memory = 5 GB >>>>>>> Number of cores in guest = 1 >>>>>>> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. >>>>>>> Procedure = >>>>>>> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. >>>>>>> >>>>>>> Results: >>>>>>> Without hinting = 3 >>>>>>> With hinting = 5 >>>>>>> >>>>>>> 2. Hackbench >>>>>>> Guest Memory = 5 GB >>>>>>> Number of cores = 4 >>>>>>> Number of tasks Time with Hinting Time without Hinting >>>>>>> 4000 19.540 17.818 >>>>>>> >>>>>> How about memhog btw? >>>>>> Alex reported: >>>>>> >>>>>> My testing up till now has consisted of setting up 4 8GB VMs on a system >>>>>> with 32GB of memory and 4GB of swap. 
To stress the memory on the system I >>>>>> would run "memhog 8G" sequentially on each of the guests and observe how >>>>>> long it took to complete the run. The observed behavior is that on the >>>>>> systems with these patches applied in both the guest and on the host I was >>>>>> able to complete the test with a time of 5 to 7 seconds per guest. On a >>>>>> system without these patches the time ranged from 7 to 49 seconds per >>>>>> guest. I am assuming the variability is due to time being spent writing >>>>>> pages out to disk in order to free up space for the guest. >>>>>> >>>>> Here are the results: >>>>> >>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >>>>> total memory of 15GB and no swap. In each of the guest, memhog is run >>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by >>>>> using Free command. >>>>> >>>>> Without Hinting: >>>>> Time of execution Host used memory >>>>> Guest 1: 45 seconds 5.4 GB >>>>> Guest 2: 45 seconds 10 GB >>>>> Guest 3: 1 minute 15 GB >>>>> >>>>> With Hinting: >>>>> Time of execution Host used memory >>>>> Guest 1: 49 seconds 2.4 GB >>>>> Guest 2: 40 seconds 4.3 GB >>>>> Guest 3: 50 seconds 6.3 GB >>>> OK so no improvement. OTOH Alex's patches cut time down to 5-7 seconds >>>> which seems better. Want to try testing Alex's patches for comparison? >>>> >>> I realized that the last time I reported the memhog numbers, I didn't >>> enable the swap due to which the actual benefits of the series were not >>> shown. >>> I have re-run the test by including some of the changes suggested by >>> Alexander and David: >>> * Reduced the size of the per-cpu array to 32 and minimum hinting >>> threshold to 16. >>> * Reported length of isolated pages along with start pfn, instead of >>> the order from the guest. >>> * Used the reported length to madvise the entire length of address >>> instead of a single 4K page. >>> * Replaced MADV_DONTNEED with MADV_FREE. >>> >>> Setup for the test: >>> NUMA node:1 >>> Memory: 15GB >>> Swap: 4GB >>> Guest memory: 6GB >>> Number of core: 1 >>> >>> Process: A guest is launched and memhog is run with 6GB. As its >>> execution is over next guest is launched. Everytime memhog execution >>> time is monitored. >>> Results: >>> Without Hinting: >>> Time of execution >>> Guest1: 22s >>> Guest2: 24s >>> Guest3: 1m29s >>> >>> With Hinting: >>> Time of execution >>> Guest1: 24s >>> Guest2: 25s >>> Guest3: 28s >>> >>> When hinting is enabled swap space is not used until memhog with 6GB is >>> ran in 6th guest. >> So one change you may want to make to your test setup would be to >> launch the tests sequentially after all the guests all up, instead of >> combining the test and guest bring-up. In addition you could run >> through the guests more than once to determine a more-or-less steady >> state in terms of the performance as you move between the guests after >> they have hit the point of having to either swap or pull MADV_FREE >> pages. > I tried running memhog as you suggested, here are the results: > Setup for the test: > NUMA node:1 > Memory: 15GB > Swap: 4GB > Guest memory: 6GB > Number of core: 1 > > Process: 3 guests are launched and memhog is run with 6GB. Results are > monitored after 1st-time execution of memhog. Memhog is launched > sequentially in each of the guests and time is observed after the > execution of all 3 memhog is over. > > Results: > Without Hinting > Time of Execution > 1. 6m48s > 2. 6m9s > > With Hinting > Array size:16 Minimum Threshold:8 > 1. 2m57s > 2. 
2m20s > > The memhog execution time in the case of hinting is still not that low > as we would have expected. This is due to the usage of swap space. > Although wrt to non-hinting when swap used space is around 3.5G, with > hinting it remains to around 1.1-1.5G. > I did try using a zone free page barrier which prevented hinting when > free pages of order HINTING_ORDER goes below 256. This further brings > down the swap usage to 100-150 MB. The tricky part of this approach is > to configure this barrier condition for different guests. > > Array size:16 Minimum Threshold:8 > 1. 1m16s > 2. 1m41s > > Note: Memhog time does seem to vary a little bit on every boot with or > without hinting. > I don't quite understand yet why "hinting more pages" (no free page barrier) should result in a higher swap usage in the hypervisor (1.1-1.5GB vs. 100-150 MB). If we are "hinting more pages" I would have guessed that runtime could get slower, but not that we need more swap. One theory: If you hint all MAX_ORDER - 1 pages, at one point it could be that all "remaining" free pages are currently isolated to be hinted. As MM needs more pages for a process, it will fallback to using "MAX_ORDER - 2" pages and so on. These pages, when they are freed, you won't hint anymore unless they get merged. But after all they won't get merged because they can't be merged (otherwise they wouldn't be "MAX_ORDER - 2" after all right from the beginning). Try hinting a smaller granularity to see if this could actually be the case.
On 3/19/19 9:33 AM, David Hildenbrand wrote: > On 18.03.19 16:57, Nitesh Narayan Lal wrote: >> On 3/14/19 12:58 PM, Alexander Duyck wrote: >>> On Thu, Mar 14, 2019 at 9:43 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>> On 3/6/19 1:12 PM, Michael S. Tsirkin wrote: >>>>> On Wed, Mar 06, 2019 at 01:07:50PM -0500, Nitesh Narayan Lal wrote: >>>>>> On 3/6/19 11:09 AM, Michael S. Tsirkin wrote: >>>>>>> On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote: >>>>>>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>>>>>> >>>>>>>> Benefit: >>>>>>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>>>>>> >>>>>>>> Changelog in v9: >>>>>>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>>>>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>>>>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>>>>>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>>>>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>>>>>>> * All the pages are reported asynchronously to the host via virtio driver. >>>>>>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>>>>>>> >>>>>>>> Pending items: >>>>>>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >>>>>>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >>>>>>>> * Compare reporting free pages via vring with vhost. >>>>>>>> * Decide between MADV_DONTNEED and MADV_FREE. >>>>>>>> * Analyze overall performance impact due to guest free page hinting. >>>>>>>> * Come up with proper/traceable error-message/logs. >>>>>>>> >>>>>>>> Tests: >>>>>>>> 1. Use-case - Number of guests we can launch >>>>>>>> >>>>>>>> NUMA Nodes = 1 with 15 GB memory >>>>>>>> Guest Memory = 5 GB >>>>>>>> Number of cores in guest = 1 >>>>>>>> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. >>>>>>>> Procedure = >>>>>>>> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. >>>>>>>> >>>>>>>> Results: >>>>>>>> Without hinting = 3 >>>>>>>> With hinting = 5 >>>>>>>> >>>>>>>> 2. Hackbench >>>>>>>> Guest Memory = 5 GB >>>>>>>> Number of cores = 4 >>>>>>>> Number of tasks Time with Hinting Time without Hinting >>>>>>>> 4000 19.540 17.818 >>>>>>>> >>>>>>> How about memhog btw? 
>>>>>>> Alex reported: >>>>>>> >>>>>>> My testing up till now has consisted of setting up 4 8GB VMs on a system >>>>>>> with 32GB of memory and 4GB of swap. To stress the memory on the system I >>>>>>> would run "memhog 8G" sequentially on each of the guests and observe how >>>>>>> long it took to complete the run. The observed behavior is that on the >>>>>>> systems with these patches applied in both the guest and on the host I was >>>>>>> able to complete the test with a time of 5 to 7 seconds per guest. On a >>>>>>> system without these patches the time ranged from 7 to 49 seconds per >>>>>>> guest. I am assuming the variability is due to time being spent writing >>>>>>> pages out to disk in order to free up space for the guest. >>>>>>> >>>>>> Here are the results: >>>>>> >>>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >>>>>> total memory of 15GB and no swap. In each of the guest, memhog is run >>>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by >>>>>> using Free command. >>>>>> >>>>>> Without Hinting: >>>>>> Time of execution Host used memory >>>>>> Guest 1: 45 seconds 5.4 GB >>>>>> Guest 2: 45 seconds 10 GB >>>>>> Guest 3: 1 minute 15 GB >>>>>> >>>>>> With Hinting: >>>>>> Time of execution Host used memory >>>>>> Guest 1: 49 seconds 2.4 GB >>>>>> Guest 2: 40 seconds 4.3 GB >>>>>> Guest 3: 50 seconds 6.3 GB >>>>> OK so no improvement. OTOH Alex's patches cut time down to 5-7 seconds >>>>> which seems better. Want to try testing Alex's patches for comparison? >>>>> >>>> I realized that the last time I reported the memhog numbers, I didn't >>>> enable the swap due to which the actual benefits of the series were not >>>> shown. >>>> I have re-run the test by including some of the changes suggested by >>>> Alexander and David: >>>> * Reduced the size of the per-cpu array to 32 and minimum hinting >>>> threshold to 16. >>>> * Reported length of isolated pages along with start pfn, instead of >>>> the order from the guest. >>>> * Used the reported length to madvise the entire length of address >>>> instead of a single 4K page. >>>> * Replaced MADV_DONTNEED with MADV_FREE. >>>> >>>> Setup for the test: >>>> NUMA node:1 >>>> Memory: 15GB >>>> Swap: 4GB >>>> Guest memory: 6GB >>>> Number of core: 1 >>>> >>>> Process: A guest is launched and memhog is run with 6GB. As its >>>> execution is over next guest is launched. Everytime memhog execution >>>> time is monitored. >>>> Results: >>>> Without Hinting: >>>> Time of execution >>>> Guest1: 22s >>>> Guest2: 24s >>>> Guest3: 1m29s >>>> >>>> With Hinting: >>>> Time of execution >>>> Guest1: 24s >>>> Guest2: 25s >>>> Guest3: 28s >>>> >>>> When hinting is enabled swap space is not used until memhog with 6GB is >>>> ran in 6th guest. >>> So one change you may want to make to your test setup would be to >>> launch the tests sequentially after all the guests all up, instead of >>> combining the test and guest bring-up. In addition you could run >>> through the guests more than once to determine a more-or-less steady >>> state in terms of the performance as you move between the guests after >>> they have hit the point of having to either swap or pull MADV_FREE >>> pages. >> I tried running memhog as you suggested, here are the results: >> Setup for the test: >> NUMA node:1 >> Memory: 15GB >> Swap: 4GB >> Guest memory: 6GB >> Number of core: 1 >> >> Process: 3 guests are launched and memhog is run with 6GB. Results are >> monitored after 1st-time execution of memhog. 
Memhog is launched >> sequentially in each of the guests and time is observed after the >> execution of all 3 memhog is over. >> >> Results: >> Without Hinting >> Time of Execution >> 1. 6m48s >> 2. 6m9s >> >> With Hinting >> Array size:16 Minimum Threshold:8 >> 1. 2m57s >> 2. 2m20s >> >> The memhog execution time in the case of hinting is still not that low >> as we would have expected. This is due to the usage of swap space. >> Although wrt to non-hinting when swap used space is around 3.5G, with >> hinting it remains to around 1.1-1.5G. >> I did try using a zone free page barrier which prevented hinting when >> free pages of order HINTING_ORDER goes below 256. This further brings >> down the swap usage to 100-150 MB. The tricky part of this approach is >> to configure this barrier condition for different guests. >> >> Array size:16 Minimum Threshold:8 >> 1. 1m16s >> 2. 1m41s >> >> Note: Memhog time does seem to vary a little bit on every boot with or >> without hinting. >> > I don't quite understand yet why "hinting more pages" (no free page > barrier) should result in a higher swap usage in the hypervisor > (1.1-1.5GB vs. 100-150 MB). If we are "hinting more pages" I would have > guessed that runtime could get slower, but not that we need more swap. > > One theory: > > If you hint all MAX_ORDER - 1 pages, at one point it could be that all > "remaining" free pages are currently isolated to be hinted. As MM needs > more pages for a process, it will fallback to using "MAX_ORDER - 2" > pages and so on. These pages, when they are freed, you won't hint > anymore unless they get merged. But after all they won't get merged > because they can't be merged (otherwise they wouldn't be "MAX_ORDER - 2" > after all right from the beginning). > > Try hinting a smaller granularity to see if this could actually be the case. So I have two questions in my mind after looking at the results now: 1. Why is swap coming into the picture when hinting is enabled? 2. The same as what you have raised. For the 1st question, I think the answer is (correct me if I am wrong): memhog, while writing the memory, does free memory, but the pages it frees are of a lower order and don't merge until the memhog write completes. After that we do get MAX_ORDER - 1 pages from the buddy, resulting in hinting. As all 3 memhog runs are executing in parallel, we don't get free memory until one of them completes. This does explain why, when 3 guests of 6GB each on a 15GB host try to run memhog with 6GB in parallel, swap comes into the picture even if hinting is enabled. It doesn't explain why putting a barrier or avoiding hinting reduced the swap usage. It seems I possibly had a wrong impression of the delaying-hinting idea which we discussed, as I was observing the swap value at the end of the memhog execution, which is logically incorrect. I will re-run the test and observe the highest swap usage during the entire execution of memhog for hinting vs. non-hinting.
On 3/19/19 1:38 PM, Alexander Duyck wrote: > On Tue, Mar 19, 2019 at 9:04 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> On 3/19/19 9:33 AM, David Hildenbrand wrote: >>> On 18.03.19 16:57, Nitesh Narayan Lal wrote: >>>> On 3/14/19 12:58 PM, Alexander Duyck wrote: >>>>> On Thu, Mar 14, 2019 at 9:43 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>>>> On 3/6/19 1:12 PM, Michael S. Tsirkin wrote: >>>>>>> On Wed, Mar 06, 2019 at 01:07:50PM -0500, Nitesh Narayan Lal wrote: >>>>>>>> On 3/6/19 11:09 AM, Michael S. Tsirkin wrote: >>>>>>>>> On Wed, Mar 06, 2019 at 10:50:42AM -0500, Nitesh Narayan Lal wrote: >>>>>>>>>> The following patch-set proposes an efficient mechanism for handing freed memory between the guest and the host. It enables the guests with no page cache to rapidly free and reclaims memory to and from the host respectively. >>>>>>>>>> >>>>>>>>>> Benefit: >>>>>>>>>> With this patch-series, in our test-case, executed on a single system and single NUMA node with 15GB memory, we were able to successfully launch 5 guests(each with 5 GB memory) when page hinting was enabled and 3 without it. (Detailed explanation of the test procedure is provided at the bottom under Test - 1). >>>>>>>>>> >>>>>>>>>> Changelog in v9: >>>>>>>>>> * Guest free page hinting hook is now invoked after a page has been merged in the buddy. >>>>>>>>>> * Free pages only with order FREE_PAGE_HINTING_MIN_ORDER(currently defined as MAX_ORDER - 1) are captured. >>>>>>>>>> * Removed kthread which was earlier used to perform the scanning, isolation & reporting of free pages. >>>>>>>>>> * Pages, captured in the per cpu array are sorted based on the zone numbers. This is to avoid redundancy of acquiring zone locks. >>>>>>>>>> * Dynamically allocated space is used to hold the isolated guest free pages. >>>>>>>>>> * All the pages are reported asynchronously to the host via virtio driver. >>>>>>>>>> * Pages are returned back to the guest buddy free list only when the host response is received. >>>>>>>>>> >>>>>>>>>> Pending items: >>>>>>>>>> * Make sure that the guest free page hinting's current implementation doesn't break hugepages or device assigned guests. >>>>>>>>>> * Follow up on VIRTIO_BALLOON_F_PAGE_POISON's device side support. (It is currently missing) >>>>>>>>>> * Compare reporting free pages via vring with vhost. >>>>>>>>>> * Decide between MADV_DONTNEED and MADV_FREE. >>>>>>>>>> * Analyze overall performance impact due to guest free page hinting. >>>>>>>>>> * Come up with proper/traceable error-message/logs. >>>>>>>>>> >>>>>>>>>> Tests: >>>>>>>>>> 1. Use-case - Number of guests we can launch >>>>>>>>>> >>>>>>>>>> NUMA Nodes = 1 with 15 GB memory >>>>>>>>>> Guest Memory = 5 GB >>>>>>>>>> Number of cores in guest = 1 >>>>>>>>>> Workload = test allocation program allocates 4GB memory, touches it via memset and exits. >>>>>>>>>> Procedure = >>>>>>>>>> The first guest is launched and once its console is up, the test allocation program is executed with 4 GB memory request (Due to this the guest occupies almost 4-5 GB of memory in the host in a system without page hinting). Once this program exits at that time another guest is launched in the host and the same process is followed. We continue launching the guests until a guest gets killed due to low memory condition in the host. >>>>>>>>>> >>>>>>>>>> Results: >>>>>>>>>> Without hinting = 3 >>>>>>>>>> With hinting = 5 >>>>>>>>>> >>>>>>>>>> 2. 
Hackbench >>>>>>>>>> Guest Memory = 5 GB >>>>>>>>>> Number of cores = 4 >>>>>>>>>> Number of tasks Time with Hinting Time without Hinting >>>>>>>>>> 4000 19.540 17.818 >>>>>>>>>> >>>>>>>>> How about memhog btw? >>>>>>>>> Alex reported: >>>>>>>>> >>>>>>>>> My testing up till now has consisted of setting up 4 8GB VMs on a system >>>>>>>>> with 32GB of memory and 4GB of swap. To stress the memory on the system I >>>>>>>>> would run "memhog 8G" sequentially on each of the guests and observe how >>>>>>>>> long it took to complete the run. The observed behavior is that on the >>>>>>>>> systems with these patches applied in both the guest and on the host I was >>>>>>>>> able to complete the test with a time of 5 to 7 seconds per guest. On a >>>>>>>>> system without these patches the time ranged from 7 to 49 seconds per >>>>>>>>> guest. I am assuming the variability is due to time being spent writing >>>>>>>>> pages out to disk in order to free up space for the guest. >>>>>>>>> >>>>>>>> Here are the results: >>>>>>>> >>>>>>>> Procedure: 3 Guests of size 5GB is launched on a single NUMA node with >>>>>>>> total memory of 15GB and no swap. In each of the guest, memhog is run >>>>>>>> with 5GB. Post-execution of memhog, Host memory usage is monitored by >>>>>>>> using Free command. >>>>>>>> >>>>>>>> Without Hinting: >>>>>>>> Time of execution Host used memory >>>>>>>> Guest 1: 45 seconds 5.4 GB >>>>>>>> Guest 2: 45 seconds 10 GB >>>>>>>> Guest 3: 1 minute 15 GB >>>>>>>> >>>>>>>> With Hinting: >>>>>>>> Time of execution Host used memory >>>>>>>> Guest 1: 49 seconds 2.4 GB >>>>>>>> Guest 2: 40 seconds 4.3 GB >>>>>>>> Guest 3: 50 seconds 6.3 GB >>>>>>> OK so no improvement. OTOH Alex's patches cut time down to 5-7 seconds >>>>>>> which seems better. Want to try testing Alex's patches for comparison? >>>>>>> >>>>>> I realized that the last time I reported the memhog numbers, I didn't >>>>>> enable the swap due to which the actual benefits of the series were not >>>>>> shown. >>>>>> I have re-run the test by including some of the changes suggested by >>>>>> Alexander and David: >>>>>> * Reduced the size of the per-cpu array to 32 and minimum hinting >>>>>> threshold to 16. >>>>>> * Reported length of isolated pages along with start pfn, instead of >>>>>> the order from the guest. >>>>>> * Used the reported length to madvise the entire length of address >>>>>> instead of a single 4K page. >>>>>> * Replaced MADV_DONTNEED with MADV_FREE. >>>>>> >>>>>> Setup for the test: >>>>>> NUMA node:1 >>>>>> Memory: 15GB >>>>>> Swap: 4GB >>>>>> Guest memory: 6GB >>>>>> Number of core: 1 >>>>>> >>>>>> Process: A guest is launched and memhog is run with 6GB. As its >>>>>> execution is over next guest is launched. Everytime memhog execution >>>>>> time is monitored. >>>>>> Results: >>>>>> Without Hinting: >>>>>> Time of execution >>>>>> Guest1: 22s >>>>>> Guest2: 24s >>>>>> Guest3: 1m29s >>>>>> >>>>>> With Hinting: >>>>>> Time of execution >>>>>> Guest1: 24s >>>>>> Guest2: 25s >>>>>> Guest3: 28s >>>>>> >>>>>> When hinting is enabled swap space is not used until memhog with 6GB is >>>>>> ran in 6th guest. >>>>> So one change you may want to make to your test setup would be to >>>>> launch the tests sequentially after all the guests all up, instead of >>>>> combining the test and guest bring-up. 
In addition you could run >>>>> through the guests more than once to determine a more-or-less steady >>>>> state in terms of the performance as you move between the guests after >>>>> they have hit the point of having to either swap or pull MADV_FREE >>>>> pages. >>>> I tried running memhog as you suggested, here are the results: >>>> Setup for the test: >>>> NUMA node:1 >>>> Memory: 15GB >>>> Swap: 4GB >>>> Guest memory: 6GB >>>> Number of core: 1 >>>> >>>> Process: 3 guests are launched and memhog is run with 6GB. Results are >>>> monitored after 1st-time execution of memhog. Memhog is launched >>>> sequentially in each of the guests and time is observed after the >>>> execution of all 3 memhog is over. >>>> >>>> Results: >>>> Without Hinting >>>> Time of Execution >>>> 1. 6m48s >>>> 2. 6m9s >>>> >>>> With Hinting >>>> Array size:16 Minimum Threshold:8 >>>> 1. 2m57s >>>> 2. 2m20s >>>> >>>> The memhog execution time in the case of hinting is still not that low >>>> as we would have expected. This is due to the usage of swap space. >>>> Although wrt to non-hinting when swap used space is around 3.5G, with >>>> hinting it remains to around 1.1-1.5G. >>>> I did try using a zone free page barrier which prevented hinting when >>>> free pages of order HINTING_ORDER goes below 256. This further brings >>>> down the swap usage to 100-150 MB. The tricky part of this approach is >>>> to configure this barrier condition for different guests. >>>> >>>> Array size:16 Minimum Threshold:8 >>>> 1. 1m16s >>>> 2. 1m41s >>>> >>>> Note: Memhog time does seem to vary a little bit on every boot with or >>>> without hinting. >>>> >>> I don't quite understand yet why "hinting more pages" (no free page >>> barrier) should result in a higher swap usage in the hypervisor >>> (1.1-1.5GB vs. 100-150 MB). If we are "hinting more pages" I would have >>> guessed that runtime could get slower, but not that we need more swap. >>> >>> One theory: >>> >>> If you hint all MAX_ORDER - 1 pages, at one point it could be that all >>> "remaining" free pages are currently isolated to be hinted. As MM needs >>> more pages for a process, it will fallback to using "MAX_ORDER - 2" >>> pages and so on. These pages, when they are freed, you won't hint >>> anymore unless they get merged. But after all they won't get merged >>> because they can't be merged (otherwise they wouldn't be "MAX_ORDER - 2" >>> after all right from the beginning). >>> >>> Try hinting a smaller granularity to see if this could actually be the case. >> So I have two questions in my mind after looking at the results now: >> 1. Why swap is coming into the picture when hinting is enabled? >> 2. Same to what you have raised. >> For the 1st question, I think the answer is: (correct me if I am wrong.) >> Memhog while writing the memory does free memory but the pages it frees >> are of a lower order which doesn't merge until the memhog write >> completes. After which we do get the MAX_ORDER - 1 page from the buddy >> resulting in hinting. >> As all 3 memhog are running parallelly we don't get free memory until >> one of them completes. >> This does explain that when 3 guests each of 6GB on a 15GB host tries to >> run memhog with 6GB parallelly, swap comes into the picture even if >> hinting is enabled. > Are you running them in parallel or sequentially? I was running them parallelly but then I realized to see any benefits, in that case, I should have run less number of guests. 
> I had suggested > running them serially so that the previous one could complete and free > the memory before the next one allocated memory. In that setup you > should see the guests still swapping without hints, but with hints the > guest should free the memory up before the next one starts using it. Yeah, I just realized this. Thanks for the clarification. > If you are running them in parallel then you are going to see things > going to swap because memhog does like what the name implies and it > will use all of the memory you give it. It isn't until it completes > that the memory is freed. > >> This doesn't explain why putting a barrier or avoid hinting reduced the >> swap usage. It seems I possibly had a wrong impression of the delaying >> hinting idea which we discussed. >> As I was observing the value of the swap at the end of the memhog >> execution which is logically incorrect. I will re-run the test and >> observe the highest swap usage during the entire execution of memhog for >> hinting vs non-hinting. > So one option you may look at if you are wanting to run the tests in > parallel would be to limit the number of tests you have running at the > same time. If you have 15G of memory and 6G per guest you should be > able to run 2 sessions at a time without going to swap, however if you > run all 3 then you are likely going to be going to swap even with > hinting. > > - Alex
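A note on the quoted changes above (reporting a start pfn plus a length, then madvising the whole range with MADV_FREE rather than one 4K page at a time with MADV_DONTNEED): the idea can be illustrated with a small self-contained sketch. This is not the actual QEMU/virtio-balloon code from the series; hint_free_range() and the 4M range below are made up for the illustration.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

/* hint_free_range() is a hypothetical name used only for this sketch: it
 * covers the whole reported range (start + length) with one madvise() call
 * instead of one call per 4K page. */
static int hint_free_range(void *start, size_t nr_pages, size_t page_size)
{
	/* MADV_FREE (Linux >= 4.5) lets the kernel reclaim lazily; swapping
	 * in MADV_DONTNEED would discard the range immediately instead. */
	return madvise(start, nr_pages * page_size, MADV_FREE);
}

int main(void)
{
	size_t page_size = (size_t)sysconf(_SC_PAGESIZE);
	size_t nr_pages = 1024;	/* a 4M range with 4K pages */
	void *buf = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}
	memset(buf, 0xaa, nr_pages * page_size);	/* fault the pages in */

	if (hint_free_range(buf, nr_pages, page_size))
		perror("madvise");

	munmap(buf, nr_pages * page_size);
	return EXIT_SUCCESS;
}

The practical difference is that MADV_FREE only marks the range as reclaimable, so the host takes the pages back lazily under memory pressure, whereas MADV_DONTNEED discards them right away; that is why the thread talks about guests having to pull MADV_FREE pages back later.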
On 3/19/19 1:59 PM, Nitesh Narayan Lal wrote:
> [...]
Here are the updated numbers excluding the guest bring-up cost:

Setup for the test:
NUMA node: 1
Memory: 15GB
Swap: 4GB
Guest memory: 6GB
Number of cores: 1
Process: 3 guests are launched and memhog is run serially with 6GB in each.

Results:
Without Hinting (time of execution)
Guest1: 56s
Guest2: 45s
Guest3: 3m41s

With Hinting (time of execution)
Guest1: 46s
Guest2: 45s
Guest3: 49s
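For context on why memory only comes back at the end of each run: memhog allocates the requested amount, touches every page, and exits. The stand-in below is a simplified illustration of that behaviour, not the real memhog source; the size argument and its 6 GiB default are made up for this sketch.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
	/* Size in GiB; defaults to the 6G used in the test above. */
	size_t gib = (argc > 1) ? strtoull(argv[1], NULL, 0) : 6;
	size_t len = gib << 30;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}
	memset(buf, 0x5a, len);	/* touch every page so it is actually backed */
	printf("touched %zu GiB\n", gib);
	return EXIT_SUCCESS;	/* the memory goes back only when we exit */
}

Running one instance at a time mirrors the serial test above: the previous run has exited, and with hinting its memory is already back in the host, before the next guest starts allocating.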
On 3/20/19 9:18 AM, Nitesh Narayan Lal wrote:
> [...]
I performed some experiments to see if the current implementation of hinting breaks THP. I used AnonHugePages to track the THP pages currently in use and memhog as the guest workload.

Setup:
Host size: 30GB (no swap)
Guest size: 15GB
THP size: 2MB
Process: The guest is installed with different kernels to hint at different granularities (MAX_ORDER - 1, MAX_ORDER - 2 and MAX_ORDER - 3). memhog 15G is run multiple times in the same guest while AnonHugePages usage is observed in the host.

Observation:
There is no THP split for order MAX_ORDER - 1 and MAX_ORDER - 2, whereas for hinting granularity MAX_ORDER - 3 THP does split, irrespective of MADV_FREE or MADV_DONTNEED.
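One way to take the AnonHugePages reading used in this experiment is to poll /proc/meminfo on the host between memhog runs. The snippet below is just an illustrative reader, not part of the series.

#include <stdio.h>

int main(void)
{
	char line[256];
	unsigned long kb = 0;
	FILE *fp = fopen("/proc/meminfo", "r");

	if (!fp) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), fp)) {
		/* Host counter for anonymous memory backed by THPs. */
		if (sscanf(line, "AnonHugePages: %lu kB", &kb) == 1)
			break;
	}
	fclose(fp);
	printf("AnonHugePages: %lu kB (%lu 2M THPs)\n", kb, kb / 2048);
	return 0;
}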
On Mon, Mar 25, 2019 at 10:27:46AM -0400, Nitesh Narayan Lal wrote:
> I performed some experiments to see if the current implementation of
> hinting breaks THP. I used AnonHugePages to track the THP pages
> currently in use and memhog as the guest workload.
> [...]
> Observation:
> There is no THP split for order MAX_ORDER - 1 and MAX_ORDER - 2, whereas
> for hinting granularity MAX_ORDER - 3 THP does split, irrespective of
> MADV_FREE or MADV_DONTNEED.

This is on x86 right? THP is 2M there, and with 4K pages the hinted granularities are MAX_ORDER - 1 = 4M, MAX_ORDER - 2 = 2M and MAX_ORDER - 3 = 1M. Only the last one is smaller than a THP, so the split at MAX_ORDER - 3 seems to work out.
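For completeness, the arithmetic behind that reply, assuming 4K base pages and MAX_ORDER = 11 (the x86 default at the time); the snippet is purely illustrative.

#include <stdio.h>

int main(void)
{
	const unsigned int max_order = 11;	/* assumed x86 default */

	for (unsigned int d = 1; d <= 3; d++) {
		unsigned int order = max_order - d;
		unsigned long kib = 4UL << order;	/* 4K base pages */

		printf("MAX_ORDER - %u -> order %u -> %lu KiB\n", d, order, kib);
	}
	/* Prints 4096, 2048 and 1024 KiB: only the MAX_ORDER - 3 granularity
	 * is smaller than a 2M THP, which matches the reported split. */
	return 0;
}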
On 3/25/19 11:37 AM, Michael S. Tsirkin wrote:
> On Mon, Mar 25, 2019 at 10:27:46AM -0400, Nitesh Narayan Lal wrote:
> [...]
> This is on x86 right?
Yes.
> THP is 2M there, and with 4K pages the hinted granularities are
> MAX_ORDER - 1 = 4M, MAX_ORDER - 2 = 2M and MAX_ORDER - 3 = 1M. Only the
> last one is smaller than a THP, so the split at MAX_ORDER - 3 seems to
> work out.