Message ID | 20171113154126.13038-2-george.dunlap@citrix.com (mailing list archive) |
---|---|
State | New, archived |
>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>
>  # Feature Support
>
> +## Memory Management
> +
> +### Memory Ballooning
> +
> +    Status: Supported

Is this a proper feature in the context we're talking about? To me
it's meaningful in guest OS context only. I also wouldn't really
consider it "core", but placement within the series clearly is a minor
aspect.

I'd prefer this to be dropped altogether as a feature, but
Acked-by: Jan Beulich <jbeulich@suse.com>
is independent of that.

> +### Credit2 Scheduler
> +
> +    Status: Supported

Sort of unrelated, but with this having been the case since 4.8 as it
looks, is there a reason it still isn't the default scheduler?

Jan
On 11/21/2017 08:03 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>>
>>  # Feature Support
>>
>> +## Memory Management
>> +
>> +### Memory Ballooning
>> +
>> +    Status: Supported
>
> Is this a proper feature in the context we're talking about? To me
> it's meaningful in guest OS context only. I also wouldn't really
> consider it "core", but placement within the series clearly is a minor
> aspect.
>
> I'd prefer this to be dropped altogether as a feature, but

This doesn't make any sense to me.  Allowing a guest to modify its own
memory requires a *lot* of support, spread throughout the hypervisor;
and there are a huge number of recent security holes that would have
been much more difficult to exploit if guests didn't have the ability
to balloon up or down.

If what you mean is *specifically* the technique of making a "memory
balloon" to trick the guest OS into handing back memory without knowing
it, then it's just a matter of semantics.  We could call this "dynamic
memory control" or something like that if you prefer (although we'd
have to mention ballooning in the description to make sure people can
find it).

> Acked-by: Jan Beulich <jbeulich@suse.com>
> is independent of that.
>
>> +### Credit2 Scheduler
>> +
>> +    Status: Supported
>
> Sort of unrelated, but with this having been the case since 4.8 as it
> looks, is there a reason it still isn't the default scheduler?

Well, first of all, it was missing some features which credit1 had:
namely, soft affinity (required for host NUMA awareness) and caps.
These were checked in during this release cycle; but we also wanted to
switch the default at the beginning of a development cycle, to have the
highest chance of shaking out any weird bugs.

So according to those criteria, we could switch to credit2 being the
default scheduler as soon as the 4.10 development window opens.
At some point recently Dario said there was still some unusual behavior
he wanted to dig into; but I think with him not working for Citrix
anymore, it's doubtful we'll have the resources to take that up; the
best option might be to just pull the lever and see what happens.

 -George
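[For reference, a sketch of what "pulling the lever" looks like from an administrator's point of view, assuming a standard xl toolstack; the pool name, domain name, and pCPU number below are illustrative:

```sh
# Host-wide switch: boot Xen with credit2 as the default scheduler by
# adding to the hypervisor command line (e.g. in grub.cfg):
#     sched=credit2

# Alternatively, try credit2 on a subset of pCPUs first via a cpupool,
# without changing the host default:
cat > credit2-pool.cfg <<EOF
name = "credit2-pool"
sched = "credit2"
EOF
xl cpupool-cpu-remove Pool-0 4        # free pCPU 4 from the default pool
xl cpupool-create credit2-pool.cfg    # create the credit2 pool
xl cpupool-cpu-add credit2-pool 4     # give pCPU 4 to the new pool
xl cpupool-migrate my-guest credit2-pool
```

The cpupool route is the lower-risk way to gather data on credit2 behavior before flipping the default.]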
>>> On 21.11.17 at 11:36, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 08:03 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>>>
>>>  # Feature Support
>>>
>>> +## Memory Management
>>> +
>>> +### Memory Ballooning
>>> +
>>> +    Status: Supported
>>
>> Is this a proper feature in the context we're talking about? To me
>> it's meaningful in guest OS context only. I also wouldn't really
>> consider it "core", but placement within the series clearly is a minor
>> aspect.
>>
>> I'd prefer this to be dropped altogether as a feature, but
>
> This doesn't make any sense to me.  Allowing a guest to modify its own
> memory requires a *lot* of support, spread throughout the hypervisor;
> and there are a huge number of recent security holes that would have
> been much more difficult to exploit if guests didn't have the ability
> to balloon up or down.
>
> If what you mean is *specifically* the technique of making a "memory
> balloon" to trick the guest OS into handing back memory without knowing
> it, then it's just a matter of semantics.  We could call this "dynamic
> memory control" or something like that if you prefer (although we'd
> have to mention ballooning in the description to make sure people can
> find it).

Indeed I'd prefer the alternative naming: Outside of p2m-pod.c there's
no mention of the term "balloon" in any of the hypervisor source files.
Furthermore this "dynamic memory control" can be used for things other
than ballooning, all of which I think is (to be) supported.

Jan
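[For context, what the feature under discussion covers from the toolstack side, whatever name it ends up with. This is a sketch assuming a standard xl toolstack; the domain name and memory sizes are illustrative:

```sh
# In the domain config, start the guest below its ceiling:
#     memory = 1024    # initial allocation, in MiB
#     maxmem = 4096    # maximum the guest may balloon up to

# At runtime, ask the guest's balloon driver to release or reclaim
# memory (the guest OS must run a balloon driver for this to work):
xl mem-set my-guest 2048    # set the memory target to 2048 MiB
xl mem-max my-guest 4096    # adjust the runtime maximum
```

The same hypervisor machinery backs populate-on-demand and other uses, which is part of the argument for the broader "dynamic memory control" name.]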
diff --git a/SUPPORT.md b/SUPPORT.md
index d7f2ae45e4..064a2f43e9 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
 
 # Feature Support
 
+## Memory Management
+
+### Memory Ballooning
+
+    Status: Supported
+
+## Resource Management
+
+### CPU Pools
+
+    Status: Supported
+
+Groups physical cpus into distinct groups called "cpupools",
+with each pool having the capability
+of using different schedulers and scheduling properties.
+
+### Credit Scheduler
+
+    Status: Supported
+
+A weighted proportional fair share virtual CPU scheduler.
+This is the default scheduler.
+
+### Credit2 Scheduler
+
+    Status: Supported
+
+A general purpose scheduler for Xen,
+designed with particular focus on fairness, responsiveness, and scalability
+
+### RTDS based Scheduler
+
+    Status: Experimental
+
+A soft real-time CPU scheduler
+built to provide guaranteed CPU capacity to guest VMs on SMP hosts
+
+### ARINC653 Scheduler
+
+    Status: Supported
+
+A periodically repeating fixed timeslice scheduler.
+Currently only single-vcpu domains are supported.
+
+### Null Scheduler
+
+    Status: Experimental
+
+A very simple, very static scheduling policy
+that always schedules the same vCPU(s) on the same pCPU(s).
+It is designed for maximum determinism and minimum overhead
+on embedded platforms.
+
+### NUMA scheduler affinity
+
+    Status, x86: Supported
+
+Enables NUMA aware scheduling in Xen
+
 # Format and definitions
 
 This file contains prose, and machine-readable fragments.
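[As an illustration of the credit scheduler's "weighted proportional fair share" described in the hunk above, the standard xl knobs can be exercised per domain. A sketch; the domain names and values are illustrative:

```sh
# Under credit, a domain's CPU share is proportional to its weight
# (default 256).  Give one domain twice the share of another:
xl sched-credit -d important-guest -w 512
xl sched-credit -d batch-guest -w 128

# A cap limits a domain in absolute terms, independent of weight;
# cap a domain at 50% of one pCPU:
xl sched-credit -d batch-guest -c 50
```

Credit2's caps, mentioned earlier in the thread as landing this cycle, are driven through the analogous `xl sched-credit2` subcommand.]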
Core memory management and scheduling.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Dario Faggioli <dario.faggioli@citrix.com>
CC: Nathan Studer <nathan.studer@dornerworks.com>
---
 SUPPORT.md | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)