From patchwork Fri May 19 15:50:36 2017
X-Patchwork-Submitter: Boris Ostrovsky
X-Patchwork-Id: 9737513
From: Boris Ostrovsky
To: xen-devel@lists.xen.org
Cc: sstabellini@kernel.org, wei.liu2@citrix.com, George.Dunlap@eu.citrix.com,
    andrew.cooper3@citrix.com, ian.jackson@eu.citrix.com, tim@xen.org,
    jbeulich@suse.com, Boris Ostrovsky
Date: Fri, 19 May 2017 11:50:36 -0400
Message-Id: <1495209040-11101-5-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1495209040-11101-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1495209040-11101-1-git-send-email-boris.ostrovsky@oracle.com>
Subject: [Xen-devel] [PATCH v4 4/8] mm: Scrub memory from idle loop

Instead of scrubbing pages during guest destruction (from
free_heap_pages()), do this opportunistically from the idle loop.

Signed-off-by: Boris Ostrovsky
---
Changes in v4:
* Be careful with tasklets in idle_loop()
* Use per-cpu mapcache override
* Update node_to_scrub() algorithm to select the closest node
  (and add a comment explaining what it does)
* Put the buddy back in the heap directly (as opposed to using
  merge_and_free_buddy(), which is dropped anyway)
* Don't stop scrubbing immediately when a softirq is pending; try to
  scrub at least a few (8) pages.

 xen/arch/arm/domain.c      |  16 ++++---
 xen/arch/x86/domain.c      |   3 +-
 xen/arch/x86/domain_page.c |   8 ++--
 xen/common/page_alloc.c    | 113 +++++++++++++++++++++++++++++++++++++++------
 xen/include/xen/mm.h       |   1 +
 5 files changed, 117 insertions(+), 24 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 76310ed..9931ca2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -46,15 +46,19 @@ void idle_loop(void)
         if ( cpu_is_offline(smp_processor_id()) )
             stop_cpu();
 
-        local_irq_disable();
-        if ( cpu_is_haltable(smp_processor_id()) )
+        do_tasklet();
+
+        if ( cpu_is_haltable(smp_processor_id()) && !scrub_free_pages() )
         {
-            dsb(sy);
-            wfi();
+            local_irq_disable();
+            if ( cpu_is_haltable(smp_processor_id()) )
+            {
+                dsb(sy);
+                wfi();
+            }
+            local_irq_enable();
         }
-        local_irq_enable();
 
-        do_tasklet();
         do_softirq();
         /*
          * We MUST be last (or before dsb, wfi). Otherwise after we get the
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 13cdc50..229711f 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -118,8 +118,9 @@ static void idle_loop(void)
     {
         if ( cpu_is_offline(smp_processor_id()) )
             play_dead();
-        (*pm_idle)();
         do_tasklet();
+        if ( cpu_is_haltable(smp_processor_id()) && !scrub_free_pages() )
+            (*pm_idle)();
         do_softirq();
         /*
          * We MUST be last (or before pm_idle). Otherwise after we get the
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 71baede..cfe7cc1 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -18,12 +18,14 @@
 #include
 #include
 
-static struct vcpu *__read_mostly override;
+static DEFINE_PER_CPU(struct vcpu *, override);
 
 static inline struct vcpu *mapcache_current_vcpu(void)
 {
+    struct vcpu *v, *this_vcpu = this_cpu(override);
+
     /* In the common case we use the mapcache of the running VCPU. */
-    struct vcpu *v = override ?: current;
+    v = this_vcpu ?: current;
 
     /*
      * When current isn't properly set up yet, this is equivalent to
@@ -59,7 +61,7 @@ static inline struct vcpu *mapcache_current_vcpu(void)
 
 void __init mapcache_override_current(struct vcpu *v)
 {
-    override = v;
+    this_cpu(override) = v;
 }
 
 #define mapcache_l2_entry(e) ((e) >> PAGETABLE_ORDER)
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index b7c7426..6e505b1 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1010,15 +1010,79 @@ static int reserve_offlined_page(struct page_info *head)
     return count;
 }
 
-static void scrub_free_pages(unsigned int node)
+static nodemask_t node_scrubbing;
+
+/*
+ * If get_node is true this will return the closest node that needs to be scrubbed,
+ * with the appropriate bit in node_scrubbing set.
+ * If get_node is not set, this will return *a* node that needs to be scrubbed.
+ * The node_scrubbing bitmask will not be updated.
+ * If no node needs scrubbing then NUMA_NO_NODE is returned.
+ */
+static unsigned int node_to_scrub(bool get_node)
 {
-    struct page_info *pg;
-    unsigned int zone;
+    nodeid_t node = cpu_to_node(smp_processor_id()), local_node;
+    nodeid_t closest = NUMA_NO_NODE;
+    u8 dist, shortest = 0xff;
 
-    ASSERT(spin_is_locked(&heap_lock));
+    if ( node == NUMA_NO_NODE )
+        node = 0;
 
-    if ( !node_need_scrub[node] )
-        return;
+    if ( node_need_scrub[node] &&
+         (!get_node || !node_test_and_set(node, node_scrubbing)) )
+        return node;
+
+    /*
+     * See if there are memory-only nodes that need scrubbing and choose
+     * the closest one.
+     */
+    local_node = node;
+    while ( 1 )
+    {
+        do {
+            node = cycle_node(node, node_online_map);
+        } while ( !cpumask_empty(&node_to_cpumask(node)) &&
+                  (node != local_node) );
+
+        if ( node == local_node )
+            break;
+
+        if ( node_need_scrub[node] )
+        {
+            if ( !get_node )
+                return node;
+
+            dist = __node_distance(local_node, node);
+            if ( dist < shortest || closest == NUMA_NO_NODE )
+            {
+                if ( !node_test_and_set(node, node_scrubbing) )
+                {
+                    if ( closest != NUMA_NO_NODE )
+                        node_clear(closest, node_scrubbing);
+                    shortest = dist;
+                    closest = node;
+                }
+            }
+        }
+    }
+
+    return closest;
+}
+
+bool scrub_free_pages(void)
+{
+    struct page_info *pg;
+    unsigned int zone;
+    unsigned int cpu = smp_processor_id();
+    bool preempt = false;
+    nodeid_t node;
+    unsigned int cnt = 0;
+
+    node = node_to_scrub(true);
+    if ( node == NUMA_NO_NODE )
+        return false;
+
+    spin_lock(&heap_lock);
 
     for ( zone = 0; zone < NR_ZONES; zone++ )
     {
@@ -1035,22 +1099,46 @@ static void scrub_free_pages(unsigned int node)
 
                 for ( i = pg->u.free.first_dirty; i < (1U << order); i++)
                 {
+                    cnt++;
                     if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
                     {
                         scrub_one_page(&pg[i]);
                         pg[i].count_info &= ~PGC_need_scrub;
                         node_need_scrub[node]--;
+                        cnt += 100; /* scrubbed pages add heavier weight. */
                     }
-                }
 
-                page_list_del(pg, &heap(node, zone, order));
-                page_list_add_scrub(pg, node, zone, order, INVALID_DIRTY_IDX);
+                    /*
+                     * Scrub a few (8) pages before becoming eligible for
+                     * preemption. But also count non-scrubbing loop iterations
+                     * so that we don't get stuck here with an almost clean
+                     * heap.
+                     */
+                    if ( softirq_pending(cpu) && cnt > 800 )
+                    {
+                        preempt = true;
+                        break;
+                    }
+                }
 
-                if ( node_need_scrub[node] == 0 )
-                    return;
+                if ( i == (1U << order) )
+                {
+                    page_list_del(pg, &heap(node, zone, order));
+                    page_list_add_scrub(pg, node, zone, order, INVALID_DIRTY_IDX);
+                }
+                else
+                    pg->u.free.first_dirty = i + 1;
+
+                if ( preempt || (node_need_scrub[node] == 0) )
+                    goto out;
             }
         } while ( order-- != 0 );
     }
+
+ out:
+    spin_unlock(&heap_lock);
+    node_clear(node, node_scrubbing);
+    return softirq_pending(cpu) || (node_to_scrub(false) != NUMA_NO_NODE);
 }
 
 /* Free 2^@order set of pages. */
@@ -1166,9 +1254,6 @@ static void free_heap_pages(
     if ( tainted )
         reserve_offlined_page(pg);
 
-    if ( need_scrub )
-        scrub_free_pages(node);
-
     spin_unlock(&heap_lock);
 }
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 0d4b7c2..ed90a61 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -138,6 +138,7 @@ void init_xenheap_pages(paddr_t ps, paddr_t pe);
 void xenheap_max_mfn(unsigned long mfn);
 void *alloc_xenheap_pages(unsigned int order, unsigned int memflags);
 void free_xenheap_pages(void *v, unsigned int order);
+bool scrub_free_pages(void);
 #define alloc_xenheap_page() (alloc_xenheap_pages(0,0))
 #define free_xenheap_page(v) (free_xenheap_pages(v,0))
 /* Map machine page range in Xen virtual address space. */
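
[Editor's illustration, not part of the patch.] The weighting used in the
scrub_free_pages() hunk above (cnt++ per loop iteration, cnt += 100 per
scrubbed page, preemption considered only once cnt > 800 and a softirq is
pending) can be hard to follow inside the diff. Below is a minimal,
standalone C sketch of that weighting idea only; it is not Xen code, and
names such as scrub_pass(), page_dirty[] and softirq_pending_sim() are
invented stand-ins for the hypervisor primitives.

    /* Standalone sketch of the preemption-weighting idea in scrub_free_pages(). */
    #include <stdbool.h>
    #include <stdio.h>

    #define NPAGES         4096
    #define PREEMPT_WEIGHT 800

    static bool page_dirty[NPAGES];

    /* Stand-in for softirq_pending(cpu): pretend a softirq arrives midway. */
    static bool softirq_pending_sim(unsigned int iter)
    {
        return iter > 1000;
    }

    static unsigned int scrub_pass(void)
    {
        unsigned int i, cnt = 0, scrubbed = 0;

        for ( i = 0; i < NPAGES; i++ )
        {
            cnt++;                     /* every iteration adds weight 1 */
            if ( page_dirty[i] )
            {
                page_dirty[i] = false; /* "scrub" the page */
                scrubbed++;
                cnt += 100;            /* scrubbed pages add heavier weight */
            }

            /*
             * Preempt only after enough work has accumulated (roughly 8
             * scrubbed pages, or many already-clean iterations) AND a
             * softirq is actually pending.
             */
            if ( softirq_pending_sim(i) && cnt > PREEMPT_WEIGHT )
                break;
        }

        return scrubbed;
    }

    int main(void)
    {
        unsigned int i;

        for ( i = 0; i < NPAGES; i += 7 )   /* mark some pages dirty */
            page_dirty[i] = true;

        printf("scrubbed %u pages before preempting\n", scrub_pass());
        return 0;
    }

The intended effect is that roughly eight scrubbed pages (8 * 100 > 800), or
a long run of already-clean pages, counts as enough work to allow
preemption, so the idle vCPU neither monopolises the CPU nor gets stuck
re-walking an almost clean heap.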
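[Editor's illustration, not part of the patch.] The node_to_scrub() policy
(prefer the local node if it still has dirty pages, otherwise pick the
closest memory-only node that needs scrubbing) can likewise be shown outside
the hypervisor. The sketch below is a toy re-implementation with made-up
node flags and a made-up distance table; it deliberately omits the
node_scrubbing claim/release logic and the cycle_node() walk of the real
function.

    /* Standalone sketch of the closest-node selection idea in node_to_scrub(). */
    #include <stdbool.h>
    #include <stdio.h>

    #define NR_NODES     4
    #define NUMA_NO_NODE 0xff

    static bool node_needs_scrub[NR_NODES] = { false, true, false, true };
    static bool node_has_cpus[NR_NODES]    = { true,  false, true,  false };
    static unsigned char distance[NR_NODES][NR_NODES] = {
        { 10, 20, 30, 40 },
        { 20, 10, 20, 30 },
        { 30, 20, 10, 20 },
        { 40, 30, 20, 10 },
    };

    static unsigned int pick_node_to_scrub(unsigned int local)
    {
        unsigned int node, closest = NUMA_NO_NODE;
        unsigned char shortest = 0xff;

        /* The local node is always preferred if it needs scrubbing. */
        if ( node_needs_scrub[local] )
            return local;

        for ( node = 0; node < NR_NODES; node++ )
        {
            /* Only memory-only nodes are eligible: nodes with CPUs scrub themselves. */
            if ( node == local || node_has_cpus[node] || !node_needs_scrub[node] )
                continue;

            if ( closest == NUMA_NO_NODE || distance[local][node] < shortest )
            {
                shortest = distance[local][node];
                closest = node;
            }
        }

        return closest;
    }

    int main(void)
    {
        /* With the example data, a CPU on node 0 picks node 1 (distance 20). */
        printf("CPU on node 0 would scrub node %u\n", pick_node_to_scrub(0));
        return 0;
    }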