From patchwork Fri Aug  4 17:05:46 2017
X-Patchwork-Submitter: Boris Ostrovsky <boris.ostrovsky@oracle.com>
X-Patchwork-Id: 9881747
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Cc: sstabellini@kernel.org, wei.liu2@citrix.com, George.Dunlap@eu.citrix.com,
    andrew.cooper3@citrix.com, ian.jackson@eu.citrix.com, tim@xen.org,
    jbeulich@suse.com, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date: Fri, 4 Aug 2017 13:05:46 -0400
Message-Id: <1501866346-9774-9-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1501866346-9774-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1501866346-9774-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [Xen-devel] [PATCH v6 8/8] mm: Make sure pages are scrubbed

Add a debug Kconfig option that will make the page allocator verify
that pages that were supposed to be scrubbed are, in fact, clean.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/Kconfig.debug       |  7 ++++++
 xen/common/page_alloc.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 69 insertions(+), 1 deletion(-)

diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
index 689f297..195d504 100644
--- a/xen/Kconfig.debug
+++ b/xen/Kconfig.debug
@@ -114,6 +114,13 @@ config DEVICE_TREE_DEBUG
 	  logged in the Xen ring buffer. If unsure, say N here.
 
+config SCRUB_DEBUG
+	bool "Page scrubbing test"
+	default DEBUG
+	---help---
+	  Verify that pages that need to be scrubbed before being allocated to
+	  a guest are indeed scrubbed.
+
 endif # DEBUG || EXPERT
 
 endmenu
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 7cd736c..aac1ff2 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -170,6 +170,10 @@ boolean_param("bootscrub", opt_bootscrub);
 static unsigned long __initdata opt_bootscrub_chunk = MB(128);
 size_param("bootscrub_chunk", opt_bootscrub_chunk);
 
+#ifdef CONFIG_SCRUB_DEBUG
+static bool __read_mostly boot_scrub_done;
+#endif
+
 /*
  * Bit width of the DMA heap -- used to override NUMA-node-first.
  * allocation strategy, which can otherwise exhaust low memory.
@@ -698,6 +702,43 @@ static void page_list_add_scrub(struct page_info *pg, unsigned int node,
         page_list_add(pg, &heap(node, zone, order));
 }
 
+/* SCRUB_PATTERN needs to be a repeating series of bytes. */
+#ifndef NDEBUG
+#define SCRUB_PATTERN        0xc2c2c2c2c2c2c2c2ULL
+#else
+#define SCRUB_PATTERN        0ULL
+#endif
+#define SCRUB_BYTE_PATTERN   (SCRUB_PATTERN & 0xff)
+
+static void poison_one_page(struct page_info *pg)
+{
+#ifdef CONFIG_SCRUB_DEBUG
+    mfn_t mfn = _mfn(page_to_mfn(pg));
+    uint64_t *ptr;
+
+    ptr = map_domain_page(mfn);
+    *ptr = ~SCRUB_PATTERN;
+    unmap_domain_page(ptr);
+#endif
+}
+
+static void check_one_page(struct page_info *pg)
+{
+#ifdef CONFIG_SCRUB_DEBUG
+    mfn_t mfn = _mfn(page_to_mfn(pg));
+    const uint64_t *ptr;
+    unsigned int i;
+
+    if ( !boot_scrub_done )
+        return;
+
+    ptr = map_domain_page(mfn);
+    for ( i = 0; i < PAGE_SIZE / sizeof (*ptr); i++ )
+        ASSERT(ptr[i] == SCRUB_PATTERN);
+    unmap_domain_page(ptr);
+#endif
+}
+
 static void check_and_stop_scrub(struct page_info *head)
 {
     if ( head->u.free.scrub_state == BUDDY_SCRUBBING )
@@ -932,6 +973,9 @@ static struct page_info *alloc_heap_pages(
          * guest can control its own visibility of/through the cache.
          */
         flush_page_to_ram(page_to_mfn(&pg[i]), !(memflags & MEMF_no_icache_flush));
+
+        if ( !(memflags & MEMF_no_scrub) )
+            check_one_page(&pg[i]);
     }
 
     spin_unlock(&heap_lock);
@@ -1306,7 +1350,10 @@ static void free_heap_pages(
             set_gpfn_from_mfn(mfn + i, INVALID_M2P_ENTRY);
 
             if ( need_scrub )
+            {
                 pg[i].count_info |= PGC_need_scrub;
+                poison_one_page(&pg[i]);
+            }
         }
 
     avail[node][zone] += 1 << order;
@@ -1664,7 +1711,12 @@ static void init_heap_pages(
             nr_pages -= n;
         }
 
+#ifndef CONFIG_SCRUB_DEBUG
         free_heap_pages(pg + i, 0, false);
+#else
+        free_heap_pages(pg + i, 0, boot_scrub_done);
+#endif
+
     }
 }
 
@@ -1930,6 +1982,10 @@ void __init scrub_heap_pages(void)
 
     printk("done.\n");
 
+#ifdef CONFIG_SCRUB_DEBUG
+    boot_scrub_done = true;
+#endif
+
     /* Now that the heap is initialized, run checks and set bounds
      * for the low mem virq algorithm. */
     setup_low_mem_virq();
@@ -2203,12 +2259,16 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
 
         spin_unlock_recursive(&d->page_alloc_lock);
 
+#ifndef CONFIG_SCRUB_DEBUG
         /*
          * Normally we expect a domain to clear pages before freeing them,
          * if it cares about the secrecy of their contents. However, after
          * a domain has died we assume responsibility for erasure.
         */
         scrub = !!d->is_dying;
+#else
+        scrub = true;
+#endif
     }
     else
     {
@@ -2300,7 +2360,8 @@ void scrub_one_page(struct page_info *pg)
 
 #ifndef NDEBUG
     /* Avoid callers relying on allocations returning zeroed pages. */
-    unmap_domain_page(memset(__map_domain_page(pg), 0xc2, PAGE_SIZE));
+    unmap_domain_page(memset(__map_domain_page(pg),
+                             SCRUB_BYTE_PATTERN, PAGE_SIZE));
 #else
     /* For a production build, clear_page() is the fastest way to scrub. */
     clear_domain_page(_mfn(page_to_mfn(pg)));
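
The idea the patch wires into the allocator is: pages queued for scrubbing are
poisoned on the free path, the scrubber later overwrites them with
SCRUB_PATTERN, and the allocation path asserts the pattern is really there.
Below is a minimal, self-contained sketch of that poison/scrub/check cycle.
It is not Xen code: the malloc-backed "page", PAGE_SIZE_BYTES and the main()
driver are illustrative assumptions; only the 0xc2 pattern and the function
names mirror the patch.

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE_BYTES 4096
#define SCRUB_PATTERN   0xc2c2c2c2c2c2c2c2ULL

static void poison_one_page(uint64_t *page)
{
    /* One inverted word is enough to catch a page that was never scrubbed. */
    page[0] = ~SCRUB_PATTERN;
}

static void scrub_one_page(uint64_t *page)
{
    /* Debug-style scrub: fill with the pattern byte rather than zeroes. */
    memset(page, SCRUB_PATTERN & 0xff, PAGE_SIZE_BYTES);
}

static void check_one_page(const uint64_t *page)
{
    /* Allocation-time check: every word must carry the scrub pattern. */
    for (size_t i = 0; i < PAGE_SIZE_BYTES / sizeof(*page); i++)
        assert(page[i] == SCRUB_PATTERN);
}

int main(void)
{
    uint64_t *page = malloc(PAGE_SIZE_BYTES);

    if (!page)
        return 1;

    poison_one_page(page);   /* free path: mark the page as dirty        */
    scrub_one_page(page);    /* scrubber: overwrite the whole page       */
    check_one_page(page);    /* alloc path: verify it was really scrubbed */

    free(page);
    return 0;
}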