From patchwork Mon May 15 14:05:04 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Punit Agrawal <punit.agrawal@arm.com>
X-Patchwork-Id: 9727085
From: Punit Agrawal <punit.agrawal@arm.com>
To: xen-devel@lists.xen.org
Date: Mon, 15 May 2017 15:05:04 +0100
Message-Id: <20170515140504.6461-4-punit.agrawal@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170515140504.6461-1-punit.agrawal@arm.com>
References: <20170515140504.6461-1-punit.agrawal@arm.com>
Cc: sstabellini@kernel.org, wei.liu2@citrix.com, konrad.wil@oracle.com,
    George.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
    Punit Agrawal <punit.agrawal@arm.com>, tim@xen.org, julien.grall@arm.com,
    jbeulich@suse.com, ian.jackson@eu.citrix.com
Subject: [Xen-devel] [For Xen-4.10 PATCH 3/3] Avoid excess icache flushes in populate_physmap() before domain has been created

populate_physmap() calls alloc_heap_pages() once per requested extent, and
alloc_heap_pages() invalidates the entire icache for each of them. During
domain creation, these icache invalidations can be deferred until all the
extents have been allocated, as there is no risk of executing stale
instructions from the icache.

Introduce a new flag, "MEMF_no_icache_flush", to prevent alloc_heap_pages()
from performing icache maintenance operations. Use the flag in
populate_physmap() while the domain has not yet been unpaused, and perform
the required icache maintenance once at the end of the allocation.

One concern is the lack of synchronisation around testing
"creation_finished", but in practice the window during which it can be out
of sync should be small enough not to matter.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
---
 xen/common/memory.c        | 31 ++++++++++++++++++++++---------
 xen/common/page_alloc.c    |  2 +-
 xen/include/asm-x86/page.h |  4 ++++
 xen/include/xen/mm.h       |  2 ++
 4 files changed, 29 insertions(+), 10 deletions(-)
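Not part of the patch -- a short, standalone sketch for reviewers of the
pattern being introduced: per-extent icache maintenance is suppressed with a
flag while the domain is still being built, and a single invalidation is
issued once all extents have been allocated. The helpers toy_populate(),
toy_alloc_page() and toy_invalidate_icache() are invented for illustration;
only the MEMF_* bit positions are taken from the patch.

    #include <stdbool.h>
    #include <stdio.h>

    #define MEMF_no_tlbflush     (1U << 6)
    #define MEMF_no_icache_flush (1U << 7)

    /* Stand-in for alloc_heap_pages(): flushes per extent unless told not to. */
    static void toy_alloc_page(unsigned int memflags)
    {
        if ( !(memflags & MEMF_no_icache_flush) )
            printf("icache flush for this extent\n");
    }

    /* Stand-in for the arch-provided invalidate_icache(). */
    static void toy_invalidate_icache(void)
    {
        printf("one icache invalidation for the whole batch\n");
    }

    /* Mirrors the populate_physmap() flow after this patch. */
    static void toy_populate(unsigned int nr_extents, bool creation_finished)
    {
        unsigned int memflags = 0, i;

        if ( !creation_finished )
            /* Safe to defer maintenance: the domain has never run yet. */
            memflags |= MEMF_no_tlbflush | MEMF_no_icache_flush;

        for ( i = 0; i < nr_extents; i++ )
            toy_alloc_page(memflags);

        if ( memflags & MEMF_no_icache_flush )
            toy_invalidate_icache();
    }

    int main(void)
    {
        toy_populate(4, false); /* domain still being created: one flush   */
        toy_populate(4, true);  /* domain already running: flush per extent */
        return 0;
    }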
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 52879e7438..34d2dda8b4 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -152,16 +152,26 @@ static void populate_physmap(struct memop_args *a)
                             max_order(curr_d)) )
         return;
 
-    /*
-     * With MEMF_no_tlbflush set, alloc_heap_pages() will ignore
-     * TLB-flushes. After VM creation, this is a security issue (it can
-     * make pages accessible to guest B, when guest A may still have a
-     * cached mapping to them). So we do this only during domain creation,
-     * when the domain itself has not yet been unpaused for the first
-     * time.
-     */
     if ( unlikely(!d->creation_finished) )
+    {
+        /*
+         * With MEMF_no_tlbflush set, alloc_heap_pages() will ignore
+         * TLB-flushes. After VM creation, this is a security issue (it can
+         * make pages accessible to guest B, when guest A may still have a
+         * cached mapping to them). So we do this only during domain creation,
+         * when the domain itself has not yet been unpaused for the first
+         * time.
+         */
         a->memflags |= MEMF_no_tlbflush;
+        /*
+         * With MEMF_no_icache_flush, alloc_heap_pages() will skip
+         * performing icache flushes. We do it only before domain
+         * creation as once the domain is running there is a danger of
+         * executing instructions from stale caches if icache flush is
+         * delayed.
+         */
+        a->memflags |= MEMF_no_icache_flush;
+    }
 
     for ( i = a->nr_done; i < a->nr_extents; i++ )
     {
@@ -211,7 +221,6 @@ static void populate_physmap(struct memop_args *a)
             }
 
             mfn = gpfn;
-            page = mfn_to_page(mfn);
         }
         else
         {
@@ -255,6 +264,10 @@ static void populate_physmap(struct memop_args *a)
  out:
     if ( need_tlbflush )
         filtered_flush_tlb_mask(tlbflush_timestamp);
+
+    if ( a->memflags & MEMF_no_icache_flush )
+        invalidate_icache();
+
     a->nr_done = i;
 }
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index eba78f1a3d..8bcef6a547 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -833,7 +833,7 @@ static struct page_info *alloc_heap_pages(
 
         /* Ensure cache and RAM are consistent for platforms where the
          * guest can control its own visibility of/through the cache. */
-        flush_page_to_ram(page_to_mfn(&pg[i]), true);
+        flush_page_to_ram(page_to_mfn(&pg[i]), !(memflags & MEMF_no_icache_flush));
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 4cadb12646..3a375282f6 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -375,6 +375,10 @@ perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
 
 #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK)
 
+static inline void invalidate_icache(void)
+{
+}
+
 #endif /* __X86_PAGE_H__ */
 
 /*
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 88de3c1fa6..ee50d4cd7b 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -224,6 +224,8 @@ struct npfec {
 #define  MEMF_no_owner    (1U<<_MEMF_no_owner)
 #define _MEMF_no_tlbflush 6
 #define  MEMF_no_tlbflush (1U<<_MEMF_no_tlbflush)
+#define _MEMF_no_icache_flush 7
+#define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
 #define _MEMF_node        8
 #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
 #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
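
For context, the x86 invalidate_icache() stub added above can be empty
because x86 hardware keeps the instruction cache coherent with memory
writes. On arm64 the hook would need to broadcast an invalidate-all to the
point of unification; the Arm-side implementation is not part of this patch,
but a rough, illustrative sketch (assuming arm64 and GCC inline assembly)
could look like:

    static inline void invalidate_icache(void)
    {
        /* Invalidate all icache lines to PoU, Inner Shareable. */
        asm volatile ( "ic ialluis" ::: "memory" );
        /* Ensure completion of the invalidation ... */
        asm volatile ( "dsb ish" ::: "memory" );
        /* ... and synchronise the instruction stream. */
        asm volatile ( "isb" ::: "memory" );
    }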