From patchwork Thu Apr 6 15:53:37 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 9667841
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 6 Apr 2017 23:53:37 +0800
Message-Id: <1491494017-30743-7-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1491494017-30743-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1491494017-30743-1-git-send-email-yu.c.zhang@linux.intel.com>
Cc: George Dunlap, Andrew Cooper, Paul Durrant, zhiyuan.lv@intel.com, Jan Beulich
Subject: [Xen-devel] [PATCH v12 6/6] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps
List-Id: Xen developer discussion

After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
synchronously, by iterating over the p2m table.

The synchronous reset is necessary because we need to guarantee that
the p2m table is clean before another ioreq server is mapped. And
since sweeping the p2m table can be time consuming, it is done with
hypercall continuation.

Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: George Dunlap
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap

changes in v4:
  - Added "Reviewed-by: Paul Durrant".
  - Added "Reviewed-by: Jan Beulich".
  - Added "Reviewed-by: George Dunlap".

changes in v3:
  - According to comments from Paul: use max_nr, instead of last_gfn,
    for p2m_finish_type_change().
  - According to comments from Jan: use gfn_t as the type of first_gfn
    in p2m_finish_type_change().
  - According to comments from Jan: simplify the if condition before
    using p2m_finish_type_change().

changes in v2:
  - According to comments from Jan and Andrew: do not use the
    HVMOP-style hypercall continuation method. Instead, add an opaque
    field in xen_dm_op_map_mem_type_to_ioreq_server to store the gfn.
  - According to comments from Jan: change the routine's comments and
    the parameter names of p2m_finish_type_change().

changes in v1:
  - This patch is split from patch 4 of the last version.
  - According to comments from Jan: update gfn_start when using
    hypercall continuation to reset the p2m type.
  - According to comments from Jan: use min() to compare gfn_end and
    the max mapped pfn in p2m_finish_type_change().
---
 xen/arch/x86/hvm/dm.c     | 41 ++++++++++++++++++++++++++++++++++++++---
 xen/arch/x86/mm/p2m.c     | 29 +++++++++++++++++++++++++++++
 xen/include/asm-x86/p2m.h |  6 ++++++
 3 files changed, 73 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 7e0da81..d72b7bd 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -384,15 +384,50 @@ static int dm_op(domid_t domid,

     case XEN_DMOP_map_mem_type_to_ioreq_server:
     {
-        const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
+        struct xen_dm_op_map_mem_type_to_ioreq_server *data =
             &op.u.map_mem_type_to_ioreq_server;
+        unsigned long first_gfn = data->opaque;
+
+        const_op = false;

         rc = -EOPNOTSUPP;
         if ( !hap_enabled(d) )
             break;

-        rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
-                                              data->type, data->flags);
+        if ( first_gfn == 0 )
+            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
+                                                  data->type, data->flags);
+        else
+            rc = 0;
+
+        /*
+         * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
+         * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
+         */
+        if ( rc == 0 && data->flags == 0 )
+        {
+            struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+            while ( read_atomic(&p2m->ioreq.entry_count) &&
+                    first_gfn <= p2m->max_mapped_pfn )
+            {
+                /* Iterate p2m table for 256 gfns each time. */
+                p2m_finish_type_change(d, _gfn(first_gfn), 256,
+                                       p2m_ioreq_server, p2m_ram_rw);
+
+                first_gfn += 256;
+
+                /* Check for continuation if it's not the last iteration. */
+                if ( first_gfn <= p2m->max_mapped_pfn &&
+                     hypercall_preempt_check() )
+                {
+                    rc = -ERESTART;
+                    data->opaque = first_gfn;
+                    break;
+                }
+            }
+        }
+
         break;
     }

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 4169d18..1d57e5c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1011,6 +1011,35 @@ void p2m_change_type_range(struct domain *d,
     p2m_unlock(p2m);
 }

+/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
+void p2m_finish_type_change(struct domain *d,
+                            gfn_t first_gfn, unsigned long max_nr,
+                            p2m_type_t ot, p2m_type_t nt)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    p2m_type_t t;
+    unsigned long gfn = gfn_x(first_gfn);
+    unsigned long last_gfn = gfn + max_nr - 1;
+
+    ASSERT(ot != nt);
+    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
+
+    p2m_lock(p2m);
+
+    last_gfn = min(last_gfn, p2m->max_mapped_pfn);
+    while ( gfn <= last_gfn )
+    {
+        get_gfn_query_unlocked(d, gfn, &t);
+
+        if ( t == ot )
+            p2m_change_type_one(d, gfn, t, nt);
+
+        gfn++;
+    }
+
+    p2m_unlock(p2m);
+}
+
 /*
  * Returns:
  * 0 for success

diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index e7e390d..0e670af 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -611,6 +611,12 @@ void p2m_change_type_range(struct domain *d,
 int p2m_change_type_one(struct domain *d, unsigned long gfn,
                         p2m_type_t ot, p2m_type_t nt);

+/* Synchronously change the p2m type for a range of gfns */
+void p2m_finish_type_change(struct domain *d,
+                            gfn_t first_gfn,
+                            unsigned long max_nr,
+                            p2m_type_t ot, p2m_type_t nt);
+
 /* Report a change affecting memory types. */
 void p2m_memory_type_changed(struct domain *d);