From patchwork Tue Oct 17 13:24:27 2017
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 10012021
From: Paul Durrant
Date: Tue, 17 Oct 2017 14:24:27 +0100
Message-ID: <20171017132432.24093-7-paul.durrant@citrix.com>
In-Reply-To: <20171017132432.24093-1-paul.durrant@citrix.com>
References: <20171017132432.24093-1-paul.durrant@citrix.com>
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
    Andrew Cooper, Ian Jackson, Tim Deegan, Paul Durrant, Jan Beulich
Subject: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new mappable resource type...
... XENMEM_resource_ioreq_server

This patch adds support for a new resource type that can be mapped using
the XENMEM_acquire_resource memory op.

If an emulator makes use of this resource type then, instead of mapping
gfns, the IOREQ server will allocate pages from the heap. These pages
will never be present in the P2M of the guest at any point and so are
not vulnerable to any direct attack by the guest. They are only ever
accessible by Xen and any domain that has mapping privilege over the
guest (which may or may not be limited to the domain running the
emulator).

NOTE: Use of the new resource type is not compatible with use of
      XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag
      is set.

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: George Dunlap
Cc: Wei Liu
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Ian Jackson
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan

v12:
 - Addressed more comments from Jan.
 - Dropped George's A-b and Wei's R-b because of material change.

v11:
 - Addressed more comments from Jan.

v10:
 - Addressed comments from Jan.

v8:
 - Re-base on new boilerplate.
 - Adjust function signature of hvm_get_ioreq_server_frame(), and test
   whether the bufioreq page is present.

v5:
 - Use get_ioreq_server() function rather than indexing array directly.
 - Add more explanation into comments to state that mapping guest frames
   and allocation of pages for ioreq servers are not simultaneously
   permitted.
 - Add a comment into asm/ioreq.h stating the meaning of the index value
   passed to hvm_get_ioreq_server_frame().
---
 xen/arch/x86/hvm/ioreq.c        | 156 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm.c               |  22 ++++++
 xen/common/memory.c             |   5 ++
 xen/include/asm-x86/hvm/ioreq.h |   2 +
 xen/include/asm-x86/mm.h        |   5 ++
 xen/include/public/hvm/dm_op.h  |   4 ++
 xen/include/public/memory.h     |   9 +++
 7 files changed, 203 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f654e7796c..2c611fbffa 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -259,6 +259,19 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( iorp->page )
+    {
+        /*
+         * If a page has already been allocated (which will happen on
+         * demand if hvm_get_ioreq_server_frame() is called), then
+         * mapping a guest frame is not permitted.
+         */
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
     if ( d->is_dying )
         return -EINVAL;
 
@@ -281,6 +294,70 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *currd = current->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a guest frame has already been mapped (which may happen
+         * on demand if hvm_get_ioreq_server_info() is called), then
+         * allocating a page is not permitted.
+         */
+        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    /*
+     * Allocated IOREQ server pages are assigned to the emulating
+     * domain, not the target domain. This is because the emulator is
+     * likely to be destroyed after the target domain has been torn
+     * down, and we must use MEMF_no_refcount otherwise page allocation
+     * could fail if the emulating domain has already reached its
+     * maximum allocation.
+     */
+    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
+    if ( !iorp->page )
+        return -ENOMEM;
+
+    if ( !get_page_type(iorp->page, PGT_writable_page) )
+    {
+        ASSERT_UNREACHABLE();
+        put_page(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    iorp->va = __map_domain_page_global(iorp->page);
+    if ( !iorp->va )
+    {
+        put_page_and_type(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    clear_page(iorp->va);
+    return 0;
+}
+
+static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( !iorp->page )
+        return;
+
+    unmap_domain_page_global(iorp->va);
+    iorp->va = NULL;
+
+    put_page_and_type(iorp->page);
+    iorp->page = NULL;
+}
+
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
     const struct hvm_ioreq_server *s;
@@ -484,6 +561,27 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
     hvm_unmap_ioreq_gfn(s, false);
 }
 
+static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+{
+    int rc;
+
+    rc = hvm_alloc_ioreq_mfn(s, false);
+
+    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
+        rc = hvm_alloc_ioreq_mfn(s, true);
+
+    if ( rc )
+        hvm_free_ioreq_mfn(s, false);
+
+    return rc;
+}
+
+static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+{
+    hvm_free_ioreq_mfn(s, true);
+    hvm_free_ioreq_mfn(s, false);
+}
+
 static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
 {
     unsigned int i;
@@ -612,7 +710,18 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 
  fail_add:
     hvm_ioreq_server_remove_all_vcpus(s);
+
+    /*
+     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+     *       hvm_ioreq_server_free_pages() in that order.
+     *       This is because the former will do nothing if the pages
+     *       are not mapped, leaving the page to be freed by the latter.
+     *       However if the pages are mapped then the former will set
+     *       the page_info pointer to NULL, meaning the latter will do
+     *       nothing.
+     */
     hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
 
     return rc;
 }
@@ -622,6 +731,7 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
     hvm_ioreq_server_free_rangesets(s);
 }
 
@@ -777,6 +887,52 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     return rc;
 }
 
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+    if ( id == DEFAULT_IOSERVID )
+        return -EOPNOTSUPP;
+
+    s = get_ioreq_server(d, id);
+
+    ASSERT(!IS_DEFAULT(s));
+
+    rc = hvm_ioreq_server_alloc_pages(s);
+    if ( rc )
+        goto out;
+
+    switch ( idx )
+    {
+    case XENMEM_resource_ioreq_server_frame_bufioreq:
+        rc = -ENOENT;
+        if ( !HANDLE_BUFIOREQ(s) )
+            goto out;
+
+        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
+        rc = 0;
+        break;
+
+    case XENMEM_resource_ioreq_server_frame_ioreq(0):
+        *mfn = _mfn(page_to_mfn(s->ioreq.page));
+        rc = 0;
+        break;
+
+    default:
+        rc = -EINVAL;
+        break;
+    }
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+    return rc;
+}
+
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d9df5ca69f..1d15ae2a15 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -122,6 +122,7 @@
 #include
 #include
 #include
+#include <asm/hvm/ioreq.h>
 
 #include
 #include
@@ -3866,6 +3867,27 @@ int xenmem_add_to_physmap_one(
     return rc;
 }
 
+int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
+                                unsigned long frame,
+                                unsigned int nr_frames,
+                                unsigned long mfn_list[])
+{
+    unsigned int i;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        mfn_t mfn;
+        int rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+
+        if ( rc )
+            return rc;
+
+        mfn_list[i] = mfn_x(mfn);
+    }
+
+    return 0;
+}
+
 long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index cdd2e030cf..b27a71c4f1 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1011,6 +1011,11 @@ static int acquire_resource(
 
     switch ( xmar.type )
     {
+    case XENMEM_resource_ioreq_server:
+        rc = xenmem_acquire_ioreq_server(d, xmar.id, xmar.frame,
+                                         xmar.nr_frames, mfn_list);
+        break;
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 1829fcf43e..9e37c97a37 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -31,6 +31,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
                               unsigned long *bufioreq_gfn,
                               evtchn_port_t *bufioreq_port);
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn);
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index f2e0f498c4..44aac9d225 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -615,4 +615,9 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }
 
+int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
+                                unsigned long frame,
+                                unsigned int nr_frames,
+                                unsigned long mfn_list[]);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 9677bd74e7..59b6006910 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -90,6 +90,10 @@ struct xen_dm_op_create_ioreq_server {
  * the frame numbers passed back in <ioreq_gfn> and <bufioreq_gfn>
  * respectively. (If the IOREQ Server is not handling buffered emulation
  * only <ioreq_gfn> will be valid).
+ *
+ * NOTE: To access the synchronous ioreq structures and buffered ioreq
+ *       ring, it is preferable to use the XENMEM_acquire_resource memory
+ *       op specifying resource type XENMEM_resource_ioreq_server.
  */
 #define XEN_DMOP_get_ioreq_server_info 2
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 18118ea5c6..9596ebf2c7 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -609,9 +609,14 @@ struct xen_mem_acquire_resource {
     domid_t domid;
     /* IN - the type of resource */
     uint16_t type;
+
+#define XENMEM_resource_ioreq_server 0
+
     /*
      * IN - a type-specific resource identifier, which must be zero
      * unless stated otherwise.
+     *
+     * type == XENMEM_resource_ioreq_server -> id == ioreq server id
      */
     uint32_t id;
     /* IN/OUT - As an IN parameter number of frames of the resource
@@ -625,6 +630,10 @@ struct xen_mem_acquire_resource {
      * is ignored if nr_frames is 0.
      */
     uint64_aligned_t frame;
+
+#define XENMEM_resource_ioreq_server_frame_bufioreq 0
+#define XENMEM_resource_ioreq_server_frame_ioreq(n) (1 + (n))
+
     /* IN/OUT - If the tools domain is PV then, upon return, frame_list
      *          will be populated with the MFNs of the resource.
      *          If the tools domain is HVM then it is expected that, on