From patchwork Mon May 9 16:14:44 2016
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 9047531
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Paolo Bonzini <pbonzini@redhat.com>, Martin Cerveny
Cc: "xen-devel@lists.xensource.com", George Dunlap
Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server (Xen4.6.1)
Thread-Topic: [Xen-devel] Overlaped PIO with multiple ioreq_server (Xen4.6.1)
Date: Mon, 9 May 2016 16:14:44 +0000
Message-ID: <0c7ffa680c8b41898771010239e16111@AMSPEX02CL03.citrite.net>
References: <2207476cf32b4e8fad39374a6ebd8a1f@AMSPEX02CL03.citrite.net>
 <573088D1.3080400@redhat.com>
 <95a929eb370a4761b2a3ef504e79f0cb@AMSPEX02CL03.citrite.net>
In-Reply-To: <95a929eb370a4761b2a3ef504e79f0cb@AMSPEX02CL03.citrite.net>
List-Id: Xen developer discussion
> -----Original Message-----
> From: Paul Durrant
> Sent: 09 May 2016 17:02
> To: Paul Durrant; Paolo Bonzini; Martin Cerveny
> Cc: xen-devel@lists.xensource.com; George Dunlap
> Subject: RE: [Xen-devel] Overlaped PIO with multiple ioreq_server
> (Xen4.6.1)
>
> > -----Original Message-----
> > From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> > Paul Durrant
> > Sent: 09 May 2016 14:00
> > To: Paolo Bonzini; Martin Cerveny
> > Cc: xen-devel@lists.xensource.com; George Dunlap
> > Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
> > (Xen4.6.1)
> >
> > > -----Original Message-----
> > > From: Paolo Bonzini [mailto:pbonzini@redhat.com]
> > > Sent: 09 May 2016 13:56
> > > To: Paul Durrant; Martin Cerveny
> > > Cc: George Dunlap; xen-devel@lists.xensource.com
> > > Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
> > > (Xen4.6.1)
> > >
> > > On 28/04/2016 13:25, Paul Durrant wrote:
> > > >> Maybe you are lucky, qemu is registered before your own demu
> > > >> emulator.
> > > >
> > > > I guess I was lucky.
> > >
> > > Yeah, QEMU has been doing that since 2013 (commit 3bb28b7, "memory:
> > > Provide separate handling of unassigned io ports accesses", 2013-09-05).
> > >
> > > >> I used for testing your "demu" 2 years ago, now extending Citrix
> > > >> "vgpu". All was fine up to xen 4.5.2 (with qemu 2.0.2), but the
> > > >> problem began when I switched to 4.6.1 (with qemu 2.2.1); it
> > > >> may be lucky timing in registration.
> > > >
> > > > I think Xen should really be spotting range overlaps like this, but
> > > > the QEMU<->Xen interface will clearly need to be fixed to avoid the
> > > > over-claiming of I/O ports like this.
> > >
> > > If the handling of unassigned I/O ports is sane in Xen (in QEMU they
> > > return all ones and discard writes),
> >
> > Yes, it does exactly that.
> >
> > > it would be okay to make the
> > > background 0-65535 range conditional on !xen_enabled(). See
> > > memory_map_init() in QEMU's exec.c file.
> >
> > Cool. Thanks for the tip. Will have a look at that now.
>
> Looks like creation of the background range is required. (Well, when I simply
> #if 0-ed out creating it, QEMU crashed on invocation.) So, I guess I need to be
> able to spot, from the memory listener callback in Xen, when a background
> range is being added so that it can be ignored. The same actually goes for
> memory as well as I/O, since Xen handles accesses to unimplemented MMIO
> ranges in a similar fashion.
> In fact, this patch seems to do the trick for I/O:
>
>   Paul
>
> > Cheers,
> >
> >   Paul
> >
> > > Paolo
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel

diff --git a/xen-hvm.c b/xen-hvm.c
index 039680a..8ab44f0 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -510,8 +510,12 @@ static void xen_io_add(MemoryListener *listener,
                        MemoryRegionSection *section)
 {
     XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
 
-    memory_region_ref(section->mr);
+    if (mr->ops == &unassigned_io_ops)
+        return;
+
+    memory_region_ref(mr);
 
     xen_map_io_section(xen_xc, xen_domid, state->ioservid, section);
 }
@@ -520,10 +524,14 @@ static void xen_io_del(MemoryListener *listener,
                        MemoryRegionSection *section)
 {
     XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops)
+        return;
 
     xen_unmap_io_section(xen_xc, xen_domid, state->ioservid, section);
 
-    memory_region_unref(section->mr);
+    memory_region_unref(mr);
 }

Paul
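[Editorial note: for readers outside the thread, the sketch below models the two behaviours the patch relies on. It is NOT QEMU source; the struct names, ops table, and `io_add()` function are simplified stand-ins (only the name `unassigned_io_ops` is borrowed from the real patch). It shows (1) the "all ones on read, discard on write" semantics Paolo describes for unassigned I/O ports, and (2) a listener-style filter that skips any region backed by the catch-all ops, mirroring the `mr->ops == &unassigned_io_ops` check.]

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for QEMU's MemoryRegionOps / MemoryRegion. */
typedef struct MemRegionOps {
    uint64_t (*read)(uint64_t addr, unsigned size);
    void (*write)(uint64_t addr, uint64_t val, unsigned size);
} MemRegionOps;

typedef struct MemRegion {
    const char *name;
    const MemRegionOps *ops;
} MemRegion;

/* Reads of unclaimed ports return all ones for the access width. */
static uint64_t unassigned_read(uint64_t addr, unsigned size)
{
    (void)addr;
    return (size >= 8) ? UINT64_MAX : (1ULL << (size * 8)) - 1;
}

/* Writes to unclaimed ports are silently discarded. */
static void unassigned_write(uint64_t addr, uint64_t val, unsigned size)
{
    (void)addr; (void)val; (void)size;
}

static const MemRegionOps unassigned_io_ops = {
    .read  = unassigned_read,
    .write = unassigned_write,
};

/* Listener-style callback: register only regions backed by a real
 * device model; skip the background range, since Xen already gives
 * unclaimed ports the same all-ones semantics itself.
 * Returns 1 if the region would be claimed with the ioreq server. */
static int io_add(const MemRegion *mr)
{
    if (mr->ops == &unassigned_io_ops) {
        return 0;
    }
    return 1;
}
```

The key design point the thread converges on: identity of the `ops` pointer is enough to recognise the background range, so no new QEMU<->Xen interface is needed; the Xen listener simply declines to claim those port ranges with the ioreq server.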