From patchwork Wed Jun 8 13:09:43 2016
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 9164579
swCOhLv/81nBgkLC7hIfJ1pCjHdTuLTrgawKZwC9hKdV0FsDqBNghJ/dwhDdNpJfJ+3hWUCI8 8shMwsJBkIW0vi4a9bULa2xLKFr5lBypkFpCWW/+OACDtLPPwygRlVCYjtJ3Fv0mX2BYwcqxj Vi1OLylKLdE31kooy0zNKchMzc3QNDUz1clOLixPTU3MSk4r1kvNzNzECA5UBCHYwful3PsQo ycGkJMqr6B4eLsSXlJ9SmZFYnBFfVJqTWnyIUYaDQ0mC94F4RLiQYFFqempFWmYOMGZg0hIcP EoivE9B0rzFBYm5xZnpEKlTjIpS4rwXQBICIImM0jy4NlicXmKUlRLmZQQ6RIinILUoN7MEVf 4VozgHo5Iw7z+QKTyZeSVw018BLWYCWrz8SDjI4pJEhJRUA2O2x958CZuwVc8n3Mp7dszzzlG uN3qrDobN4uH6XWPfdv7Y4ehHUqL/a+a8si1YkcurvrAlstDev3RH7uXID9O2mp/vvD4797HY HmfV8vM7XHKXZeV8zq1JkAws4Vr52//f3EmJVfpvDogXPZzswntfc3PCE8GYMxZ9svMu9Ih6v 2JYHf5xTakSS3FGoqEWc1FxIgDhEIWSzgIAAA== X-Env-Sender: JBeulich@suse.com X-Msg-Ref: server-14.tower-206.messagelabs.com!1465391386!7449884!1 X-Originating-IP: [137.65.248.74] X-SpamReason: No, hits=0.0 required=7.0 tests= X-StarScan-Received: X-StarScan-Version: 8.46; banners=-,-,- X-VirusChecked: Checked Received: (qmail 17561 invoked from network); 8 Jun 2016 13:09:48 -0000 Received: from prv-mh.provo.novell.com (HELO prv-mh.provo.novell.com) (137.65.248.74) by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-GCM-SHA384 encrypted SMTP; 8 Jun 2016 13:09:48 -0000 Received: from INET-PRV-MTA by prv-mh.provo.novell.com with Novell_GroupWise; Wed, 08 Jun 2016 07:09:45 -0600 Message-Id: <5758353702000078000F30FA@prv-mh.provo.novell.com> X-Mailer: Novell GroupWise Internet Agent 14.2.0 Date: Wed, 08 Jun 2016 07:09:43 -0600 From: "Jan Beulich" To: "xen-devel" References: <575833E402000078000F30E7@prv-mh.provo.novell.com> In-Reply-To: <575833E402000078000F30E7@prv-mh.provo.novell.com> Mime-Version: 1.0 Cc: Paul Durrant Subject: [Xen-devel] [PATCH 1/2] x86/HVM: latch linear->phys translation results X-BeenThere: xen-devel@lists.xen.org X-Mailman-Version: 2.1.18 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xen.org Sender: "Xen-devel" X-Virus-Scanned: ClamAV using ClamSMTP ... 
... to avoid re-doing the same translation later again (in a retry, for example). This doesn't help very often according to my testing, but it's pretty cheap to have, and will be of further use subsequently.

Signed-off-by: Jan Beulich
Reviewed-by: Paul Durrant

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_fi
     return cache;
 }
 
+static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
+                                 unsigned long gpa, bool_t write)
+{
+    if ( vio->mmio_access.gla_valid )
+        return;
+
+    vio->mmio_gva = gla & PAGE_MASK;
+    vio->mmio_gpfn = PFN_DOWN(gpa);
+    vio->mmio_access = (struct npfec){ .gla_valid = 1,
+                                       .read_access = 1,
+                                       .write_access = write };
+}
+
 static int hvmemul_linear_mmio_access(
     unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
@@ -703,6 +716,8 @@ static int hvmemul_linear_mmio_access(
                                        hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
             return rc;
+
+        latch_linear_to_phys(vio, gla, gpa, dir == IOREQ_WRITE);
     }
 
     for ( ;; )