From patchwork Wed Jun  8 13:10:11 2016
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Paul Durrant
Date: Wed, 08 Jun 2016 07:10:11 -0600
Message-Id: <5758355302000078000F30FE@prv-mh.provo.novell.com>
In-Reply-To: <575833E402000078000F30E7@prv-mh.provo.novell.com>
References: <575833E402000078000F30E7@prv-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH 2/2] x86/HVM: use available linear->phys
 translations in REP MOVS/STOS handling
X-Patchwork-Id: 9164589
List-Id: Xen developer discussion
If we have the translation result available already, we should also use
it here. In my tests with Linux guests this eliminates all calls to
hvmemul_linear_to_phys() out of the two functions being changed.

Signed-off-by: Jan Beulich
Reviewed-by: Paul Durrant

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1156,6 +1156,7 @@ static int hvmemul_rep_movs(
 {
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
+    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
     unsigned long saddr, daddr, bytes;
     paddr_t sgpa, dgpa;
     uint32_t pfec = PFEC_page_present;
@@ -1178,16 +1179,43 @@ static int hvmemul_rep_movs(
     if ( hvmemul_ctxt->seg_reg[x86_seg_ss].attr.fields.dpl == 3 )
         pfec |= PFEC_user_mode;
 
-    rc = hvmemul_linear_to_phys(
-        saddr, &sgpa, bytes_per_rep, reps, pfec, hvmemul_ctxt);
-    if ( rc != X86EMUL_OKAY )
-        return rc;
+    bytes = PAGE_SIZE - (saddr & ~PAGE_MASK);
+    if ( vio->mmio_access.read_access &&
+         (vio->mmio_gva == (saddr & PAGE_MASK)) &&
+         bytes >= bytes_per_rep )
+    {
+        sgpa = pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK);
+        if ( *reps * bytes_per_rep > bytes )
+            *reps = bytes / bytes_per_rep;
+    }
+    else
+    {
+        rc = hvmemul_linear_to_phys(saddr, &sgpa, bytes_per_rep, reps, pfec,
+                                    hvmemul_ctxt);
+        if ( rc != X86EMUL_OKAY )
+            return rc;
 
-    rc = hvmemul_linear_to_phys(
-        daddr, &dgpa, bytes_per_rep, reps,
-        pfec | PFEC_write_access, hvmemul_ctxt);
-    if ( rc != X86EMUL_OKAY )
-        return rc;
+        latch_linear_to_phys(vio, saddr, sgpa, 0);
+    }
+
+    bytes = PAGE_SIZE - (daddr & ~PAGE_MASK);
+    if ( vio->mmio_access.write_access &&
+         (vio->mmio_gva == (daddr & PAGE_MASK)) &&
+         bytes >= bytes_per_rep )
+    {
+        dgpa = pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK);
+        if ( *reps * bytes_per_rep > bytes )
+            *reps = bytes / bytes_per_rep;
+    }
+    else
+    {
+        rc = hvmemul_linear_to_phys(daddr, &dgpa, bytes_per_rep, reps,
+                                    pfec | PFEC_write_access, hvmemul_ctxt);
+        if ( rc != X86EMUL_OKAY )
+            return rc;
+
+        latch_linear_to_phys(vio, daddr, dgpa, 1);
+    }
 
     /* Check for MMIO ops */
     (void) get_gfn_query_unlocked(current->domain, sgpa >> PAGE_SHIFT, &sp2mt);
@@ -1279,25 +1307,40 @@ static int hvmemul_rep_stos(
 {
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
-    unsigned long addr;
+    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+    unsigned long addr, bytes;
     paddr_t gpa;
     p2m_type_t p2mt;
     bool_t df = !!(ctxt->regs->eflags & X86_EFLAGS_DF);
     int rc = hvmemul_virtual_to_linear(seg, offset, bytes_per_rep, reps,
                                        hvm_access_write, hvmemul_ctxt,
                                        &addr);
 
-    if ( rc == X86EMUL_OKAY )
+    if ( rc != X86EMUL_OKAY )
+        return rc;
+
+    bytes = PAGE_SIZE - (addr & ~PAGE_MASK);
+    if ( vio->mmio_access.write_access &&
+         (vio->mmio_gva == (addr & PAGE_MASK)) &&
+         bytes >= bytes_per_rep )
+    {
+        gpa = pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK);
+        if ( *reps * bytes_per_rep > bytes )
+            *reps = bytes / bytes_per_rep;
+    }
+    else
     {
         uint32_t pfec = PFEC_page_present | PFEC_write_access;
 
         if ( hvmemul_ctxt->seg_reg[x86_seg_ss].attr.fields.dpl == 3 )
             pfec |= PFEC_user_mode;
 
-        rc = hvmemul_linear_to_phys(
-            addr, &gpa, bytes_per_rep, reps, pfec, hvmemul_ctxt);
+        rc = hvmemul_linear_to_phys(addr, &gpa, bytes_per_rep, reps, pfec,
+                                    hvmemul_ctxt);
+        if ( rc != X86EMUL_OKAY )
+            return rc;
+
+        latch_linear_to_phys(vio, addr, gpa, 1);
     }
-
-    if ( rc != X86EMUL_OKAY )
-        return rc;
 
     /* Check for MMIO op */
     (void)get_gfn_query_unlocked(current->domain, gpa >> PAGE_SHIFT, &p2mt);