From patchwork Tue Jun 14 14:44:59 2016
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper, Paul Durrant
Date: Tue, 14 Jun 2016 08:44:59 -0600
Message-Id: <5760348B02000078000F4DC9@prv-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH] x86/HVM: rename mmio_gva field to mmio_gla

... to correctly reflect its purpose.
To make things consistent, also rename handle_mmio_with_translation()'s
respective parameter (but don't touch sh_page_fault(), as renaming its
parameter would require quite a few more changes there).

Suggested-by: Andrew Cooper
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
Reviewed-by: Paul Durrant
---
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -684,7 +684,7 @@ static void latch_linear_to_phys(struct
     if ( vio->mmio_access.gla_valid )
         return;
 
-    vio->mmio_gva = gla & PAGE_MASK;
+    vio->mmio_gla = gla & PAGE_MASK;
     vio->mmio_gpfn = PFN_DOWN(gpa);
     vio->mmio_access = (struct npfec){ .gla_valid = 1,
                                        .read_access = 1,
@@ -782,7 +782,7 @@ static int __hvmemul_read(
     if ( ((access_type != hvm_access_insn_fetch
            ? vio->mmio_access.read_access
            : vio->mmio_access.insn_fetch)) &&
-         (vio->mmio_gva == (addr & PAGE_MASK)) )
+         (vio->mmio_gla == (addr & PAGE_MASK)) )
         return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
                                         hvmemul_ctxt, 1);
     if ( (seg != x86_seg_none) &&
@@ -889,7 +889,7 @@ static int hvmemul_write(
         return rc;
 
     if ( vio->mmio_access.write_access &&
-         (vio->mmio_gva == (addr & PAGE_MASK)) )
+         (vio->mmio_gla == (addr & PAGE_MASK)) )
         return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
                                          hvmemul_ctxt, 1);
     if ( (seg != x86_seg_none) &&
@@ -1181,7 +1181,7 @@ static int hvmemul_rep_movs(
     bytes = PAGE_SIZE - (saddr & ~PAGE_MASK);
 
     if ( vio->mmio_access.read_access &&
-         (vio->mmio_gva == (saddr & PAGE_MASK)) &&
+         (vio->mmio_gla == (saddr & PAGE_MASK)) &&
          bytes >= bytes_per_rep )
     {
         sgpa = pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK);
@@ -1200,7 +1200,7 @@ static int hvmemul_rep_movs(
     bytes = PAGE_SIZE - (daddr & ~PAGE_MASK);
 
     if ( vio->mmio_access.write_access &&
-         (vio->mmio_gva == (daddr & PAGE_MASK)) &&
+         (vio->mmio_gla == (daddr & PAGE_MASK)) &&
          bytes >= bytes_per_rep )
     {
         dgpa = pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK);
@@ -1320,7 +1320,7 @@ static int hvmemul_rep_stos(
     bytes = PAGE_SIZE - (addr & ~PAGE_MASK);
 
     if ( vio->mmio_access.write_access &&
-         (vio->mmio_gva == (addr & PAGE_MASK)) &&
+         (vio->mmio_gla == (addr & PAGE_MASK)) &&
          bytes >= bytes_per_rep )
     {
         gpa = pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK);
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -114,7 +114,7 @@ int handle_mmio(void)
     return 1;
 }
 
-int handle_mmio_with_translation(unsigned long gva, unsigned long gpfn,
+int handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                  struct npfec access)
 {
     struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
@@ -122,7 +122,7 @@ int handle_mmio_with_translation(unsigne
     vio->mmio_access = access.gla_valid &&
                        access.kind == npfec_kind_with_gla ?
                        access : (struct npfec){};
-    vio->mmio_gva = gva & PAGE_MASK;
+    vio->mmio_gla = gla & PAGE_MASK;
     vio->mmio_gpfn = gpfn;
 
     return handle_mmio();
 }
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -119,7 +119,7 @@ void relocate_portio_handler(
 void send_timeoffset_req(unsigned long timeoff);
 void send_invalidate_req(void);
 int handle_mmio(void);
-int handle_mmio_with_translation(unsigned long gva, unsigned long gpfn,
+int handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                  struct npfec);
 int handle_pio(uint16_t port, unsigned int size, int dir);
 void hvm_interrupt_post(struct vcpu *v, int vector, int type);
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -60,12 +60,12 @@ struct hvm_vcpu_io {
     /*
      * HVM emulation:
-     *  Virtual address @mmio_gva maps to MMIO physical frame @mmio_gpfn.
+     *  Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn.
      *  The latter is known to be an MMIO frame (not RAM).
      *  This translation is only valid for accesses as per @mmio_access.
      */
     struct npfec mmio_access;
-    unsigned long mmio_gva;
+    unsigned long mmio_gla;
     unsigned long mmio_gpfn;
 
     /*