From patchwork Mon May 20 12:55:10 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
X-Patchwork-Id: 10951083
From: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 20 May 2019 12:55:10 +0000
Message-ID: <20190520125454.14805-1-aisaila@bitdefender.com>
X-Mailer: git-send-email 2.17.1
Subject: [Xen-devel] [PATCH v4 1/2] x86/emulate: Move hvmemul_linear_to_phys
Cc: tamas@tklengyel.com, wei.liu2@citrix.com, rcojocaru@bitdefender.com,
 george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
 paul.durrant@citrix.com, jbeulich@suse.com,
 Alexandru Stefan ISAILA <aisaila@bitdefender.com>, roger.pau@citrix.com

This is done so that hvmemul_linear_to_phys() can be called from
hvmemul_send_vm_event(), which is introduced later in this series.
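
The intended caller is not part of this patch; below is a minimal,
hypothetical sketch of how hvmemul_send_vm_event() might use the moved
function (the name comes from the sentence above, but its parameters and
body here are illustrative assumptions, not code from this series):

    static bool hvmemul_send_vm_event(unsigned long gla, uint32_t pfec,
                                      struct hvm_emulate_ctxt *hvmemul_ctxt)
    {
        unsigned long reps = 1;
        paddr_t gpa;

        /*
         * Translate the linear address of the emulated access; a single
         * 1-byte rep suffices to resolve the start of the access.
         */
        if ( hvmemul_linear_to_phys(gla, &gpa, 1, &reps, pfec,
                                    hvmemul_ctxt) != X86EMUL_OKAY )
            return false;

        /* ... populate and send the vm_event using gpa ... */
        return true;
    }

Since hvmemul_linear_to_phys() stays static, any such caller has to live
in emulate.c; moving the definition up avoids adding a forward
declaration or exporting the function via a header.
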
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/emulate.c | 181 ++++++++++++++++++-------------------
 1 file changed, 90 insertions(+), 91 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 8659c89862..254ff6515d 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -530,6 +530,95 @@ static int hvmemul_do_mmio_addr(paddr_t mmio_gpa,
     return hvmemul_do_io_addr(1, mmio_gpa, reps, size, dir, df, ram_gpa);
 }
 
+/*
+ * Convert addr from linear to physical form, valid over the range
+ * [addr, addr + *reps * bytes_per_rep]. *reps is adjusted according to
+ * the valid computed range. It is always >0 when X86EMUL_OKAY is returned.
+ * @pfec indicates the access checks to be performed during page-table walks.
+ */
+static int hvmemul_linear_to_phys(
+    unsigned long addr,
+    paddr_t *paddr,
+    unsigned int bytes_per_rep,
+    unsigned long *reps,
+    uint32_t pfec,
+    struct hvm_emulate_ctxt *hvmemul_ctxt)
+{
+    struct vcpu *curr = current;
+    unsigned long pfn, npfn, done, todo, i, offset = addr & ~PAGE_MASK;
+    int reverse;
+
+    /*
+     * Clip repetitions to a sensible maximum. This avoids extensive looping in
+     * this function while still amortising the cost of I/O trap-and-emulate.
+     */
+    *reps = min_t(unsigned long, *reps, 4096);
+
+    /* With no paging it's easy: linear == physical. */
+    if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PG) )
+    {
+        *paddr = addr;
+        return X86EMUL_OKAY;
+    }
+
+    /* Reverse mode if this is a backwards multi-iteration string operation. */
+    reverse = (hvmemul_ctxt->ctxt.regs->eflags & X86_EFLAGS_DF) && (*reps > 1);
+
+    if ( reverse && ((PAGE_SIZE - offset) < bytes_per_rep) )
+    {
+        /* Do page-straddling first iteration forwards via recursion. */
+        paddr_t _paddr;
+        unsigned long one_rep = 1;
+        int rc = hvmemul_linear_to_phys(
+            addr, &_paddr, bytes_per_rep, &one_rep, pfec, hvmemul_ctxt);
+        if ( rc != X86EMUL_OKAY )
+            return rc;
+        pfn = _paddr >> PAGE_SHIFT;
+    }
+    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
+    {
+        if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
+            return X86EMUL_RETRY;
+        *reps = 0;
+        x86_emul_pagefault(pfec, addr, &hvmemul_ctxt->ctxt);
+        return X86EMUL_EXCEPTION;
+    }
+
+    done = reverse ? bytes_per_rep + offset : PAGE_SIZE - offset;
+    todo = *reps * bytes_per_rep;
+    for ( i = 1; done < todo; i++ )
+    {
+        /* Get the next PFN in the range. */
+        addr += reverse ? -PAGE_SIZE : PAGE_SIZE;
+        npfn = paging_gva_to_gfn(curr, addr, &pfec);
+
+        /* Is it contiguous with the preceding PFNs? If not then we're done. */
+        if ( (npfn == gfn_x(INVALID_GFN)) ||
+             (npfn != (pfn + (reverse ? -i : i))) )
+        {
+            if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
+                return X86EMUL_RETRY;
+            done /= bytes_per_rep;
+            if ( done == 0 )
+            {
+                ASSERT(!reverse);
+                if ( npfn != gfn_x(INVALID_GFN) )
+                    return X86EMUL_UNHANDLEABLE;
+                *reps = 0;
+                x86_emul_pagefault(pfec, addr & PAGE_MASK, &hvmemul_ctxt->ctxt);
+                return X86EMUL_EXCEPTION;
+            }
+            *reps = done;
+            break;
+        }
+
+        done += PAGE_SIZE;
+    }
+
+    *paddr = ((paddr_t)pfn << PAGE_SHIFT) | offset;
+    return X86EMUL_OKAY;
+}
+
 /*
  * Map the frame(s) covering an individual linear access, for writeable
  * access. May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other errors
@@ -692,97 +781,7 @@ static void hvmemul_unmap_linear_addr(
         *mfn++ = _mfn(0);
     }
 #endif
-}
-
-/*
- * Convert addr from linear to physical form, valid over the range
- * [addr, addr + *reps * bytes_per_rep]. *reps is adjusted according to
- * the valid computed range. It is always >0 when X86EMUL_OKAY is returned.
- * @pfec indicates the access checks to be performed during page-table walks.
- */
-static int hvmemul_linear_to_phys(
-    unsigned long addr,
-    paddr_t *paddr,
-    unsigned int bytes_per_rep,
-    unsigned long *reps,
-    uint32_t pfec,
-    struct hvm_emulate_ctxt *hvmemul_ctxt)
-{
-    struct vcpu *curr = current;
-    unsigned long pfn, npfn, done, todo, i, offset = addr & ~PAGE_MASK;
-    int reverse;
-
-    /*
-     * Clip repetitions to a sensible maximum. This avoids extensive looping in
-     * this function while still amortising the cost of I/O trap-and-emulate.
-     */
-    *reps = min_t(unsigned long, *reps, 4096);
-
-    /* With no paging it's easy: linear == physical. */
-    if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PG) )
-    {
-        *paddr = addr;
-        return X86EMUL_OKAY;
-    }
-
-    /* Reverse mode if this is a backwards multi-iteration string operation. */
-    reverse = (hvmemul_ctxt->ctxt.regs->eflags & X86_EFLAGS_DF) && (*reps > 1);
-
-    if ( reverse && ((PAGE_SIZE - offset) < bytes_per_rep) )
-    {
-        /* Do page-straddling first iteration forwards via recursion. */
-        paddr_t _paddr;
-        unsigned long one_rep = 1;
-        int rc = hvmemul_linear_to_phys(
-            addr, &_paddr, bytes_per_rep, &one_rep, pfec, hvmemul_ctxt);
-        if ( rc != X86EMUL_OKAY )
-            return rc;
-        pfn = _paddr >> PAGE_SHIFT;
-    }
-    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
-    {
-        if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
-            return X86EMUL_RETRY;
-        *reps = 0;
-        x86_emul_pagefault(pfec, addr, &hvmemul_ctxt->ctxt);
-        return X86EMUL_EXCEPTION;
-    }
-
-    done = reverse ? bytes_per_rep + offset : PAGE_SIZE - offset;
-    todo = *reps * bytes_per_rep;
-    for ( i = 1; done < todo; i++ )
-    {
-        /* Get the next PFN in the range. */
-        addr += reverse ? -PAGE_SIZE : PAGE_SIZE;
-        npfn = paging_gva_to_gfn(curr, addr, &pfec);
-
-        /* Is it contiguous with the preceding PFNs? If not then we're done. */
-        if ( (npfn == gfn_x(INVALID_GFN)) ||
-             (npfn != (pfn + (reverse ? -i : i))) )
-        {
-            if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
-                return X86EMUL_RETRY;
-            done /= bytes_per_rep;
-            if ( done == 0 )
-            {
-                ASSERT(!reverse);
-                if ( npfn != gfn_x(INVALID_GFN) )
-                    return X86EMUL_UNHANDLEABLE;
-                *reps = 0;
-                x86_emul_pagefault(pfec, addr & PAGE_MASK, &hvmemul_ctxt->ctxt);
-                return X86EMUL_EXCEPTION;
-            }
-            *reps = done;
-            break;
-        }
-
-        done += PAGE_SIZE;
-    }
-
-    *paddr = ((paddr_t)pfn << PAGE_SHIFT) | offset;
-    return X86EMUL_OKAY;
-}
-
+}
 static int hvmemul_virtual_to_linear(
     enum x86_segment seg,