From patchwork Mon May 20 12:55:10 2019
X-Patchwork-Submitter: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
X-Patchwork-Id: 10951083
From: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 20 May 2019 12:55:10 +0000
Message-ID: <20190520125454.14805-1-aisaila@bitdefender.com>
X-Mailer: git-send-email 2.17.1
Subject: [Xen-devel] [PATCH v4 1/2] x86/emulate: Move hvmemul_linear_to_phys
Cc: tamas@tklengyel.com, wei.liu2@citrix.com, rcojocaru@bitdefender.com,
    george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
    paul.durrant@citrix.com, jbeulich@suse.com,
    Alexandru Stefan ISAILA <aisaila@bitdefender.com>, roger.pau@citrix.com

This is done so that hvmemul_linear_to_phys() can be called from
hvmemul_send_vm_event().
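For illustration, the call pattern the next patch relies on looks like this
(a minimal sketch only, not part of the patch; gla, bytes, pfec and ctxt
stand in for the caller's values):

    /* Sketch: translate one access of `bytes` bytes at linear address
     * `gla`, the way hvmemul_send_vm_event() in patch 2/2 will do it. */
    paddr_t gpa;
    unsigned long reps = 1; /* a single, non-repeated access */
    int rc = hvmemul_linear_to_phys(gla, &gpa, bytes, &reps, pfec, &ctxt);

    if ( rc != X86EMUL_OKAY ) /* fault injected or retry requested */
        return false;
    /* On success, gpa now holds the physical address backing gla. */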
Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/emulate.c | 181 ++++++++++++++++++-------------------
 1 file changed, 90 insertions(+), 91 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 8659c89862..254ff6515d 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -530,6 +530,95 @@ static int hvmemul_do_mmio_addr(paddr_t mmio_gpa,
     return hvmemul_do_io_addr(1, mmio_gpa, reps, size, dir, df, ram_gpa);
 }
 
+/*
+ * Convert addr from linear to physical form, valid over the range
+ * [addr, addr + *reps * bytes_per_rep]. *reps is adjusted according to
+ * the valid computed range. It is always >0 when X86EMUL_OKAY is returned.
+ * @pfec indicates the access checks to be performed during page-table walks.
+ */
+static int hvmemul_linear_to_phys(
+    unsigned long addr,
+    paddr_t *paddr,
+    unsigned int bytes_per_rep,
+    unsigned long *reps,
+    uint32_t pfec,
+    struct hvm_emulate_ctxt *hvmemul_ctxt)
+{
+    struct vcpu *curr = current;
+    unsigned long pfn, npfn, done, todo, i, offset = addr & ~PAGE_MASK;
+    int reverse;
+
+    /*
+     * Clip repetitions to a sensible maximum. This avoids extensive looping in
+     * this function while still amortising the cost of I/O trap-and-emulate.
+     */
+    *reps = min_t(unsigned long, *reps, 4096);
+
+    /* With no paging it's easy: linear == physical. */
+    if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PG) )
+    {
+        *paddr = addr;
+        return X86EMUL_OKAY;
+    }
+
+    /* Reverse mode if this is a backwards multi-iteration string operation. */
+    reverse = (hvmemul_ctxt->ctxt.regs->eflags & X86_EFLAGS_DF) && (*reps > 1);
+
+    if ( reverse && ((PAGE_SIZE - offset) < bytes_per_rep) )
+    {
+        /* Do page-straddling first iteration forwards via recursion. */
+        paddr_t _paddr;
+        unsigned long one_rep = 1;
+        int rc = hvmemul_linear_to_phys(
+            addr, &_paddr, bytes_per_rep, &one_rep, pfec, hvmemul_ctxt);
+        if ( rc != X86EMUL_OKAY )
+            return rc;
+        pfn = _paddr >> PAGE_SHIFT;
+    }
+    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
+    {
+        if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
+            return X86EMUL_RETRY;
+        *reps = 0;
+        x86_emul_pagefault(pfec, addr, &hvmemul_ctxt->ctxt);
+        return X86EMUL_EXCEPTION;
+    }
+
+    done = reverse ? bytes_per_rep + offset : PAGE_SIZE - offset;
+    todo = *reps * bytes_per_rep;
+    for ( i = 1; done < todo; i++ )
+    {
+        /* Get the next PFN in the range. */
+        addr += reverse ? -PAGE_SIZE : PAGE_SIZE;
+        npfn = paging_gva_to_gfn(curr, addr, &pfec);
+
+        /* Is it contiguous with the preceding PFNs? If not then we're done. */
+        if ( (npfn == gfn_x(INVALID_GFN)) ||
+             (npfn != (pfn + (reverse ? -i : i))) )
+        {
+            if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
+                return X86EMUL_RETRY;
+            done /= bytes_per_rep;
+            if ( done == 0 )
+            {
+                ASSERT(!reverse);
+                if ( npfn != gfn_x(INVALID_GFN) )
+                    return X86EMUL_UNHANDLEABLE;
+                *reps = 0;
+                x86_emul_pagefault(pfec, addr & PAGE_MASK, &hvmemul_ctxt->ctxt);
+                return X86EMUL_EXCEPTION;
+            }
+            *reps = done;
+            break;
+        }
+
+        done += PAGE_SIZE;
+    }
+
+    *paddr = ((paddr_t)pfn << PAGE_SHIFT) | offset;
+    return X86EMUL_OKAY;
+}
+
 /*
  * Map the frame(s) covering an individual linear access, for writeable
  * access. May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other errors
@@ -692,97 +781,7 @@ static void hvmemul_unmap_linear_addr(
         *mfn++ = _mfn(0);
     }
 #endif
-}
-
-/*
- * Convert addr from linear to physical form, valid over the range
- * [addr, addr + *reps * bytes_per_rep]. *reps is adjusted according to
- * the valid computed range. It is always >0 when X86EMUL_OKAY is returned.
- * @pfec indicates the access checks to be performed during page-table walks.
- */
-static int hvmemul_linear_to_phys(
-    unsigned long addr,
-    paddr_t *paddr,
-    unsigned int bytes_per_rep,
-    unsigned long *reps,
-    uint32_t pfec,
-    struct hvm_emulate_ctxt *hvmemul_ctxt)
-{
-    struct vcpu *curr = current;
-    unsigned long pfn, npfn, done, todo, i, offset = addr & ~PAGE_MASK;
-    int reverse;
-
-    /*
-     * Clip repetitions to a sensible maximum. This avoids extensive looping in
-     * this function while still amortising the cost of I/O trap-and-emulate.
-     */
-    *reps = min_t(unsigned long, *reps, 4096);
-
-    /* With no paging it's easy: linear == physical. */
-    if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PG) )
-    {
-        *paddr = addr;
-        return X86EMUL_OKAY;
-    }
-
-    /* Reverse mode if this is a backwards multi-iteration string operation. */
-    reverse = (hvmemul_ctxt->ctxt.regs->eflags & X86_EFLAGS_DF) && (*reps > 1);
-
-    if ( reverse && ((PAGE_SIZE - offset) < bytes_per_rep) )
-    {
-        /* Do page-straddling first iteration forwards via recursion. */
-        paddr_t _paddr;
-        unsigned long one_rep = 1;
-        int rc = hvmemul_linear_to_phys(
-            addr, &_paddr, bytes_per_rep, &one_rep, pfec, hvmemul_ctxt);
-        if ( rc != X86EMUL_OKAY )
-            return rc;
-        pfn = _paddr >> PAGE_SHIFT;
-    }
-    else if ( (pfn = paging_gva_to_gfn(curr, addr, &pfec)) == gfn_x(INVALID_GFN) )
-    {
-        if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
-            return X86EMUL_RETRY;
-        *reps = 0;
-        x86_emul_pagefault(pfec, addr, &hvmemul_ctxt->ctxt);
-        return X86EMUL_EXCEPTION;
-    }
-
-    done = reverse ? bytes_per_rep + offset : PAGE_SIZE - offset;
-    todo = *reps * bytes_per_rep;
-    for ( i = 1; done < todo; i++ )
-    {
-        /* Get the next PFN in the range. */
-        addr += reverse ? -PAGE_SIZE : PAGE_SIZE;
-        npfn = paging_gva_to_gfn(curr, addr, &pfec);
-
-        /* Is it contiguous with the preceding PFNs? If not then we're done. */
-        if ( (npfn == gfn_x(INVALID_GFN)) ||
-             (npfn != (pfn + (reverse ? -i : i))) )
-        {
-            if ( pfec & (PFEC_page_paged | PFEC_page_shared) )
-                return X86EMUL_RETRY;
-            done /= bytes_per_rep;
-            if ( done == 0 )
-            {
-                ASSERT(!reverse);
-                if ( npfn != gfn_x(INVALID_GFN) )
-                    return X86EMUL_UNHANDLEABLE;
-                *reps = 0;
-                x86_emul_pagefault(pfec, addr & PAGE_MASK, &hvmemul_ctxt->ctxt);
-                return X86EMUL_EXCEPTION;
-            }
-            *reps = done;
-            break;
-        }
-
-        done += PAGE_SIZE;
-    }
-
-    *paddr = ((paddr_t)pfn << PAGE_SHIFT) | offset;
-    return X86EMUL_OKAY;
-}
-
+}
 
 static int hvmemul_virtual_to_linear(
     enum x86_segment seg,

From patchwork Mon May 20 12:55:17 2019
X-Patchwork-Submitter: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
X-Patchwork-Id: 10951081
From: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 20 May 2019 12:55:17 +0000
Message-ID: <20190520125454.14805-2-aisaila@bitdefender.com>
References: <20190520125454.14805-1-aisaila@bitdefender.com>
In-Reply-To: <20190520125454.14805-1-aisaila@bitdefender.com>
X-Mailer: git-send-email 2.17.1
Subject: [Xen-devel] [PATCH v4 2/2] x86/emulate: Send vm_event from emulate
Cc: tamas@tklengyel.com, wei.liu2@citrix.com, rcojocaru@bitdefender.com,
    george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
    paul.durrant@citrix.com, jbeulich@suse.com,
    Alexandru Stefan ISAILA <aisaila@bitdefender.com>, roger.pau@citrix.com
This patch aims to have mem access vm events sent from the emulator. This
is useful in the case of emulated instructions that cause page walks on
access-protected pages.

We use hvmemul_map_linear_addr() to intercept r/w access and
hvmemul_insn_fetch() to intercept exec access. We first try to send a vm
event; if the event is sent, emulation returns X86EMUL_ACCESS_EXCEPTION.
If the event is not sent, emulation continues as expected.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V3:
	- Calculate gpa in hvmemul_send_vm_event()
	- Move hvmemul_linear_to_phys() call inside hvmemul_send_vm_event()
	- Check only if hvmemul_virtual_to_linear() returns X86EMUL_OKAY
	- Add comment for X86EMUL_ACCESS_EXCEPTION.
---
 xen/arch/x86/hvm/emulate.c             | 89 +++++++++++++++++++++++++-
 xen/arch/x86/hvm/vm_event.c            |  2 +-
 xen/arch/x86/mm/mem_access.c           |  3 +-
 xen/arch/x86/x86_emulate/x86_emulate.h |  2 +
 xen/include/asm-x86/hvm/emulate.h      |  4 +-
 5 files changed, 95 insertions(+), 5 deletions(-)
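Before the diff, a note on the core logic: hvmemul_send_vm_event() derives
the mem_access flags to report from the page's xenmem_access_t rights and
the fault's pfec bits. Distilled into a standalone helper (an illustrative
sketch only, not part of the patch; the function name is hypothetical):

    /* Sketch: which MEM_ACCESS_* flags should a violation report carry,
     * given the page's access rights and the page-fault error code? */
    static uint32_t violation_flags(xenmem_access_t access, uint32_t pfec)
    {
        uint32_t flags = 0;

        switch ( access )
        {
        case XENMEM_access_x:        /* no write permission: writes violate */
        case XENMEM_access_rx:
            if ( pfec & PFEC_write_access )
                flags = MEM_ACCESS_R | MEM_ACCESS_W;
            break;

        case XENMEM_access_w:        /* no exec permission: fetches violate */
        case XENMEM_access_rw:
            if ( pfec & PFEC_insn_fetch )
                flags = MEM_ACCESS_X;
            break;

        case XENMEM_access_r:        /* neither write nor exec permitted */
        case XENMEM_access_n:
            if ( pfec & PFEC_write_access )
                flags |= MEM_ACCESS_R | MEM_ACCESS_W;
            if ( pfec & PFEC_insn_fetch )
                flags |= MEM_ACCESS_X;
            break;

        default:                     /* rx2rw etc.: not handled here */
            break;
        }

        return flags;
    }

If the attempted access does not actually violate the page's rights, the
flags stay empty and no event is sent (the "no violation" early return in
the patch below).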
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 254ff6515d..75403ebc9b 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -26,6 +27,7 @@
 #include
 #include
 #include
+#include
 
 static void hvmtrace_io_assist(const ioreq_t *p)
 {
@@ -619,6 +621,68 @@ static int hvmemul_linear_to_phys(
     return X86EMUL_OKAY;
 }
 
+static bool hvmemul_send_vm_event(unsigned long gla,
+                                  uint32_t pfec, unsigned int bytes,
+                                  struct hvm_emulate_ctxt ctxt)
+{
+    xenmem_access_t access;
+    vm_event_request_t req = {};
+    gfn_t gfn;
+    paddr_t gpa;
+    unsigned long reps = 1;
+    int rc;
+
+    if ( !ctxt.send_event || !pfec )
+        return false;
+
+    rc = hvmemul_linear_to_phys(gla, &gpa, bytes, &reps, pfec, &ctxt);
+
+    if ( rc != X86EMUL_OKAY )
+        return false;
+
+    gfn = gaddr_to_gfn(gpa);
+
+    if ( p2m_get_mem_access(current->domain, gfn, &access,
+                            altp2m_vcpu_idx(current)) != 0 )
+        return false;
+
+    switch ( access ) {
+    case XENMEM_access_x:
+    case XENMEM_access_rx:
+        if ( pfec & PFEC_write_access )
+            req.u.mem_access.flags = MEM_ACCESS_R | MEM_ACCESS_W;
+        break;
+
+    case XENMEM_access_w:
+    case XENMEM_access_rw:
+        if ( pfec & PFEC_insn_fetch )
+            req.u.mem_access.flags = MEM_ACCESS_X;
+        break;
+
+    case XENMEM_access_r:
+    case XENMEM_access_n:
+        if ( pfec & PFEC_write_access )
+            req.u.mem_access.flags |= MEM_ACCESS_R | MEM_ACCESS_W;
+        if ( pfec & PFEC_insn_fetch )
+            req.u.mem_access.flags |= MEM_ACCESS_X;
+        break;
+
+    default:
+        return false;
+    }
+
+    if ( !req.u.mem_access.flags )
+        return false; /* no violation */
+
+    req.reason = VM_EVENT_REASON_MEM_ACCESS;
+    req.u.mem_access.gfn = gfn_x(gfn);
+    req.u.mem_access.flags |= MEM_ACCESS_FAULT_WITH_GLA | MEM_ACCESS_GLA_VALID;
+    req.u.mem_access.gla = gla;
+    req.u.mem_access.offset = gpa & ((1 << PAGE_SHIFT) - 1);
+
+    return monitor_traps(current, true, &req) >= 0;
+}
+
 /*
  * Map the frame(s) covering an individual linear access, for writeable
  * access. May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other errors
@@ -636,6 +700,7 @@ static void *hvmemul_map_linear_addr(
     unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
         (linear >> PAGE_SHIFT) + 1;
     unsigned int i;
+    gfn_t gfn;
 
     /*
      * mfn points to the next free slot.  All used slots have a page reference
@@ -674,7 +739,7 @@ static void *hvmemul_map_linear_addr(
         ASSERT(mfn_x(*mfn) == 0);
 
         res = hvm_translate_get_page(curr, addr, true, pfec,
-                                     &pfinfo, &page, NULL, &p2mt);
+                                     &pfinfo, &page, &gfn, &p2mt);
 
         switch ( res )
         {
@@ -704,6 +769,11 @@ static void *hvmemul_map_linear_addr(
 
         if ( pfec & PFEC_write_access )
         {
+            if ( hvmemul_send_vm_event(addr, pfec, bytes, *hvmemul_ctxt) )
+            {
+                err = ERR_PTR(~X86EMUL_ACCESS_EXCEPTION);
+                goto out;
+            }
             if ( p2m_is_discard_write(p2mt) )
             {
                 err = ERR_PTR(~X86EMUL_OKAY);
@@ -1248,7 +1318,21 @@ int hvmemul_insn_fetch(
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     /* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
     uint8_t insn_off = offset - hvmemul_ctxt->insn_buf_eip;
+    uint32_t pfec = PFEC_page_present | PFEC_insn_fetch;
+    unsigned long addr, reps = 1;
+    int rc = 0;
+
+    rc = hvmemul_virtual_to_linear(
+        seg, offset, bytes, &reps, hvm_access_insn_fetch, hvmemul_ctxt, &addr);
+
+    if ( rc != X86EMUL_OKAY || !bytes )
+        return rc;
+
+    if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
+        pfec |= PFEC_user_mode;
 
+    if ( hvmemul_send_vm_event(addr, pfec, bytes, *hvmemul_ctxt) )
+        return X86EMUL_ACCESS_EXCEPTION;
     /*
      * Fall back if requested bytes are not in the prefetch cache.
      * But always perform the (fake) read when bytes == 0.
@@ -2508,12 +2592,13 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
 }
 
 void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
-    unsigned int errcode)
+    unsigned int errcode, bool send_event)
 {
     struct hvm_emulate_ctxt ctx = {{ 0 }};
     int rc;
 
     hvm_emulate_init_once(&ctx, NULL, guest_cpu_user_regs());
+    ctx.send_event = send_event;
 
     switch ( kind )
     {
diff --git a/xen/arch/x86/hvm/vm_event.c b/xen/arch/x86/hvm/vm_event.c
index 121de23071..6d203e8db5 100644
--- a/xen/arch/x86/hvm/vm_event.c
+++ b/xen/arch/x86/hvm/vm_event.c
@@ -87,7 +87,7 @@ void hvm_vm_event_do_resume(struct vcpu *v)
             kind = EMUL_KIND_SET_CONTEXT_INSN;
 
         hvm_emulate_one_vm_event(kind, TRAP_invalid_op,
-                                 X86_EVENT_NO_EC);
+                                 X86_EVENT_NO_EC, false);
 
         v->arch.vm_event->emulate_flags = 0;
     }
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 0144f92b98..c9972bab8c 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -214,7 +214,8 @@ bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
          d->arch.monitor.inguest_pagefault_disabled &&
          npfec.kind != npfec_kind_with_gla ) /* don't send a mem_event */
     {
-        hvm_emulate_one_vm_event(EMUL_KIND_NORMAL, TRAP_invalid_op, X86_EVENT_NO_EC);
+        hvm_emulate_one_vm_event(EMUL_KIND_NORMAL, TRAP_invalid_op,
+                                 X86_EVENT_NO_EC, true);
 
         return true;
     }
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index 08645762cc..8a20e733fa 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -162,6 +162,8 @@ struct x86_emul_fpu_aux {
 #define X86EMUL_UNRECOGNIZED   X86EMUL_UNIMPLEMENTED
 /* (cmpxchg accessor): CMPXCHG failed. */
 #define X86EMUL_CMPXCHG_FAILED 7
+/* Emulator tried to access a protected page. */
+#define X86EMUL_ACCESS_EXCEPTION 8
 
 /* FPU sub-types which may be requested via ->get_fpu(). */
 enum x86_emulate_fpu_type {
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index b39a1a0331..ed22ed0baf 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -47,6 +47,7 @@ struct hvm_emulate_ctxt {
     uint32_t intr_shadow;
 
     bool_t set_context;
+    bool send_event;
 };
 
 enum emul_kind {
@@ -63,7 +64,8 @@ int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt);
 void hvm_emulate_one_vm_event(enum emul_kind kind,
     unsigned int trapnr,
-    unsigned int errcode);
+    unsigned int errcode,
+    bool send_event);
 /* Must be called once to set up hvmemul state. */
 void hvm_emulate_init_once(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
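To recap the interface change: every caller of hvm_emulate_one_vm_event()
must now pass the send_event flag explicitly. A usage sketch mirroring the
two in-tree call sites updated above (arguments abbreviated):

    /* vm_event resume path: emulate without sending further events. */
    hvm_emulate_one_vm_event(kind, TRAP_invalid_op, X86_EVENT_NO_EC, false);

    /* p2m_mem_access_check() with inguest_pagefault_disabled: allow the
     * emulator to send mem_access events for protected pages. */
    hvm_emulate_one_vm_event(EMUL_KIND_NORMAL, TRAP_invalid_op,
                             X86_EVENT_NO_EC, true);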