From patchwork Wed May 19 20:22:35 2021
From: Alexander Graf <agraf@csgraf.de>
To: QEMU Developers <qemu-devel@nongnu.org>
Subject: [PATCH v8 01/19] hvf: Move assert_hvf_ok() into common directory
Date: Wed, 19 May 2021 22:22:35 +0200
Message-Id: <20210519202253.76782-2-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64
systems. With Apple Silicon shipping now, it extends its reach to
aarch64. To prepare for support for multiple architectures, let's start
moving common code out into its own accel directory.
This patch moves assert_hvf_ok() and introduces generic build
infrastructure.

Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez
---
 MAINTAINERS              |  8 +++++++
 accel/hvf/hvf-all.c      | 47 ++++++++++++++++++++++++++++++++++++++++
 accel/hvf/meson.build    |  6 +++++
 accel/meson.build        |  1 +
 include/sysemu/hvf_int.h | 18 +++++++++++++++
 target/i386/hvf/hvf.c    | 33 +---------------------------
 6 files changed, 81 insertions(+), 32 deletions(-)
 create mode 100644 accel/hvf/hvf-all.c
 create mode 100644 accel/hvf/meson.build
 create mode 100644 include/sysemu/hvf_int.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 89741cfc19..262e96714b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -434,7 +434,15 @@ M: Roman Bolshakov
 W: https://wiki.qemu.org/Features/HVF
 S: Maintained
 F: target/i386/hvf/
+
+HVF
+M: Cameron Esfahani
+M: Roman Bolshakov
+W: https://wiki.qemu.org/Features/HVF
+S: Maintained
+F: accel/hvf/
 F: include/sysemu/hvf.h
+F: include/sysemu/hvf_int.h
 
 WHPX CPUs
 M: Sunil Muthuswamy
diff --git a/accel/hvf/hvf-all.c b/accel/hvf/hvf-all.c
new file mode 100644
index 0000000000..f185b0830a
--- /dev/null
+++ b/accel/hvf/hvf-all.c
@@ -0,0 +1,47 @@
+/*
+ * QEMU Hypervisor.framework support
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ * Contributions after 2012-01-13 are licensed under the terms of the
+ * GNU GPL, version 2 or (at your option) any later version.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "qemu/error-report.h"
+#include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
+
+void assert_hvf_ok(hv_return_t ret)
+{
+    if (ret == HV_SUCCESS) {
+        return;
+    }
+
+    switch (ret) {
+    case HV_ERROR:
+        error_report("Error: HV_ERROR");
+        break;
+    case HV_BUSY:
+        error_report("Error: HV_BUSY");
+        break;
+    case HV_BAD_ARGUMENT:
+        error_report("Error: HV_BAD_ARGUMENT");
+        break;
+    case HV_NO_RESOURCES:
+        error_report("Error: HV_NO_RESOURCES");
+        break;
+    case HV_NO_DEVICE:
+        error_report("Error: HV_NO_DEVICE");
+        break;
+    case HV_UNSUPPORTED:
+        error_report("Error: HV_UNSUPPORTED");
+        break;
+    default:
+        error_report("Unknown Error");
+    }
+
+    abort();
+}
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
new file mode 100644
index 0000000000..227b11cd71
--- /dev/null
+++ b/accel/hvf/meson.build
@@ -0,0 +1,6 @@
+hvf_ss = ss.source_set()
+hvf_ss.add(files(
+  'hvf-all.c',
+))
+
+specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
diff --git a/accel/meson.build b/accel/meson.build
index b44ba30c86..dfd808d2c8 100644
--- a/accel/meson.build
+++ b/accel/meson.build
@@ -2,6 +2,7 @@ specific_ss.add(files('accel-common.c'))
 softmmu_ss.add(files('accel-softmmu.c'))
 user_ss.add(files('accel-user.c'))
 
+subdir('hvf')
 subdir('qtest')
 subdir('kvm')
 subdir('tcg')
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
new file mode 100644
index 0000000000..3deb4cfacc
--- /dev/null
+++ b/include/sysemu/hvf_int.h
@@ -0,0 +1,18 @@
+/*
+ * QEMU Hypervisor.framework (HVF) support
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+/* header to be included in HVF-specific code */
+
+#ifndef HVF_INT_H
+#define HVF_INT_H
+
+#include <Hypervisor/hv.h>
+
+void assert_hvf_ok(hv_return_t ret);
+
+#endif
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index f044181d06..32f42f1592 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -51,6 +51,7 @@
 #include "qemu/error-report.h"
 
 #include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
 #include "sysemu/runstate.h"
 #include "hvf-i386.h"
 #include "vmcs.h"
@@ -76,38 +77,6 @@
 
 HVFState *hvf_state;
 
-static void assert_hvf_ok(hv_return_t ret)
-{
-    if (ret == HV_SUCCESS) {
-        return;
-    }
-
-    switch (ret) {
-    case HV_ERROR:
-        error_report("Error: HV_ERROR");
-        break;
-    case HV_BUSY:
-        error_report("Error: HV_BUSY");
-        break;
-    case HV_BAD_ARGUMENT:
-        error_report("Error: HV_BAD_ARGUMENT");
-        break;
-    case HV_NO_RESOURCES:
-        error_report("Error: HV_NO_RESOURCES");
-        break;
-    case HV_NO_DEVICE:
-        error_report("Error: HV_NO_DEVICE");
-        break;
-    case HV_UNSUPPORTED:
-        error_report("Error: HV_UNSUPPORTED");
-        break;
-    default:
-        error_report("Unknown Error");
-    }
-
-    abort();
-}
-
 /* Memory slots */
 hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size)
 {
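[Every Hypervisor.framework call returns an hv_return_t, and the series
wraps each one in assert_hvf_ok() so a failure aborts with a decoded
error instead of being silently ignored. A minimal standalone sketch of
that calling pattern; the example function is hypothetical, while the
hv_vm_create() call mirrors the one made in hvf_accel_init() later in
this series:

#include <Hypervisor/hv.h>      /* macOS Hypervisor.framework */
#include "sysemu/hvf_int.h"     /* declares assert_hvf_ok() */

static void example_vm_setup(void)   /* illustrative helper, not QEMU code */
{
    /* Every HVF call hands back an hv_return_t status code. */
    hv_return_t ret = hv_vm_create(HV_VM_DEFAULT);

    /* Aborts with a decoded error (HV_BUSY, HV_NO_DEVICE, ...) on failure. */
    assert_hvf_ok(ret);
}
]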
From patchwork Wed May 19 20:22:36 2021
From: Alexander Graf <agraf@csgraf.de>
To: QEMU Developers <qemu-devel@nongnu.org>
Subject: [PATCH v8 02/19] hvf: Move vcpu thread functions into common directory
Date: Wed, 19 May 2021 22:22:36 +0200
Message-Id: <20210519202253.76782-3-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64
systems. With Apple Silicon shipping now, it extends its reach to
aarch64. To prepare for support for multiple architectures, let's start
moving common code out into its own accel directory.

This patch moves the vCPU thread loop over.

Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez
---
 {target/i386 => accel}/hvf/hvf-accel-ops.c | 0
 {target/i386 => accel}/hvf/hvf-accel-ops.h | 0
 accel/hvf/meson.build                      | 1 +
 target/i386/hvf/meson.build                | 1 -
 target/i386/hvf/x86hvf.c                   | 2 +-
 5 files changed, 2 insertions(+), 2 deletions(-)
 rename {target/i386 => accel}/hvf/hvf-accel-ops.c (100%)
 rename {target/i386 => accel}/hvf/hvf-accel-ops.h (100%)

diff --git a/target/i386/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
similarity index 100%
rename from target/i386/hvf/hvf-accel-ops.c
rename to accel/hvf/hvf-accel-ops.c
diff --git a/target/i386/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
similarity index 100%
rename from target/i386/hvf/hvf-accel-ops.h
rename to accel/hvf/hvf-accel-ops.h
diff --git a/accel/hvf/meson.build b/accel/hvf/meson.build
index 227b11cd71..fc52cb7843 100644
--- a/accel/hvf/meson.build
+++ b/accel/hvf/meson.build
@@ -1,6 +1,7 @@
 hvf_ss = ss.source_set()
 hvf_ss.add(files(
   'hvf-all.c',
+  'hvf-accel-ops.c',
 ))
 
 specific_ss.add_all(when: 'CONFIG_HVF', if_true: hvf_ss)
diff --git a/target/i386/hvf/meson.build b/target/i386/hvf/meson.build
index d253d5fd10..f6d4c394d3 100644
--- a/target/i386/hvf/meson.build
+++ b/target/i386/hvf/meson.build
@@ -1,6 +1,5 @@
 i386_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
   'hvf.c',
-  'hvf-accel-ops.c',
   'x86.c',
   'x86_cpuid.c',
   'x86_decode.c',
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 0d7533742e..2b99f3eaa2 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -32,7 +32,7 @@
 #include <Hypervisor/hv.h>
 #include <Hypervisor/hv_vmx.h>
 
-#include "hvf-accel-ops.h"
+#include "accel/hvf/hvf-accel-ops.h"
 
 void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg,
                      SegmentCache *qseg, bool is_tr)
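[The moved hvf-accel-ops.c carries the vCPU thread loop. As a rough
illustration of what such a loop is responsible for, here is a
schematic, self-contained sketch in plain C. This is not QEMU's actual
implementation; every name in it is illustrative:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for per-vCPU state. */
typedef struct {
    int id;
    int exits_handled;
} VCpu;

/* Pretend to run the guest until the next VM exit (stub). */
static int vcpu_enter_guest(VCpu *cpu)
{
    return cpu->exits_handled < 3 ? 0 /* "IO exit" */ : 1 /* "shutdown" */;
}

/* Service one exit; return false once the guest asks to stop. */
static bool vcpu_handle_exit(VCpu *cpu, int exit_reason)
{
    cpu->exits_handled++;
    return exit_reason == 0;
}

/* Schematic vCPU thread body: enter the guest, service the exit, repeat. */
static void vcpu_thread_loop(VCpu *cpu)
{
    while (vcpu_handle_exit(cpu, vcpu_enter_guest(cpu))) {
        /* loop until the exit handler requests shutdown */
    }
    printf("vcpu %d: handled %d exits\n", cpu->id, cpu->exits_handled);
}

int main(void)
{
    VCpu cpu = { .id = 0 };
    vcpu_thread_loop(&cpu);
    return 0;
}
]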
From patchwork Wed May 19 20:22:37 2021
From: Alexander Graf <agraf@csgraf.de>
To: QEMU Developers <qemu-devel@nongnu.org>
Subject: [PATCH v8 03/19] hvf: Move cpu functions into common directory
Date: Wed, 19 May 2021 22:22:37 +0200
Message-Id: <20210519202253.76782-4-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64
systems. With Apple Silicon shipping now, it extends its reach to
aarch64. To prepare for support for multiple architectures, let's start
moving common code out into its own accel directory.

This patch moves CPU and memory operations over. While at it, make sure
the code is consumable on non-i386 systems.
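[The slot lookup this patch moves, hvf_find_overlap_slot() in the diff
below, relies on the standard half-open interval intersection test: two
ranges [start, start+size) overlap exactly when each begins before the
other ends. A small standalone sketch of that predicate, with a
hypothetical helper name:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Two half-open ranges [a, a+a_size) and [b, b+b_size) overlap iff each
 * starts before the other ends -- the same condition that
 * hvf_find_overlap_slot() applies to every populated slot. */
static bool ranges_overlap(uint64_t a, uint64_t a_size,
                           uint64_t b, uint64_t b_size)
{
    return a < b + b_size && b < a + a_size;
}

int main(void)
{
    assert(ranges_overlap(0x1000, 0x1000, 0x1800, 0x1000));  /* partial */
    assert(!ranges_overlap(0x1000, 0x1000, 0x2000, 0x1000)); /* adjacent */
    return 0;
}
]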
Signed-off-by: Alexander Graf Reviewed-by: Sergio Lopez --- accel/hvf/hvf-accel-ops.c | 308 ++++++++++++++++++++++++++++++++++++- include/sysemu/hvf_int.h | 4 + target/i386/hvf/hvf-i386.h | 2 - target/i386/hvf/hvf.c | 302 ------------------------------------ target/i386/hvf/x86hvf.h | 2 - 5 files changed, 311 insertions(+), 307 deletions(-) diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c index cbaad238e0..c2136dfbb8 100644 --- a/accel/hvf/hvf-accel-ops.c +++ b/accel/hvf/hvf-accel-ops.c @@ -50,13 +50,319 @@ #include "qemu/osdep.h" #include "qemu/error-report.h" #include "qemu/main-loop.h" +#include "exec/address-spaces.h" +#include "exec/exec-all.h" +#include "sysemu/cpus.h" #include "sysemu/hvf.h" +#include "sysemu/hvf_int.h" #include "sysemu/runstate.h" -#include "target/i386/cpu.h" #include "qemu/guest-random.h" #include "hvf-accel-ops.h" +HVFState *hvf_state; + +/* Memory slots */ + +hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size) +{ + hvf_slot *slot; + int x; + for (x = 0; x < hvf_state->num_slots; ++x) { + slot = &hvf_state->slots[x]; + if (slot->size && start < (slot->start + slot->size) && + (start + size) > slot->start) { + return slot; + } + } + return NULL; +} + +struct mac_slot { + int present; + uint64_t size; + uint64_t gpa_start; + uint64_t gva; +}; + +struct mac_slot mac_slots[32]; + +static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags) +{ + struct mac_slot *macslot; + hv_return_t ret; + + macslot = &mac_slots[slot->slot_id]; + + if (macslot->present) { + if (macslot->size != slot->size) { + macslot->present = 0; + ret = hv_vm_unmap(macslot->gpa_start, macslot->size); + assert_hvf_ok(ret); + } + } + + if (!slot->size) { + return 0; + } + + macslot->present = 1; + macslot->gpa_start = slot->start; + macslot->size = slot->size; + ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags); + assert_hvf_ok(ret); + return 0; +} + +void hvf_set_phys_mem(MemoryRegionSection *section, bool add) +{ + hvf_slot *mem; + MemoryRegion *area = section->mr; + bool writeable = !area->readonly && !area->rom_device; + hv_memory_flags_t flags; + + if (!memory_region_is_ram(area)) { + if (writeable) { + return; + } else if (!memory_region_is_romd(area)) { + /* + * If the memory device is not in romd_mode, then we actually want + * to remove the hvf memory slot so all accesses will trap. + */ + add = false; + } + } + + mem = hvf_find_overlap_slot( + section->offset_within_address_space, + int128_get64(section->size)); + + if (mem && add) { + if (mem->size == int128_get64(section->size) && + mem->start == section->offset_within_address_space && + mem->mem == (memory_region_get_ram_ptr(area) + + section->offset_within_region)) { + return; /* Same region was attempted to register, go away. */ + } + } + + /* Region needs to be reset. set the size to 0 and remap it. */ + if (mem) { + mem->size = 0; + if (do_hvf_set_memory(mem, 0)) { + error_report("Failed to reset overlapping slot"); + abort(); + } + } + + if (!add) { + return; + } + + if (area->readonly || + (!memory_region_is_ram(area) && memory_region_is_romd(area))) { + flags = HV_MEMORY_READ | HV_MEMORY_EXEC; + } else { + flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC; + } + + /* Now make a new slot. 
*/ + int x; + + for (x = 0; x < hvf_state->num_slots; ++x) { + mem = &hvf_state->slots[x]; + if (!mem->size) { + break; + } + } + + if (x == hvf_state->num_slots) { + error_report("No free slots"); + abort(); + } + + mem->size = int128_get64(section->size); + mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region; + mem->start = section->offset_within_address_space; + mem->region = area; + + if (do_hvf_set_memory(mem, flags)) { + error_report("Error registering new memory slot"); + abort(); + } +} + +static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg) +{ + if (!cpu->vcpu_dirty) { + hvf_get_registers(cpu); + cpu->vcpu_dirty = true; + } +} + +void hvf_cpu_synchronize_state(CPUState *cpu) +{ + if (!cpu->vcpu_dirty) { + run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL); + } +} + +static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu, + run_on_cpu_data arg) +{ + hvf_put_registers(cpu); + cpu->vcpu_dirty = false; +} + +void hvf_cpu_synchronize_post_reset(CPUState *cpu) +{ + run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL); +} + +static void do_hvf_cpu_synchronize_post_init(CPUState *cpu, + run_on_cpu_data arg) +{ + hvf_put_registers(cpu); + cpu->vcpu_dirty = false; +} + +void hvf_cpu_synchronize_post_init(CPUState *cpu) +{ + run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL); +} + +static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu, + run_on_cpu_data arg) +{ + cpu->vcpu_dirty = true; +} + +void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu) +{ + run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL); +} + +static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on) +{ + hvf_slot *slot; + + slot = hvf_find_overlap_slot( + section->offset_within_address_space, + int128_get64(section->size)); + + /* protect region against writes; begin tracking it */ + if (on) { + slot->flags |= HVF_SLOT_LOG; + hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size, + HV_MEMORY_READ); + /* stop tracking region*/ + } else { + slot->flags &= ~HVF_SLOT_LOG; + hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size, + HV_MEMORY_READ | HV_MEMORY_WRITE); + } +} + +static void hvf_log_start(MemoryListener *listener, + MemoryRegionSection *section, int old, int new) +{ + if (old != 0) { + return; + } + + hvf_set_dirty_tracking(section, 1); +} + +static void hvf_log_stop(MemoryListener *listener, + MemoryRegionSection *section, int old, int new) +{ + if (new != 0) { + return; + } + + hvf_set_dirty_tracking(section, 0); +} + +static void hvf_log_sync(MemoryListener *listener, + MemoryRegionSection *section) +{ + /* + * sync of dirty pages is handled elsewhere; just make sure we keep + * tracking the region. 
+ */ + hvf_set_dirty_tracking(section, 1); +} + +static void hvf_region_add(MemoryListener *listener, + MemoryRegionSection *section) +{ + hvf_set_phys_mem(section, true); +} + +static void hvf_region_del(MemoryListener *listener, + MemoryRegionSection *section) +{ + hvf_set_phys_mem(section, false); +} + +static MemoryListener hvf_memory_listener = { + .priority = 10, + .region_add = hvf_region_add, + .region_del = hvf_region_del, + .log_start = hvf_log_start, + .log_stop = hvf_log_stop, + .log_sync = hvf_log_sync, +}; + +static void dummy_signal(int sig) +{ +} + +bool hvf_allowed; + +static int hvf_accel_init(MachineState *ms) +{ + int x; + hv_return_t ret; + HVFState *s; + + ret = hv_vm_create(HV_VM_DEFAULT); + assert_hvf_ok(ret); + + s = g_new0(HVFState, 1); + + s->num_slots = 32; + for (x = 0; x < s->num_slots; ++x) { + s->slots[x].size = 0; + s->slots[x].slot_id = x; + } + + hvf_state = s; + memory_listener_register(&hvf_memory_listener, &address_space_memory); + return 0; +} + +static void hvf_accel_class_init(ObjectClass *oc, void *data) +{ + AccelClass *ac = ACCEL_CLASS(oc); + ac->name = "HVF"; + ac->init_machine = hvf_accel_init; + ac->allowed = &hvf_allowed; +} + +static const TypeInfo hvf_accel_type = { + .name = TYPE_HVF_ACCEL, + .parent = TYPE_ACCEL, + .class_init = hvf_accel_class_init, +}; + +static void hvf_type_init(void) +{ + type_register_static(&hvf_accel_type); +} + +type_init(hvf_type_init); + /* * The HVF-specific vCPU thread function. This one should only run when the host * CPU supports the VMX "unrestricted guest" feature. diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h index 3deb4cfacc..4c657b054c 100644 --- a/include/sysemu/hvf_int.h +++ b/include/sysemu/hvf_int.h @@ -13,6 +13,10 @@ #include +void hvf_set_phys_mem(MemoryRegionSection *, bool); void assert_hvf_ok(hv_return_t ret); +hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t); +int hvf_put_registers(CPUState *); +int hvf_get_registers(CPUState *); #endif diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h index 59cfca8875..94e5c788c4 100644 --- a/target/i386/hvf/hvf-i386.h +++ b/target/i386/hvf/hvf-i386.h @@ -51,9 +51,7 @@ struct HVFState { }; extern HVFState *hvf_state; -void hvf_set_phys_mem(MemoryRegionSection *, bool); void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int); -hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t); #ifdef NEED_CPU_H /* Functions exported to host specific mode */ diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c index 32f42f1592..100ede2a4d 100644 --- a/target/i386/hvf/hvf.c +++ b/target/i386/hvf/hvf.c @@ -75,137 +75,6 @@ #include "hvf-accel-ops.h" -HVFState *hvf_state; - -/* Memory slots */ -hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size) -{ - hvf_slot *slot; - int x; - for (x = 0; x < hvf_state->num_slots; ++x) { - slot = &hvf_state->slots[x]; - if (slot->size && start < (slot->start + slot->size) && - (start + size) > slot->start) { - return slot; - } - } - return NULL; -} - -struct mac_slot { - int present; - uint64_t size; - uint64_t gpa_start; - uint64_t gva; -}; - -struct mac_slot mac_slots[32]; - -static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags) -{ - struct mac_slot *macslot; - hv_return_t ret; - - macslot = &mac_slots[slot->slot_id]; - - if (macslot->present) { - if (macslot->size != slot->size) { - macslot->present = 0; - ret = hv_vm_unmap(macslot->gpa_start, macslot->size); - assert_hvf_ok(ret); - } - } - - if (!slot->size) { - return 0; - } - - macslot->present = 1; - 
macslot->gpa_start = slot->start; - macslot->size = slot->size; - ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags); - assert_hvf_ok(ret); - return 0; -} - -void hvf_set_phys_mem(MemoryRegionSection *section, bool add) -{ - hvf_slot *mem; - MemoryRegion *area = section->mr; - bool writeable = !area->readonly && !area->rom_device; - hv_memory_flags_t flags; - - if (!memory_region_is_ram(area)) { - if (writeable) { - return; - } else if (!memory_region_is_romd(area)) { - /* - * If the memory device is not in romd_mode, then we actually want - * to remove the hvf memory slot so all accesses will trap. - */ - add = false; - } - } - - mem = hvf_find_overlap_slot( - section->offset_within_address_space, - int128_get64(section->size)); - - if (mem && add) { - if (mem->size == int128_get64(section->size) && - mem->start == section->offset_within_address_space && - mem->mem == (memory_region_get_ram_ptr(area) + - section->offset_within_region)) { - return; /* Same region was attempted to register, go away. */ - } - } - - /* Region needs to be reset. set the size to 0 and remap it. */ - if (mem) { - mem->size = 0; - if (do_hvf_set_memory(mem, 0)) { - error_report("Failed to reset overlapping slot"); - abort(); - } - } - - if (!add) { - return; - } - - if (area->readonly || - (!memory_region_is_ram(area) && memory_region_is_romd(area))) { - flags = HV_MEMORY_READ | HV_MEMORY_EXEC; - } else { - flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC; - } - - /* Now make a new slot. */ - int x; - - for (x = 0; x < hvf_state->num_slots; ++x) { - mem = &hvf_state->slots[x]; - if (!mem->size) { - break; - } - } - - if (x == hvf_state->num_slots) { - error_report("No free slots"); - abort(); - } - - mem->size = int128_get64(section->size); - mem->mem = memory_region_get_ram_ptr(area) + section->offset_within_region; - mem->start = section->offset_within_address_space; - mem->region = area; - - if (do_hvf_set_memory(mem, flags)) { - error_report("Error registering new memory slot"); - abort(); - } -} - void vmx_update_tpr(CPUState *cpu) { /* TODO: need integrate APIC handling */ @@ -245,56 +114,6 @@ void hvf_handle_io(CPUArchState *env, uint16_t port, void *buffer, } } -static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg) -{ - if (!cpu->vcpu_dirty) { - hvf_get_registers(cpu); - cpu->vcpu_dirty = true; - } -} - -void hvf_cpu_synchronize_state(CPUState *cpu) -{ - if (!cpu->vcpu_dirty) { - run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL); - } -} - -static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu, - run_on_cpu_data arg) -{ - hvf_put_registers(cpu); - cpu->vcpu_dirty = false; -} - -void hvf_cpu_synchronize_post_reset(CPUState *cpu) -{ - run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL); -} - -static void do_hvf_cpu_synchronize_post_init(CPUState *cpu, - run_on_cpu_data arg) -{ - hvf_put_registers(cpu); - cpu->vcpu_dirty = false; -} - -void hvf_cpu_synchronize_post_init(CPUState *cpu) -{ - run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL); -} - -static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu, - run_on_cpu_data arg) -{ - cpu->vcpu_dirty = true; -} - -void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu) -{ - run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL); -} - static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual) { int read, write; @@ -339,78 +158,6 @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual) return false; } 
-static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on) -{ - hvf_slot *slot; - - slot = hvf_find_overlap_slot( - section->offset_within_address_space, - int128_get64(section->size)); - - /* protect region against writes; begin tracking it */ - if (on) { - slot->flags |= HVF_SLOT_LOG; - hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size, - HV_MEMORY_READ); - /* stop tracking region*/ - } else { - slot->flags &= ~HVF_SLOT_LOG; - hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size, - HV_MEMORY_READ | HV_MEMORY_WRITE); - } -} - -static void hvf_log_start(MemoryListener *listener, - MemoryRegionSection *section, int old, int new) -{ - if (old != 0) { - return; - } - - hvf_set_dirty_tracking(section, 1); -} - -static void hvf_log_stop(MemoryListener *listener, - MemoryRegionSection *section, int old, int new) -{ - if (new != 0) { - return; - } - - hvf_set_dirty_tracking(section, 0); -} - -static void hvf_log_sync(MemoryListener *listener, - MemoryRegionSection *section) -{ - /* - * sync of dirty pages is handled elsewhere; just make sure we keep - * tracking the region. - */ - hvf_set_dirty_tracking(section, 1); -} - -static void hvf_region_add(MemoryListener *listener, - MemoryRegionSection *section) -{ - hvf_set_phys_mem(section, true); -} - -static void hvf_region_del(MemoryListener *listener, - MemoryRegionSection *section) -{ - hvf_set_phys_mem(section, false); -} - -static MemoryListener hvf_memory_listener = { - .priority = 10, - .region_add = hvf_region_add, - .region_del = hvf_region_del, - .log_start = hvf_log_start, - .log_stop = hvf_log_stop, - .log_sync = hvf_log_sync, -}; - void hvf_vcpu_destroy(CPUState *cpu) { X86CPU *x86_cpu = X86_CPU(cpu); @@ -421,10 +168,6 @@ void hvf_vcpu_destroy(CPUState *cpu) assert_hvf_ok(ret); } -static void dummy_signal(int sig) -{ -} - static void init_tsc_freq(CPUX86State *env) { size_t length; @@ -931,48 +674,3 @@ int hvf_vcpu_exec(CPUState *cpu) return ret; } - -bool hvf_allowed; - -static int hvf_accel_init(MachineState *ms) -{ - int x; - hv_return_t ret; - HVFState *s; - - ret = hv_vm_create(HV_VM_DEFAULT); - assert_hvf_ok(ret); - - s = g_new0(HVFState, 1); - - s->num_slots = 32; - for (x = 0; x < s->num_slots; ++x) { - s->slots[x].size = 0; - s->slots[x].slot_id = x; - } - - hvf_state = s; - memory_listener_register(&hvf_memory_listener, &address_space_memory); - return 0; -} - -static void hvf_accel_class_init(ObjectClass *oc, void *data) -{ - AccelClass *ac = ACCEL_CLASS(oc); - ac->name = "HVF"; - ac->init_machine = hvf_accel_init; - ac->allowed = &hvf_allowed; -} - -static const TypeInfo hvf_accel_type = { - .name = TYPE_HVF_ACCEL, - .parent = TYPE_ACCEL, - .class_init = hvf_accel_class_init, -}; - -static void hvf_type_init(void) -{ - type_register_static(&hvf_accel_type); -} - -type_init(hvf_type_init); diff --git a/target/i386/hvf/x86hvf.h b/target/i386/hvf/x86hvf.h index 635ab0f34e..99ed8d608d 100644 --- a/target/i386/hvf/x86hvf.h +++ b/target/i386/hvf/x86hvf.h @@ -21,8 +21,6 @@ #include "x86_descr.h" int hvf_process_events(CPUState *); -int hvf_put_registers(CPUState *); -int hvf_get_registers(CPUState *); bool hvf_inject_interrupts(CPUState *); void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, SegmentCache *qseg, bool is_tr); From patchwork Wed May 19 20:22:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268489 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 
From: Alexander Graf <agraf@csgraf.de>
To: QEMU Developers <qemu-devel@nongnu.org>
Subject: [PATCH v8 04/19] hvf: Move hvf internal definitions into common header
Date: Wed, 19 May 2021 22:22:38 +0200
Message-Id: <20210519202253.76782-5-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64
systems. With Apple Silicon shipping now, it extends its reach to
aarch64. To prepare for support for multiple architectures, let's start
moving common code out into its own accel directory.

This patch moves a few internal struct and constant defines over.
Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez
---
 include/sysemu/hvf_int.h   | 30 ++++++++++++++++++++++++++++++
 target/i386/hvf/hvf-i386.h | 31 +------------------------------
 2 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index 4c657b054c..ef84a24dd9 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -13,6 +13,36 @@
 
 #include <Hypervisor/hv.h>
 
+/* hvf_slot flags */
+#define HVF_SLOT_LOG (1 << 0)
+
+typedef struct hvf_slot {
+    uint64_t start;
+    uint64_t size;
+    uint8_t *mem;
+    int slot_id;
+    uint32_t flags;
+    MemoryRegion *region;
+} hvf_slot;
+
+typedef struct hvf_vcpu_caps {
+    uint64_t vmx_cap_pinbased;
+    uint64_t vmx_cap_procbased;
+    uint64_t vmx_cap_procbased2;
+    uint64_t vmx_cap_entry;
+    uint64_t vmx_cap_exit;
+    uint64_t vmx_cap_preemption_timer;
+} hvf_vcpu_caps;
+
+struct HVFState {
+    AccelState parent;
+    hvf_slot slots[32];
+    int num_slots;
+
+    hvf_vcpu_caps *hvf_caps;
+};
+extern HVFState *hvf_state;
+
 void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void assert_hvf_ok(hv_return_t ret);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
diff --git a/target/i386/hvf/hvf-i386.h b/target/i386/hvf/hvf-i386.h
index 94e5c788c4..76e9235524 100644
--- a/target/i386/hvf/hvf-i386.h
+++ b/target/i386/hvf/hvf-i386.h
@@ -18,39 +18,10 @@
 
 #include "qemu/accel.h"
 #include "sysemu/hvf.h"
+#include "sysemu/hvf_int.h"
 #include "cpu.h"
 #include "x86.h"
 
-/* hvf_slot flags */
-#define HVF_SLOT_LOG (1 << 0)
-
-typedef struct hvf_slot {
-    uint64_t start;
-    uint64_t size;
-    uint8_t *mem;
-    int slot_id;
-    uint32_t flags;
-    MemoryRegion *region;
-} hvf_slot;
-
-typedef struct hvf_vcpu_caps {
-    uint64_t vmx_cap_pinbased;
-    uint64_t vmx_cap_procbased;
-    uint64_t vmx_cap_procbased2;
-    uint64_t vmx_cap_entry;
-    uint64_t vmx_cap_exit;
-    uint64_t vmx_cap_preemption_timer;
-} hvf_vcpu_caps;
-
-struct HVFState {
-    AccelState parent;
-    hvf_slot slots[32];
-    int num_slots;
-
-    hvf_vcpu_caps *hvf_caps;
-};
-extern HVFState *hvf_state;
-
 void hvf_handle_io(CPUArchState *, uint16_t, void *, int, int, int);
 
 #ifdef NEED_CPU_H
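[With hvf_slot and HVF_SLOT_LOG now visible outside target/i386,
architecture-independent code can test and toggle the dirty-logging
flag on a slot. A tiny self-contained sketch of the bit-flag idiom
these definitions rely on; the struct here is a pared-down stand-in for
the real hvf_slot, not the full definition:

#include <assert.h>
#include <stdint.h>

#define HVF_SLOT_LOG (1 << 0)   /* slot has dirty logging enabled */

/* Pared-down stand-in for the hvf_slot from sysemu/hvf_int.h. */
typedef struct {
    uint64_t start;
    uint64_t size;
    uint32_t flags;
} hvf_slot;

int main(void)
{
    hvf_slot slot = { .start = 0x4000, .size = 0x1000, .flags = 0 };

    slot.flags |= HVF_SLOT_LOG;            /* begin tracking writes */
    assert(slot.flags & HVF_SLOT_LOG);

    slot.flags &= ~HVF_SLOT_LOG;           /* stop tracking */
    assert(!(slot.flags & HVF_SLOT_LOG));
    return 0;
}
]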
From patchwork Wed May 19 20:22:39 2021
From: Alexander Graf <agraf@csgraf.de>
To: QEMU Developers <qemu-devel@nongnu.org>
Subject: [PATCH v8 05/19] hvf: Make hvf_set_phys_mem() static
Date: Wed, 19 May 2021 22:22:39 +0200
Message-Id: <20210519202253.76782-6-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

The hvf_set_phys_mem() function is only called within the same file.
Make it static.
Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez
---
 accel/hvf/hvf-accel-ops.c | 2 +-
 include/sysemu/hvf_int.h  | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index c2136dfbb8..5bec7b4d6d 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -114,7 +114,7 @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
     return 0;
 }
 
-void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
+static void hvf_set_phys_mem(MemoryRegionSection *section, bool add)
 {
     hvf_slot *mem;
     MemoryRegion *area = section->mr;
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index ef84a24dd9..d15fa3302a 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -43,7 +43,6 @@ struct HVFState {
 };
 extern HVFState *hvf_state;
 
-void hvf_set_phys_mem(MemoryRegionSection *, bool);
 void assert_hvf_ok(hv_return_t ret);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 int hvf_put_registers(CPUState *);

From patchwork Wed May 19 20:22:40 2021
From: Alexander Graf <agraf@csgraf.de>
To: QEMU Developers <qemu-devel@nongnu.org>
Subject: [PATCH v8 06/19] hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
Date: Wed, 19 May 2021 22:22:40 +0200
Message-Id: <20210519202253.76782-7-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>
The ARM version of Hypervisor.framework no longer defines these two
types, so let's just revert to standard ones.

Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez
---
 accel/hvf/hvf-accel-ops.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 5bec7b4d6d..7370fcfba0 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -109,7 +109,7 @@ static int do_hvf_set_memory(hvf_slot *slot, hv_memory_flags_t flags)
     macslot->present = 1;
     macslot->gpa_start = slot->start;
     macslot->size = slot->size;
-    ret = hv_vm_map((hv_uvaddr_t)slot->mem, slot->start, slot->size, flags);
+    ret = hv_vm_map(slot->mem, slot->start, slot->size, flags);
     assert_hvf_ok(ret);
     return 0;
 }
@@ -253,12 +253,12 @@ static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on)
     /* protect region against writes; begin tracking it */
     if (on) {
         slot->flags |= HVF_SLOT_LOG;
-        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
+        hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
                       HV_MEMORY_READ);
     /* stop tracking region*/
     } else {
         slot->flags &= ~HVF_SLOT_LOG;
-        hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
+        hv_vm_protect((uintptr_t)slot->start, (size_t)slot->size,
                       HV_MEMORY_READ | HV_MEMORY_WRITE);
     }
 }
From patchwork Wed May 19 20:22:41 2021
From: Alexander Graf <agraf@csgraf.de>
To: QEMU Developers <qemu-devel@nongnu.org>
Subject: [PATCH v8 07/19] hvf: Split out common code on vcpu init and destroy
Date: Wed, 19 May 2021 22:22:41 +0200
Message-Id: <20210519202253.76782-8-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

Until now, Hypervisor.framework has only been available on x86_64
systems. With Apple Silicon shipping now, it extends its reach to
aarch64. To prepare for support for multiple architectures, let's start
moving common code out into its own accel directory.

This patch splits the vcpu init and destroy functions into a generic
and an architecture-specific portion. This also allows us to move the
generic functions into the generic hvf code, removing exported
functions.

Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez
---
 accel/hvf/hvf-accel-ops.c | 30 ++++++++++++++++++++++++++++++
 accel/hvf/hvf-accel-ops.h |  2 --
 include/sysemu/hvf_int.h  |  2 ++
 target/i386/hvf/hvf.c     | 23 ++---------------------
 4 files changed, 34 insertions(+), 23 deletions(-)

diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c
index 7370fcfba0..b262efd8b6 100644
--- a/accel/hvf/hvf-accel-ops.c
+++ b/accel/hvf/hvf-accel-ops.c
@@ -363,6 +363,36 @@ static void hvf_type_init(void)
 
 type_init(hvf_type_init);
 
+static void hvf_vcpu_destroy(CPUState *cpu)
+{
+    hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd);
+    assert_hvf_ok(ret);
+
+    hvf_arch_vcpu_destroy(cpu);
+}
+
+static int hvf_init_vcpu(CPUState *cpu)
+{
+    int r;
+
+    /* init cpu signals */
+    sigset_t set;
+    struct sigaction sigact;
+
+    memset(&sigact, 0, sizeof(sigact));
+    sigact.sa_handler = dummy_signal;
+    sigaction(SIG_IPI, &sigact, NULL);
+
+    pthread_sigmask(SIG_BLOCK, NULL, &set);
+    sigdelset(&set, SIG_IPI);
+
+    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
+    cpu->vcpu_dirty = 1;
+    assert_hvf_ok(r);
+
+    return hvf_arch_init_vcpu(cpu);
+}
+
 /*
  * The HVF-specific vCPU thread function. This one should only run when the host
  * CPU supports the VMX "unrestricted guest" feature.
diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h
index 8f992da168..09fcf22067 100644
--- a/accel/hvf/hvf-accel-ops.h
+++ b/accel/hvf/hvf-accel-ops.h
@@ -12,12 +12,10 @@
 
 #include "sysemu/cpus.h"
 
-int hvf_init_vcpu(CPUState *);
 int hvf_vcpu_exec(CPUState *);
 void hvf_cpu_synchronize_state(CPUState *);
 void hvf_cpu_synchronize_post_reset(CPUState *);
 void hvf_cpu_synchronize_post_init(CPUState *);
 void hvf_cpu_synchronize_pre_loadvm(CPUState *);
-void hvf_vcpu_destroy(CPUState *);
 
 #endif /* HVF_CPUS_H */
diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h
index d15fa3302a..80c1a8f946 100644
--- a/include/sysemu/hvf_int.h
+++ b/include/sysemu/hvf_int.h
@@ -44,6 +44,8 @@ struct HVFState {
 extern HVFState *hvf_state;
 
 void assert_hvf_ok(hv_return_t ret);
+int hvf_arch_init_vcpu(CPUState *cpu);
+void hvf_arch_vcpu_destroy(CPUState *cpu);
 hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t);
 int hvf_put_registers(CPUState *);
 int hvf_get_registers(CPUState *);
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 100ede2a4d..c7132ee370 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -158,14 +158,12 @@ static bool ept_emulation_fault(hvf_slot *slot, uint64_t gpa, uint64_t ept_qual)
     return false;
 }
 
-void hvf_vcpu_destroy(CPUState *cpu)
+void hvf_arch_vcpu_destroy(CPUState *cpu)
 {
     X86CPU *x86_cpu = X86_CPU(cpu);
     CPUX86State *env = &x86_cpu->env;
 
-    hv_return_t ret = hv_vcpu_destroy((hv_vcpuid_t)cpu->hvf_fd);
     g_free(env->hvf_mmio_buf);
-    assert_hvf_ok(ret);
 }
 
 static void init_tsc_freq(CPUX86State *env)
@@ -210,23 +208,10 @@ static inline bool apic_bus_freq_is_known(CPUX86State *env)
     return env->apic_bus_freq != 0;
 }
 
-int hvf_init_vcpu(CPUState *cpu)
+int hvf_arch_init_vcpu(CPUState *cpu)
 {
-
     X86CPU *x86cpu = X86_CPU(cpu);
     CPUX86State *env = &x86cpu->env;
-    int r;
-
-    /* init cpu signals */
-    sigset_t set;
-    struct sigaction sigact;
-
-    memset(&sigact, 0, sizeof(sigact));
-    sigact.sa_handler = dummy_signal;
-    sigaction(SIG_IPI, &sigact, NULL);
-
-    pthread_sigmask(SIG_BLOCK, NULL, &set);
-    sigdelset(&set, SIG_IPI);
 
     init_emu();
     init_decoder();
@@ -243,10 +228,6 @@ int hvf_init_vcpu(CPUState *cpu)
         }
     }
 
-    r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT);
-    cpu->vcpu_dirty = 1;
-    assert_hvf_ok(r);
-
     if (hv_vmx_read_capability(HV_VMX_CAP_PINBASED,
         &hvf_state->hvf_caps->vmx_cap_pinbased)) {
         abort();
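[The shape of this split is a classic generic-core/arch-hook layering:
the common code owns the framework-level lifecycle (hv_vcpu_create()
and hv_vcpu_destroy() plus signal setup) and delegates the rest through
hvf_arch_init_vcpu()/hvf_arch_vcpu_destroy(), which each target
implements. A condensed sketch of the pattern with illustrative names,
not the QEMU code itself:

#include <stdio.h>

/* Each architecture supplies its own hook implementations. */
static int arch_init_vcpu(int vcpu_id)
{
    printf("arch-specific init for vcpu %d\n", vcpu_id);
    return 0;
}

static void arch_destroy_vcpu(int vcpu_id)
{
    printf("arch-specific teardown for vcpu %d\n", vcpu_id);
}

/* Generic layer: owns the common lifecycle, calls into the arch hooks. */
static int generic_init_vcpu(int vcpu_id)
{
    /* ... common setup (create the HVF vcpu, mask IPIs) would go here ... */
    return arch_init_vcpu(vcpu_id);
}

static void generic_destroy_vcpu(int vcpu_id)
{
    /* ... common teardown (destroy the HVF vcpu) would go here ... */
    arch_destroy_vcpu(vcpu_id);
}

int main(void)
{
    generic_init_vcpu(0);
    generic_destroy_vcpu(0);
    return 0;
}
]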
From patchwork Wed May 19 20:22:42 2021
From: Alexander Graf <agraf@csgraf.de>
To: QEMU Developers <qemu-devel@nongnu.org>
Subject: [PATCH v8 08/19] hvf: Use cpu_synchronize_state()
Date: Wed, 19 May 2021 22:22:42 +0200
Message-Id: <20210519202253.76782-9-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

There is no reason to call the hvf-specific hvf_cpu_synchronize_state()
when we can just use the generic cpu_synchronize_state() instead. This
reduces our dependency on internal function definitions and lets us
make hvf_cpu_synchronize_state() static.
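[The reason the generic wrapper is sufficient is the lazy-sync protocol
both share: registers are pulled from the hypervisor only on demand,
and a per-vCPU dirty flag records that the QEMU-side copy is now
authoritative. A reduced sketch of that protocol in plain C; the
accessor is stubbed and all names are illustrative:

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool vcpu_dirty;    /* QEMU's register copy is newer than HVF's */
    long regs_copy;     /* stand-in for the cached register state */
} VCpu;

/* Stub for pulling register state out of the hypervisor. */
static void get_registers(VCpu *cpu)
{
    cpu->regs_copy = 42;
    printf("registers fetched from the hypervisor\n");
}

/* Fetch registers at most once until they are pushed back. */
static void cpu_synchronize_state_sketch(VCpu *cpu)
{
    if (!cpu->vcpu_dirty) {
        get_registers(cpu);
        cpu->vcpu_dirty = true;   /* cached copy is now authoritative */
    }
}

int main(void)
{
    VCpu cpu = { 0 };
    cpu_synchronize_state_sketch(&cpu);  /* fetches */
    cpu_synchronize_state_sketch(&cpu);  /* no-op: already synchronized */
    return 0;
}
]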
Signed-off-by: Alexander Graf Reviewed-by: Sergio Lopez --- accel/hvf/hvf-accel-ops.c | 2 +- accel/hvf/hvf-accel-ops.h | 1 - target/i386/hvf/x86hvf.c | 9 ++++----- 3 files changed, 5 insertions(+), 7 deletions(-) diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c index b262efd8b6..3b599ac57c 100644 --- a/accel/hvf/hvf-accel-ops.c +++ b/accel/hvf/hvf-accel-ops.c @@ -200,7 +200,7 @@ static void do_hvf_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg) } } -void hvf_cpu_synchronize_state(CPUState *cpu) +static void hvf_cpu_synchronize_state(CPUState *cpu) { if (!cpu->vcpu_dirty) { run_on_cpu(cpu, do_hvf_cpu_synchronize_state, RUN_ON_CPU_NULL); diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h index 09fcf22067..f6192b56f0 100644 --- a/accel/hvf/hvf-accel-ops.h +++ b/accel/hvf/hvf-accel-ops.h @@ -13,7 +13,6 @@ #include "sysemu/cpus.h" int hvf_vcpu_exec(CPUState *); -void hvf_cpu_synchronize_state(CPUState *); void hvf_cpu_synchronize_post_reset(CPUState *); void hvf_cpu_synchronize_post_init(CPUState *); void hvf_cpu_synchronize_pre_loadvm(CPUState *); diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c index 2b99f3eaa2..cc381307ab 100644 --- a/target/i386/hvf/x86hvf.c +++ b/target/i386/hvf/x86hvf.c @@ -26,14 +26,13 @@ #include "cpu.h" #include "x86_descr.h" #include "x86_decode.h" +#include "sysemu/hw_accel.h" #include "hw/i386/apic_internal.h" #include #include -#include "accel/hvf/hvf-accel-ops.h" - void hvf_set_segment(struct CPUState *cpu, struct vmx_segment *vmx_seg, SegmentCache *qseg, bool is_tr) { @@ -437,7 +436,7 @@ int hvf_process_events(CPUState *cpu_state) env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS); if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) { - hvf_cpu_synchronize_state(cpu_state); + cpu_synchronize_state(cpu_state); do_cpu_init(cpu); } @@ -451,12 +450,12 @@ int hvf_process_events(CPUState *cpu_state) cpu_state->halted = 0; } if (cpu_state->interrupt_request & CPU_INTERRUPT_SIPI) { - hvf_cpu_synchronize_state(cpu_state); + cpu_synchronize_state(cpu_state); do_cpu_sipi(cpu); } if (cpu_state->interrupt_request & CPU_INTERRUPT_TPR) { cpu_state->interrupt_request &= ~CPU_INTERRUPT_TPR; - hvf_cpu_synchronize_state(cpu_state); + cpu_synchronize_state(cpu_state); apic_handle_tpr_access_report(cpu->apic_state, env->eip, env->tpr_access_type); } From patchwork Wed May 19 20:22:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268517 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CEBFC433ED for ; Wed, 19 May 2021 20:35:50 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id ED861611C2 for ; Wed, 19 May 2021 20:35:49 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org ED861611C2 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=csgraf.de Authentication-Results: 
mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:33814 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSuz-0006sZ-3x for qemu-devel@archiver.kernel.org; Wed, 19 May 2021 16:35:49 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:53402) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ljSj2-0004mu-2h; Wed, 19 May 2021 16:23:28 -0400 Received: from mail.csgraf.de ([85.25.223.15]:48278 helo=zulu616.server4you.de) by eggs.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSim-0003JN-MF; Wed, 19 May 2021 16:23:27 -0400 Received: from localhost.localdomain (dynamic-095-114-039-201.95.114.pool.telefonica.de [95.114.39.201]) by csgraf.de (Postfix) with ESMTPSA id BDF2860806A1; Wed, 19 May 2021 22:23:00 +0200 (CEST) From: Alexander Graf To: QEMU Developers Subject: [PATCH v8 09/19] hvf: Make synchronize functions static Date: Wed, 19 May 2021 22:22:43 +0200 Message-Id: <20210519202253.76782-10-agraf@csgraf.de> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de> References: <20210519202253.76782-1-agraf@csgraf.de> MIME-Version: 1.0 Received-SPF: pass client-ip=85.25.223.15; envelope-from=agraf@csgraf.de; helo=zulu616.server4you.de X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Eduardo Habkost , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Richard Henderson , Cameron Esfahani , Roman Bolshakov , qemu-arm , Frank Yang , Paolo Bonzini , Peter Collingbourne Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" The hvf accel synchronize functions are only used as input for local callback functions, so we can make them static. 
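To see why no external declaration remains necessary, here is a sketch of the callback wiring in hvf-accel-ops.c; the surrounding class-init boilerplate is abbreviated, and the create_vcpu_thread assignment is assumed from context rather than quoted:

/*
 * Sketch: the now-static helpers stay reachable only through the
 * AccelOpsClass function pointers filled in at class init time.
 */
static void hvf_accel_ops_class_init(ObjectClass *oc, void *data)
{
    AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);

    ops->create_vcpu_thread = hvf_start_vcpu_thread;
    ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset;
    ops->synchronize_post_init = hvf_cpu_synchronize_post_init;
    ops->synchronize_state = hvf_cpu_synchronize_state;
    ops->synchronize_pre_loadvm = hvf_cpu_synchronize_pre_loadvm;
}

Since the generic cpu_synchronize_*() wrappers are the only callers, dropping the prototypes from hvf-accel-ops.h loses nothing.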
Signed-off-by: Alexander Graf Reviewed-by: Sergio Lopez --- accel/hvf/hvf-accel-ops.c | 6 +++--- accel/hvf/hvf-accel-ops.h | 3 --- 2 files changed, 3 insertions(+), 6 deletions(-) diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c index 3b599ac57c..69741ce708 100644 --- a/accel/hvf/hvf-accel-ops.c +++ b/accel/hvf/hvf-accel-ops.c @@ -214,7 +214,7 @@ static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu, cpu->vcpu_dirty = false; } -void hvf_cpu_synchronize_post_reset(CPUState *cpu) +static void hvf_cpu_synchronize_post_reset(CPUState *cpu) { run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL); } @@ -226,7 +226,7 @@ static void do_hvf_cpu_synchronize_post_init(CPUState *cpu, cpu->vcpu_dirty = false; } -void hvf_cpu_synchronize_post_init(CPUState *cpu) +static void hvf_cpu_synchronize_post_init(CPUState *cpu) { run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL); } @@ -237,7 +237,7 @@ static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu, cpu->vcpu_dirty = true; } -void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu) +static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu) { run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL); } diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h index f6192b56f0..018a4e22f6 100644 --- a/accel/hvf/hvf-accel-ops.h +++ b/accel/hvf/hvf-accel-ops.h @@ -13,8 +13,5 @@ #include "sysemu/cpus.h" int hvf_vcpu_exec(CPUState *); -void hvf_cpu_synchronize_post_reset(CPUState *); -void hvf_cpu_synchronize_post_init(CPUState *); -void hvf_cpu_synchronize_pre_loadvm(CPUState *); #endif /* HVF_CPUS_H */ From patchwork Wed May 19 20:22:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268501 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1180C433B4 for ; Wed, 19 May 2021 20:29:37 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 6A8606135C for ; Wed, 19 May 2021 20:29:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6A8606135C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=csgraf.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:41862 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSoy-0001iE-IE for qemu-devel@archiver.kernel.org; Wed, 19 May 2021 16:29:36 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:53410) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ljSj2-0004oZ-MV; Wed, 19 May 2021 16:23:30 -0400 Received: from mail.csgraf.de ([85.25.223.15]:48276 helo=zulu616.server4you.de) by eggs.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSim-0003JO-L6; Wed, 19 May 2021 16:23:28 -0400 Received: from 
localhost.localdomain (dynamic-095-114-039-201.95.114.pool.telefonica.de [95.114.39.201]) by csgraf.de (Postfix) with ESMTPSA id 634D660806A6; Wed, 19 May 2021 22:23:01 +0200 (CEST) From: Alexander Graf To: QEMU Developers Subject: [PATCH v8 10/19] hvf: Remove hvf-accel-ops.h Date: Wed, 19 May 2021 22:22:44 +0200 Message-Id: <20210519202253.76782-11-agraf@csgraf.de> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de> References: <20210519202253.76782-1-agraf@csgraf.de> MIME-Version: 1.0 Received-SPF: pass client-ip=85.25.223.15; envelope-from=agraf@csgraf.de; helo=zulu616.server4you.de X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Eduardo Habkost , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Richard Henderson , Cameron Esfahani , Roman Bolshakov , qemu-arm , Frank Yang , Paolo Bonzini , Peter Collingbourne Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" We can move the definition of hvf_vcpu_exec() into our internal hvf header, obsoleting the need for hvf-accel-ops.h. Signed-off-by: Alexander Graf Reviewed-by: Sergio Lopez --- accel/hvf/hvf-accel-ops.c | 2 -- accel/hvf/hvf-accel-ops.h | 17 ----------------- include/sysemu/hvf_int.h | 1 + target/i386/hvf/hvf.c | 2 -- 4 files changed, 1 insertion(+), 21 deletions(-) delete mode 100644 accel/hvf/hvf-accel-ops.h diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c index 69741ce708..14fc49791e 100644 --- a/accel/hvf/hvf-accel-ops.c +++ b/accel/hvf/hvf-accel-ops.c @@ -58,8 +58,6 @@ #include "sysemu/runstate.h" #include "qemu/guest-random.h" -#include "hvf-accel-ops.h" - HVFState *hvf_state; /* Memory slots */ diff --git a/accel/hvf/hvf-accel-ops.h b/accel/hvf/hvf-accel-ops.h deleted file mode 100644 index 018a4e22f6..0000000000 --- a/accel/hvf/hvf-accel-ops.h +++ /dev/null @@ -1,17 +0,0 @@ -/* - * Accelerator CPUS Interface - * - * Copyright 2020 SUSE LLC - * - * This work is licensed under the terms of the GNU GPL, version 2 or later. - * See the COPYING file in the top-level directory. 
- */ - -#ifndef HVF_CPUS_H -#define HVF_CPUS_H - -#include "sysemu/cpus.h" - -int hvf_vcpu_exec(CPUState *); - -#endif /* HVF_CPUS_H */ diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h index 80c1a8f946..fd1dcaf26e 100644 --- a/include/sysemu/hvf_int.h +++ b/include/sysemu/hvf_int.h @@ -46,6 +46,7 @@ extern HVFState *hvf_state; void assert_hvf_ok(hv_return_t ret); int hvf_arch_init_vcpu(CPUState *cpu); void hvf_arch_vcpu_destroy(CPUState *cpu); +int hvf_vcpu_exec(CPUState *); hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t); int hvf_put_registers(CPUState *); int hvf_get_registers(CPUState *); diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c index c7132ee370..02f7be6cfd 100644 --- a/target/i386/hvf/hvf.c +++ b/target/i386/hvf/hvf.c @@ -73,8 +73,6 @@ #include "qemu/accel.h" #include "target/i386/cpu.h" -#include "hvf-accel-ops.h" - void vmx_update_tpr(CPUState *cpu) { /* TODO: need integrate APIC handling */ From patchwork Wed May 19 20:22:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5906EC433ED for ; Wed, 19 May 2021 20:32:39 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id C18F1611AE for ; Wed, 19 May 2021 20:32:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C18F1611AE Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=csgraf.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:50736 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSrt-0007od-TF for qemu-devel@archiver.kernel.org; Wed, 19 May 2021 16:32:37 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:53502) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ljSjI-0005Nf-T3; Wed, 19 May 2021 16:23:44 -0400 Received: from mail.csgraf.de ([85.25.223.15]:48280 helo=zulu616.server4you.de) by eggs.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSjE-0003Of-9s; Wed, 19 May 2021 16:23:44 -0400 Received: from localhost.localdomain (dynamic-095-114-039-201.95.114.pool.telefonica.de [95.114.39.201]) by csgraf.de (Postfix) with ESMTPSA id 0E40960806A7; Wed, 19 May 2021 22:23:02 +0200 (CEST) From: Alexander Graf To: QEMU Developers Subject: [PATCH v8 11/19] hvf: Introduce hvf vcpu struct Date: Wed, 19 May 2021 22:22:45 +0200 Message-Id: <20210519202253.76782-12-agraf@csgraf.de> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de> References: <20210519202253.76782-1-agraf@csgraf.de> MIME-Version: 1.0 Received-SPF: pass client-ip=85.25.223.15; envelope-from=agraf@csgraf.de; helo=zulu616.server4you.de X-Spam_score_int: -18 X-Spam_score: -1.9 
X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Eduardo Habkost , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Richard Henderson , Cameron Esfahani , =?utf-8?q?Alex_Benn=C3=A9e?= , Roman Bolshakov , qemu-arm , Frank Yang , Paolo Bonzini , Peter Collingbourne Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" We will need more than a single field for hvf going forward. To keep the global vcpu struct uncluttered, let's allocate a special hvf vcpu struct, similar to how hax does it. Signed-off-by: Alexander Graf Reviewed-by: Roman Bolshakov Tested-by: Roman Bolshakov Reviewed-by: Alex Bennée Reviewed-by: Sergio Lopez --- v4 -> v5: - Use g_free() on destroy --- accel/hvf/hvf-accel-ops.c | 8 +- include/hw/core/cpu.h | 3 +- include/sysemu/hvf_int.h | 4 + target/i386/hvf/hvf.c | 104 +++++++++--------- target/i386/hvf/vmx.h | 24 +++-- target/i386/hvf/x86.c | 28 ++--- target/i386/hvf/x86_descr.c | 26 ++--- target/i386/hvf/x86_emu.c | 62 +++++------ target/i386/hvf/x86_mmu.c | 4 +- target/i386/hvf/x86_task.c | 12 +-- target/i386/hvf/x86hvf.c | 210 ++++++++++++++++++------------------ 11 files changed, 248 insertions(+), 237 deletions(-) diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c index 14fc49791e..ded918c443 100644 --- a/accel/hvf/hvf-accel-ops.c +++ b/accel/hvf/hvf-accel-ops.c @@ -363,16 +363,20 @@ type_init(hvf_type_init); static void hvf_vcpu_destroy(CPUState *cpu) { - hv_return_t ret = hv_vcpu_destroy(cpu->hvf_fd); + hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd); assert_hvf_ok(ret); hvf_arch_vcpu_destroy(cpu); + g_free(cpu->hvf); + cpu->hvf = NULL; } static int hvf_init_vcpu(CPUState *cpu) { int r; + cpu->hvf = g_malloc0(sizeof(*cpu->hvf)); + /* init cpu signals */ sigset_t set; struct sigaction sigact; @@ -384,7 +388,7 @@ static int hvf_init_vcpu(CPUState *cpu) pthread_sigmask(SIG_BLOCK, NULL, &set); sigdelset(&set, SIG_IPI); - r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf_fd, HV_VCPU_DEFAULT); + r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT); cpu->vcpu_dirty = 1; assert_hvf_ok(r); diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h index d45f78290e..47d82a5aa9 100644 --- a/include/hw/core/cpu.h +++ b/include/hw/core/cpu.h @@ -251,6 +251,7 @@ struct KVMState; struct kvm_run; struct hax_vcpu_state; +struct hvf_vcpu_state; #define TB_JMP_CACHE_BITS 12 #define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS) @@ -436,7 +437,7 @@ struct CPUState { struct hax_vcpu_state *hax_vcpu; - int hvf_fd; + struct hvf_vcpu_state *hvf; /* track IOMMUs whose translations we've cached in the TCG TLB */ GArray *iommu_notifiers; diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h index fd1dcaf26e..8b66a4e7d0 100644 --- a/include/sysemu/hvf_int.h +++ b/include/sysemu/hvf_int.h @@ -43,6 +43,10 @@ struct HVFState { }; extern HVFState *hvf_state; +struct hvf_vcpu_state { + int fd; +}; + void assert_hvf_ok(hv_return_t ret); int hvf_arch_init_vcpu(CPUState *cpu); void hvf_arch_vcpu_destroy(CPUState *cpu); diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c index 02f7be6cfd..346dbcc26f 100644 --- a/target/i386/hvf/hvf.c +++ b/target/i386/hvf/hvf.c @@ -80,11 +80,11 @@ void vmx_update_tpr(CPUState *cpu) int tpr = 
cpu_get_apic_tpr(x86_cpu->apic_state) << 4; int irr = apic_get_highest_priority_irr(x86_cpu->apic_state); - wreg(cpu->hvf_fd, HV_X86_TPR, tpr); + wreg(cpu->hvf->fd, HV_X86_TPR, tpr); if (irr == -1) { - wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0); + wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0); } else { - wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 : + wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, (irr > tpr) ? tpr >> 4 : irr >> 4); } } @@ -92,7 +92,7 @@ void vmx_update_tpr(CPUState *cpu) static void update_apic_tpr(CPUState *cpu) { X86CPU *x86_cpu = X86_CPU(cpu); - int tpr = rreg(cpu->hvf_fd, HV_X86_TPR) >> 4; + int tpr = rreg(cpu->hvf->fd, HV_X86_TPR) >> 4; cpu_set_apic_tpr(x86_cpu->apic_state, tpr); } @@ -244,43 +244,43 @@ int hvf_arch_init_vcpu(CPUState *cpu) } /* set VMCS control fields */ - wvmcs(cpu->hvf_fd, VMCS_PIN_BASED_CTLS, + wvmcs(cpu->hvf->fd, VMCS_PIN_BASED_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_pinbased, VMCS_PIN_BASED_CTLS_EXTINT | VMCS_PIN_BASED_CTLS_NMI | VMCS_PIN_BASED_CTLS_VNMI)); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, + wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased, VMCS_PRI_PROC_BASED_CTLS_HLT | VMCS_PRI_PROC_BASED_CTLS_MWAIT | VMCS_PRI_PROC_BASED_CTLS_TSC_OFFSET | VMCS_PRI_PROC_BASED_CTLS_TPR_SHADOW) | VMCS_PRI_PROC_BASED_CTLS_SEC_CONTROL); - wvmcs(cpu->hvf_fd, VMCS_SEC_PROC_BASED_CTLS, + wvmcs(cpu->hvf->fd, VMCS_SEC_PROC_BASED_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_procbased2, VMCS_PRI_PROC_BASED2_CTLS_APIC_ACCESSES)); - wvmcs(cpu->hvf_fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry, + wvmcs(cpu->hvf->fd, VMCS_ENTRY_CTLS, cap2ctrl(hvf_state->hvf_caps->vmx_cap_entry, 0)); - wvmcs(cpu->hvf_fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */ + wvmcs(cpu->hvf->fd, VMCS_EXCEPTION_BITMAP, 0); /* Double fault */ - wvmcs(cpu->hvf_fd, VMCS_TPR_THRESHOLD, 0); + wvmcs(cpu->hvf->fd, VMCS_TPR_THRESHOLD, 0); x86cpu = X86_CPU(cpu); x86cpu->env.xsave_buf = qemu_memalign(4096, 4096); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_STAR, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_LSTAR, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_CSTAR, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FMASK, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_FSBASE, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_GSBASE, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_KERNELGSBASE, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_TSC_AUX, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_TSC, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_CS, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_EIP, 1); - hv_vcpu_enable_native_msr(cpu->hvf_fd, MSR_IA32_SYSENTER_ESP, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_STAR, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_LSTAR, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_CSTAR, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FMASK, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_FSBASE, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_GSBASE, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_KERNELGSBASE, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_TSC_AUX, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_TSC, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_CS, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_EIP, 1); + hv_vcpu_enable_native_msr(cpu->hvf->fd, MSR_IA32_SYSENTER_ESP, 1); return 0; } @@ -321,16 +321,16 @@ static void hvf_store_events(CPUState *cpu, uint32_t 
ins_len, uint64_t idtvec_in } if (idtvec_info & VMCS_IDT_VEC_ERRCODE_VALID) { env->has_error_code = true; - env->error_code = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_ERROR); + env->error_code = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_ERROR); } } - if ((rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) & + if ((rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) & VMCS_INTERRUPTIBILITY_NMI_BLOCKING)) { env->hflags2 |= HF2_NMI_MASK; } else { env->hflags2 &= ~HF2_NMI_MASK; } - if (rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY) & + if (rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY) & (VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) { env->hflags |= HF_INHIBIT_IRQ_MASK; @@ -409,20 +409,20 @@ int hvf_vcpu_exec(CPUState *cpu) return EXCP_HLT; } - hv_return_t r = hv_vcpu_run(cpu->hvf_fd); + hv_return_t r = hv_vcpu_run(cpu->hvf->fd); assert_hvf_ok(r); /* handle VMEXIT */ - uint64_t exit_reason = rvmcs(cpu->hvf_fd, VMCS_EXIT_REASON); - uint64_t exit_qual = rvmcs(cpu->hvf_fd, VMCS_EXIT_QUALIFICATION); - uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf_fd, + uint64_t exit_reason = rvmcs(cpu->hvf->fd, VMCS_EXIT_REASON); + uint64_t exit_qual = rvmcs(cpu->hvf->fd, VMCS_EXIT_QUALIFICATION); + uint32_t ins_len = (uint32_t)rvmcs(cpu->hvf->fd, VMCS_EXIT_INSTRUCTION_LENGTH); - uint64_t idtvec_info = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO); + uint64_t idtvec_info = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO); hvf_store_events(cpu, ins_len, idtvec_info); - rip = rreg(cpu->hvf_fd, HV_X86_RIP); - env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS); + rip = rreg(cpu->hvf->fd, HV_X86_RIP); + env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS); qemu_mutex_lock_iothread(); @@ -452,7 +452,7 @@ int hvf_vcpu_exec(CPUState *cpu) case EXIT_REASON_EPT_FAULT: { hvf_slot *slot; - uint64_t gpa = rvmcs(cpu->hvf_fd, VMCS_GUEST_PHYSICAL_ADDRESS); + uint64_t gpa = rvmcs(cpu->hvf->fd, VMCS_GUEST_PHYSICAL_ADDRESS); if (((idtvec_info & VMCS_IDT_VEC_VALID) == 0) && ((exit_qual & EXIT_QUAL_NMIUDTI) != 0)) { @@ -497,7 +497,7 @@ int hvf_vcpu_exec(CPUState *cpu) store_regs(cpu); break; } else if (!string && !in) { - RAX(env) = rreg(cpu->hvf_fd, HV_X86_RAX); + RAX(env) = rreg(cpu->hvf->fd, HV_X86_RAX); hvf_handle_io(env, port, &RAX(env), 1, size, 1); macvm_set_rip(cpu, rip + ins_len); break; @@ -513,21 +513,21 @@ int hvf_vcpu_exec(CPUState *cpu) break; } case EXIT_REASON_CPUID: { - uint32_t rax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX); - uint32_t rbx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RBX); - uint32_t rcx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX); - uint32_t rdx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX); + uint32_t rax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX); + uint32_t rbx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RBX); + uint32_t rcx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX); + uint32_t rdx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX); if (rax == 1) { /* CPUID1.ecx.OSXSAVE needs to know CR4 */ - env->cr[4] = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4); + env->cr[4] = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4); } hvf_cpu_x86_cpuid(env, rax, rcx, &rax, &rbx, &rcx, &rdx); - wreg(cpu->hvf_fd, HV_X86_RAX, rax); - wreg(cpu->hvf_fd, HV_X86_RBX, rbx); - wreg(cpu->hvf_fd, HV_X86_RCX, rcx); - wreg(cpu->hvf_fd, HV_X86_RDX, rdx); + wreg(cpu->hvf->fd, HV_X86_RAX, rax); + wreg(cpu->hvf->fd, HV_X86_RBX, rbx); + wreg(cpu->hvf->fd, HV_X86_RCX, rcx); + wreg(cpu->hvf->fd, HV_X86_RDX, rdx); macvm_set_rip(cpu, rip + ins_len); break; @@ -535,16 +535,16 @@ int hvf_vcpu_exec(CPUState *cpu) case EXIT_REASON_XSETBV: { X86CPU *x86_cpu = X86_CPU(cpu); CPUX86State 
*env = &x86_cpu->env; - uint32_t eax = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RAX); - uint32_t ecx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RCX); - uint32_t edx = (uint32_t)rreg(cpu->hvf_fd, HV_X86_RDX); + uint32_t eax = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RAX); + uint32_t ecx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RCX); + uint32_t edx = (uint32_t)rreg(cpu->hvf->fd, HV_X86_RDX); if (ecx) { macvm_set_rip(cpu, rip + ins_len); break; } env->xcr0 = ((uint64_t)edx << 32) | eax; - wreg(cpu->hvf_fd, HV_X86_XCR0, env->xcr0 | 1); + wreg(cpu->hvf->fd, HV_X86_XCR0, env->xcr0 | 1); macvm_set_rip(cpu, rip + ins_len); break; } @@ -583,11 +583,11 @@ int hvf_vcpu_exec(CPUState *cpu) switch (cr) { case 0x0: { - macvm_set_cr0(cpu->hvf_fd, RRX(env, reg)); + macvm_set_cr0(cpu->hvf->fd, RRX(env, reg)); break; } case 4: { - macvm_set_cr4(cpu->hvf_fd, RRX(env, reg)); + macvm_set_cr4(cpu->hvf->fd, RRX(env, reg)); break; } case 8: { @@ -623,7 +623,7 @@ int hvf_vcpu_exec(CPUState *cpu) break; } case EXIT_REASON_TASK_SWITCH: { - uint64_t vinfo = rvmcs(cpu->hvf_fd, VMCS_IDT_VECTORING_INFO); + uint64_t vinfo = rvmcs(cpu->hvf->fd, VMCS_IDT_VECTORING_INFO); x68_segment_selector sel = {.sel = exit_qual & 0xffff}; vmx_handle_task_switch(cpu, sel, (exit_qual >> 30) & 0x3, vinfo & VMCS_INTR_VALID, vinfo & VECTORING_INFO_VECTOR_MASK, vinfo @@ -636,8 +636,8 @@ int hvf_vcpu_exec(CPUState *cpu) break; } case EXIT_REASON_RDPMC: - wreg(cpu->hvf_fd, HV_X86_RAX, 0); - wreg(cpu->hvf_fd, HV_X86_RDX, 0); + wreg(cpu->hvf->fd, HV_X86_RAX, 0); + wreg(cpu->hvf->fd, HV_X86_RDX, 0); macvm_set_rip(cpu, rip + ins_len); break; case VMX_REASON_VMCALL: diff --git a/target/i386/hvf/vmx.h b/target/i386/hvf/vmx.h index 24c4cdf0be..6df87116f6 100644 --- a/target/i386/hvf/vmx.h +++ b/target/i386/hvf/vmx.h @@ -30,6 +30,8 @@ #include "vmcs.h" #include "cpu.h" #include "x86.h" +#include "sysemu/hvf.h" +#include "sysemu/hvf_int.h" #include "exec/address-spaces.h" @@ -179,15 +181,15 @@ static inline void macvm_set_rip(CPUState *cpu, uint64_t rip) uint64_t val; /* BUG, should take considering overlap.. 
*/ - wreg(cpu->hvf_fd, HV_X86_RIP, rip); + wreg(cpu->hvf->fd, HV_X86_RIP, rip); env->eip = rip; /* after moving forward in rip, we need to clean INTERRUPTABILITY */ - val = rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY); + val = rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY); if (val & (VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)) { env->hflags &= ~HF_INHIBIT_IRQ_MASK; - wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, + wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, val & ~(VMCS_INTERRUPTIBILITY_STI_BLOCKING | VMCS_INTERRUPTIBILITY_MOVSS_BLOCKING)); } @@ -199,9 +201,9 @@ static inline void vmx_clear_nmi_blocking(CPUState *cpu) CPUX86State *env = &x86_cpu->env; env->hflags2 &= ~HF2_NMI_MASK; - uint32_t gi = (uint32_t) rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY); + uint32_t gi = (uint32_t) rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY); gi &= ~VMCS_INTERRUPTIBILITY_NMI_BLOCKING; - wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi); + wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi); } static inline void vmx_set_nmi_blocking(CPUState *cpu) @@ -210,16 +212,16 @@ static inline void vmx_set_nmi_blocking(CPUState *cpu) CPUX86State *env = &x86_cpu->env; env->hflags2 |= HF2_NMI_MASK; - uint32_t gi = (uint32_t)rvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY); + uint32_t gi = (uint32_t)rvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY); gi |= VMCS_INTERRUPTIBILITY_NMI_BLOCKING; - wvmcs(cpu->hvf_fd, VMCS_GUEST_INTERRUPTIBILITY, gi); + wvmcs(cpu->hvf->fd, VMCS_GUEST_INTERRUPTIBILITY, gi); } static inline void vmx_set_nmi_window_exiting(CPUState *cpu) { uint64_t val; - val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val | + val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS); + wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val | VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING); } @@ -228,8 +230,8 @@ static inline void vmx_clear_nmi_window_exiting(CPUState *cpu) { uint64_t val; - val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val & + val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS); + wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val & ~VMCS_PRI_PROC_BASED_CTLS_NMI_WINDOW_EXITING); } diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c index cd045183a8..2898bb70a8 100644 --- a/target/i386/hvf/x86.c +++ b/target/i386/hvf/x86.c @@ -62,11 +62,11 @@ bool x86_read_segment_descriptor(struct CPUState *cpu, } if (GDT_SEL == sel.ti) { - base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE); - limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT); + base = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE); + limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT); } else { - base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE); - limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT); + base = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE); + limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT); } if (sel.index * 8 >= limit) { @@ -85,11 +85,11 @@ bool x86_write_segment_descriptor(struct CPUState *cpu, uint32_t limit; if (GDT_SEL == sel.ti) { - base = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE); - limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT); + base = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_BASE); + limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_GDTR_LIMIT); } else { - base = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE); - limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT); + base = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_BASE); + limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_LDTR_LIMIT); } if (sel.index * 8 >= limit) { @@ -103,8 
+103,8 @@ bool x86_write_segment_descriptor(struct CPUState *cpu, bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc, int gate) { - target_ulong base = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_BASE); - uint32_t limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_IDTR_LIMIT); + target_ulong base = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_BASE); + uint32_t limit = rvmcs(cpu->hvf->fd, VMCS_GUEST_IDTR_LIMIT); memset(idt_desc, 0, sizeof(*idt_desc)); if (gate * 8 >= limit) { @@ -118,7 +118,7 @@ bool x86_read_call_gate(struct CPUState *cpu, struct x86_call_gate *idt_desc, bool x86_is_protected(struct CPUState *cpu) { - uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0); + uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0); return cr0 & CR0_PE; } @@ -136,7 +136,7 @@ bool x86_is_v8086(struct CPUState *cpu) bool x86_is_long_mode(struct CPUState *cpu) { - return rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA; + return rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER) & MSR_EFER_LMA; } bool x86_is_long64_mode(struct CPUState *cpu) @@ -149,13 +149,13 @@ bool x86_is_long64_mode(struct CPUState *cpu) bool x86_is_paging_mode(struct CPUState *cpu) { - uint64_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0); + uint64_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0); return cr0 & CR0_PG; } bool x86_is_pae_enabled(struct CPUState *cpu) { - uint64_t cr4 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR4); + uint64_t cr4 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR4); return cr4 & CR4_PAE; } diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c index 9f539e73f6..af15c06ac5 100644 --- a/target/i386/hvf/x86_descr.c +++ b/target/i386/hvf/x86_descr.c @@ -48,47 +48,47 @@ static const struct vmx_segment_field { uint32_t vmx_read_segment_limit(CPUState *cpu, X86Seg seg) { - return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit); + return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit); } uint32_t vmx_read_segment_ar(CPUState *cpu, X86Seg seg) { - return (uint32_t)rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes); + return (uint32_t)rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes); } uint64_t vmx_read_segment_base(CPUState *cpu, X86Seg seg) { - return rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base); + return rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base); } x68_segment_selector vmx_read_segment_selector(CPUState *cpu, X86Seg seg) { x68_segment_selector sel; - sel.sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector); + sel.sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector); return sel; } void vmx_write_segment_selector(struct CPUState *cpu, x68_segment_selector selector, X86Seg seg) { - wvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector, selector.sel); + wvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector, selector.sel); } void vmx_read_segment_descriptor(struct CPUState *cpu, struct vmx_segment *desc, X86Seg seg) { - desc->sel = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].selector); - desc->base = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].base); - desc->limit = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].limit); - desc->ar = rvmcs(cpu->hvf_fd, vmx_segment_fields[seg].ar_bytes); + desc->sel = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].selector); + desc->base = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].base); + desc->limit = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].limit); + desc->ar = rvmcs(cpu->hvf->fd, vmx_segment_fields[seg].ar_bytes); } void vmx_write_segment_descriptor(CPUState *cpu, struct vmx_segment *desc, X86Seg seg) { const struct vmx_segment_field *sf = 
&vmx_segment_fields[seg]; - wvmcs(cpu->hvf_fd, sf->base, desc->base); - wvmcs(cpu->hvf_fd, sf->limit, desc->limit); - wvmcs(cpu->hvf_fd, sf->selector, desc->sel); - wvmcs(cpu->hvf_fd, sf->ar_bytes, desc->ar); + wvmcs(cpu->hvf->fd, sf->base, desc->base); + wvmcs(cpu->hvf->fd, sf->limit, desc->limit); + wvmcs(cpu->hvf->fd, sf->selector, desc->sel); + wvmcs(cpu->hvf->fd, sf->ar_bytes, desc->ar); } void x86_segment_descriptor_to_vmx(struct CPUState *cpu, x68_segment_selector selector, struct x86_segment_descriptor *desc, struct vmx_segment *vmx_desc) diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c index e52c39ddb1..7c8203b21f 100644 --- a/target/i386/hvf/x86_emu.c +++ b/target/i386/hvf/x86_emu.c @@ -674,7 +674,7 @@ void simulate_rdmsr(struct CPUState *cpu) switch (msr) { case MSR_IA32_TSC: - val = rdtscp() + rvmcs(cpu->hvf_fd, VMCS_TSC_OFFSET); + val = rdtscp() + rvmcs(cpu->hvf->fd, VMCS_TSC_OFFSET); break; case MSR_IA32_APICBASE: val = cpu_get_apic_base(X86_CPU(cpu)->apic_state); @@ -683,16 +683,16 @@ void simulate_rdmsr(struct CPUState *cpu) val = x86_cpu->ucode_rev; break; case MSR_EFER: - val = rvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER); + val = rvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER); break; case MSR_FSBASE: - val = rvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE); + val = rvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE); break; case MSR_GSBASE: - val = rvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE); + val = rvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE); break; case MSR_KERNELGSBASE: - val = rvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE); + val = rvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE); break; case MSR_STAR: abort(); @@ -780,13 +780,13 @@ void simulate_wrmsr(struct CPUState *cpu) cpu_set_apic_base(X86_CPU(cpu)->apic_state, data); break; case MSR_FSBASE: - wvmcs(cpu->hvf_fd, VMCS_GUEST_FS_BASE, data); + wvmcs(cpu->hvf->fd, VMCS_GUEST_FS_BASE, data); break; case MSR_GSBASE: - wvmcs(cpu->hvf_fd, VMCS_GUEST_GS_BASE, data); + wvmcs(cpu->hvf->fd, VMCS_GUEST_GS_BASE, data); break; case MSR_KERNELGSBASE: - wvmcs(cpu->hvf_fd, VMCS_HOST_FS_BASE, data); + wvmcs(cpu->hvf->fd, VMCS_HOST_FS_BASE, data); break; case MSR_STAR: abort(); @@ -799,9 +799,9 @@ void simulate_wrmsr(struct CPUState *cpu) break; case MSR_EFER: /*printf("new efer %llx\n", EFER(cpu));*/ - wvmcs(cpu->hvf_fd, VMCS_GUEST_IA32_EFER, data); + wvmcs(cpu->hvf->fd, VMCS_GUEST_IA32_EFER, data); if (data & MSR_EFER_NXE) { - hv_vcpu_invalidate_tlb(cpu->hvf_fd); + hv_vcpu_invalidate_tlb(cpu->hvf->fd); } break; case MSR_MTRRphysBase(0): @@ -1425,21 +1425,21 @@ void load_regs(struct CPUState *cpu) CPUX86State *env = &x86_cpu->env; int i = 0; - RRX(env, R_EAX) = rreg(cpu->hvf_fd, HV_X86_RAX); - RRX(env, R_EBX) = rreg(cpu->hvf_fd, HV_X86_RBX); - RRX(env, R_ECX) = rreg(cpu->hvf_fd, HV_X86_RCX); - RRX(env, R_EDX) = rreg(cpu->hvf_fd, HV_X86_RDX); - RRX(env, R_ESI) = rreg(cpu->hvf_fd, HV_X86_RSI); - RRX(env, R_EDI) = rreg(cpu->hvf_fd, HV_X86_RDI); - RRX(env, R_ESP) = rreg(cpu->hvf_fd, HV_X86_RSP); - RRX(env, R_EBP) = rreg(cpu->hvf_fd, HV_X86_RBP); + RRX(env, R_EAX) = rreg(cpu->hvf->fd, HV_X86_RAX); + RRX(env, R_EBX) = rreg(cpu->hvf->fd, HV_X86_RBX); + RRX(env, R_ECX) = rreg(cpu->hvf->fd, HV_X86_RCX); + RRX(env, R_EDX) = rreg(cpu->hvf->fd, HV_X86_RDX); + RRX(env, R_ESI) = rreg(cpu->hvf->fd, HV_X86_RSI); + RRX(env, R_EDI) = rreg(cpu->hvf->fd, HV_X86_RDI); + RRX(env, R_ESP) = rreg(cpu->hvf->fd, HV_X86_RSP); + RRX(env, R_EBP) = rreg(cpu->hvf->fd, HV_X86_RBP); for (i = 8; i < 16; i++) { - RRX(env, i) = rreg(cpu->hvf_fd, HV_X86_RAX + i); + RRX(env, i) = rreg(cpu->hvf->fd, 
HV_X86_RAX + i); } - env->eflags = rreg(cpu->hvf_fd, HV_X86_RFLAGS); + env->eflags = rreg(cpu->hvf->fd, HV_X86_RFLAGS); rflags_to_lflags(env); - env->eip = rreg(cpu->hvf_fd, HV_X86_RIP); + env->eip = rreg(cpu->hvf->fd, HV_X86_RIP); } void store_regs(struct CPUState *cpu) @@ -1448,20 +1448,20 @@ void store_regs(struct CPUState *cpu) CPUX86State *env = &x86_cpu->env; int i = 0; - wreg(cpu->hvf_fd, HV_X86_RAX, RAX(env)); - wreg(cpu->hvf_fd, HV_X86_RBX, RBX(env)); - wreg(cpu->hvf_fd, HV_X86_RCX, RCX(env)); - wreg(cpu->hvf_fd, HV_X86_RDX, RDX(env)); - wreg(cpu->hvf_fd, HV_X86_RSI, RSI(env)); - wreg(cpu->hvf_fd, HV_X86_RDI, RDI(env)); - wreg(cpu->hvf_fd, HV_X86_RBP, RBP(env)); - wreg(cpu->hvf_fd, HV_X86_RSP, RSP(env)); + wreg(cpu->hvf->fd, HV_X86_RAX, RAX(env)); + wreg(cpu->hvf->fd, HV_X86_RBX, RBX(env)); + wreg(cpu->hvf->fd, HV_X86_RCX, RCX(env)); + wreg(cpu->hvf->fd, HV_X86_RDX, RDX(env)); + wreg(cpu->hvf->fd, HV_X86_RSI, RSI(env)); + wreg(cpu->hvf->fd, HV_X86_RDI, RDI(env)); + wreg(cpu->hvf->fd, HV_X86_RBP, RBP(env)); + wreg(cpu->hvf->fd, HV_X86_RSP, RSP(env)); for (i = 8; i < 16; i++) { - wreg(cpu->hvf_fd, HV_X86_RAX + i, RRX(env, i)); + wreg(cpu->hvf->fd, HV_X86_RAX + i, RRX(env, i)); } lflags_to_rflags(env); - wreg(cpu->hvf_fd, HV_X86_RFLAGS, env->eflags); + wreg(cpu->hvf->fd, HV_X86_RFLAGS, env->eflags); macvm_set_rip(cpu, env->eip); } diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c index 78fff04684..e9ed0f5aa1 100644 --- a/target/i386/hvf/x86_mmu.c +++ b/target/i386/hvf/x86_mmu.c @@ -127,7 +127,7 @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt, pt->err_code |= MMU_PAGE_PT; } - uint32_t cr0 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0); + uint32_t cr0 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0); /* check protection */ if (cr0 & CR0_WP) { if (pt->write_access && !pte_write_access(pte)) { @@ -172,7 +172,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code, { int top_level, level; bool is_large = false; - target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3); + target_ulong cr3 = rvmcs(cpu->hvf->fd, VMCS_GUEST_CR3); uint64_t page_mask = pae ? 
PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK; memset(pt, 0, sizeof(*pt)); diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c index d66dfd7669..422156128b 100644 --- a/target/i386/hvf/x86_task.c +++ b/target/i386/hvf/x86_task.c @@ -62,7 +62,7 @@ static void load_state_from_tss32(CPUState *cpu, struct x86_tss_segment32 *tss) X86CPU *x86_cpu = X86_CPU(cpu); CPUX86State *env = &x86_cpu->env; - wvmcs(cpu->hvf_fd, VMCS_GUEST_CR3, tss->cr3); + wvmcs(cpu->hvf->fd, VMCS_GUEST_CR3, tss->cr3); env->eip = tss->eip; env->eflags = tss->eflags | 2; @@ -111,11 +111,11 @@ static int task_switch_32(CPUState *cpu, x68_segment_selector tss_sel, x68_segme void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int reason, bool gate_valid, uint8_t gate, uint64_t gate_type) { - uint64_t rip = rreg(cpu->hvf_fd, HV_X86_RIP); + uint64_t rip = rreg(cpu->hvf->fd, HV_X86_RIP); if (!gate_valid || (gate_type != VMCS_INTR_T_HWEXCEPTION && gate_type != VMCS_INTR_T_HWINTR && gate_type != VMCS_INTR_T_NMI)) { - int ins_len = rvmcs(cpu->hvf_fd, VMCS_EXIT_INSTRUCTION_LENGTH); + int ins_len = rvmcs(cpu->hvf->fd, VMCS_EXIT_INSTRUCTION_LENGTH); macvm_set_rip(cpu, rip + ins_len); return; } @@ -174,12 +174,12 @@ void vmx_handle_task_switch(CPUState *cpu, x68_segment_selector tss_sel, int rea //ret = task_switch_16(cpu, tss_sel, old_tss_sel, old_tss_base, &next_tss_desc); VM_PANIC("task_switch_16"); - macvm_set_cr0(cpu->hvf_fd, rvmcs(cpu->hvf_fd, VMCS_GUEST_CR0) | CR0_TS); + macvm_set_cr0(cpu->hvf->fd, rvmcs(cpu->hvf->fd, VMCS_GUEST_CR0) | CR0_TS); x86_segment_descriptor_to_vmx(cpu, tss_sel, &next_tss_desc, &vmx_seg); vmx_write_segment_descriptor(cpu, &vmx_seg, R_TR); store_regs(cpu); - hv_vcpu_invalidate_tlb(cpu->hvf_fd); - hv_vcpu_flush(cpu->hvf_fd); + hv_vcpu_invalidate_tlb(cpu->hvf->fd); + hv_vcpu_flush(cpu->hvf->fd); } diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c index cc381307ab..28cfee4f60 100644 --- a/target/i386/hvf/x86hvf.c +++ b/target/i386/hvf/x86hvf.c @@ -80,7 +80,7 @@ void hvf_put_xsave(CPUState *cpu_state) x86_cpu_xsave_all_areas(X86_CPU(cpu_state), xsave); - if (hv_vcpu_write_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) { + if (hv_vcpu_write_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) { abort(); } } @@ -90,19 +90,19 @@ void hvf_put_segments(CPUState *cpu_state) CPUX86State *env = &X86_CPU(cpu_state)->env; struct vmx_segment seg; - wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit); - wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base); + wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit); + wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE, env->idt.base); - wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit); - wvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE, env->gdt.base); + wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT, env->gdt.limit); + wvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE, env->gdt.base); - /* wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR2, env->cr[2]); */ - wvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3, env->cr[3]); + /* wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR2, env->cr[2]); */ + wvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3, env->cr[3]); vmx_update_tpr(cpu_state); - wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER, env->efer); + wvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER, env->efer); - macvm_set_cr4(cpu_state->hvf_fd, env->cr[4]); - macvm_set_cr0(cpu_state->hvf_fd, env->cr[0]); + macvm_set_cr4(cpu_state->hvf->fd, env->cr[4]); + macvm_set_cr0(cpu_state->hvf->fd, env->cr[0]); hvf_set_segment(cpu_state, &seg, 
&env->segs[R_CS], false); vmx_write_segment_descriptor(cpu_state, &seg, R_CS); @@ -128,31 +128,31 @@ void hvf_put_segments(CPUState *cpu_state) hvf_set_segment(cpu_state, &seg, &env->ldt, false); vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR); - hv_vcpu_flush(cpu_state->hvf_fd); + hv_vcpu_flush(cpu_state->hvf->fd); } void hvf_put_msrs(CPUState *cpu_state) { CPUX86State *env = &X86_CPU(cpu_state)->env; - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS, env->sysenter_cs); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP, env->sysenter_esp); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP, env->sysenter_eip); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_STAR, env->star); + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_STAR, env->star); #ifdef TARGET_X86_64 - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_CSTAR, env->cstar); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, env->kernelgsbase); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FMASK, env->fmask); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_LSTAR, env->lstar); + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_CSTAR, env->cstar); + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, env->kernelgsbase); + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FMASK, env->fmask); + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_LSTAR, env->lstar); #endif - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_GSBASE, env->segs[R_GS].base); - hv_vcpu_write_msr(cpu_state->hvf_fd, MSR_FSBASE, env->segs[R_FS].base); + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_GSBASE, env->segs[R_GS].base); + hv_vcpu_write_msr(cpu_state->hvf->fd, MSR_FSBASE, env->segs[R_FS].base); } @@ -162,7 +162,7 @@ void hvf_get_xsave(CPUState *cpu_state) xsave = X86_CPU(cpu_state)->env.xsave_buf; - if (hv_vcpu_read_fpstate(cpu_state->hvf_fd, (void*)xsave, 4096)) { + if (hv_vcpu_read_fpstate(cpu_state->hvf->fd, (void*)xsave, 4096)) { abort(); } @@ -201,17 +201,17 @@ void hvf_get_segments(CPUState *cpu_state) vmx_read_segment_descriptor(cpu_state, &seg, R_LDTR); hvf_get_segment(&env->ldt, &seg); - env->idt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT); - env->idt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE); - env->gdt.limit = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_LIMIT); - env->gdt.base = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_GDTR_BASE); + env->idt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_LIMIT); + env->idt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IDTR_BASE); + env->gdt.limit = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_LIMIT); + env->gdt.base = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_GDTR_BASE); - env->cr[0] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR0); + env->cr[0] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR0); env->cr[2] = 0; - env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3); - env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4); + env->cr[3] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR3); + env->cr[4] = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_CR4); - env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER); + env->efer = rvmcs(cpu_state->hvf->fd, VMCS_GUEST_IA32_EFER); } void hvf_get_msrs(CPUState *cpu_state) @@ -219,27 +219,27 @@ void hvf_get_msrs(CPUState *cpu_state) CPUX86State *env = &X86_CPU(cpu_state)->env; uint64_t tmp; - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_CS, 
&tmp); env->sysenter_cs = tmp; - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_ESP, &tmp); env->sysenter_esp = tmp; - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_EIP, &tmp); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_SYSENTER_EIP, &tmp); env->sysenter_eip = tmp; - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_STAR, &env->star); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_STAR, &env->star); #ifdef TARGET_X86_64 - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_CSTAR, &env->cstar); - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_KERNELGSBASE, &env->kernelgsbase); - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_FMASK, &env->fmask); - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_LSTAR, &env->lstar); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_CSTAR, &env->cstar); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_KERNELGSBASE, &env->kernelgsbase); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_FMASK, &env->fmask); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_LSTAR, &env->lstar); #endif - hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp); + hv_vcpu_read_msr(cpu_state->hvf->fd, MSR_IA32_APICBASE, &tmp); - env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET); + env->tsc = rdtscp() + rvmcs(cpu_state->hvf->fd, VMCS_TSC_OFFSET); } int hvf_put_registers(CPUState *cpu_state) @@ -247,26 +247,26 @@ int hvf_put_registers(CPUState *cpu_state) X86CPU *x86cpu = X86_CPU(cpu_state); CPUX86State *env = &x86cpu->env; - wreg(cpu_state->hvf_fd, HV_X86_RAX, env->regs[R_EAX]); - wreg(cpu_state->hvf_fd, HV_X86_RBX, env->regs[R_EBX]); - wreg(cpu_state->hvf_fd, HV_X86_RCX, env->regs[R_ECX]); - wreg(cpu_state->hvf_fd, HV_X86_RDX, env->regs[R_EDX]); - wreg(cpu_state->hvf_fd, HV_X86_RBP, env->regs[R_EBP]); - wreg(cpu_state->hvf_fd, HV_X86_RSP, env->regs[R_ESP]); - wreg(cpu_state->hvf_fd, HV_X86_RSI, env->regs[R_ESI]); - wreg(cpu_state->hvf_fd, HV_X86_RDI, env->regs[R_EDI]); - wreg(cpu_state->hvf_fd, HV_X86_R8, env->regs[8]); - wreg(cpu_state->hvf_fd, HV_X86_R9, env->regs[9]); - wreg(cpu_state->hvf_fd, HV_X86_R10, env->regs[10]); - wreg(cpu_state->hvf_fd, HV_X86_R11, env->regs[11]); - wreg(cpu_state->hvf_fd, HV_X86_R12, env->regs[12]); - wreg(cpu_state->hvf_fd, HV_X86_R13, env->regs[13]); - wreg(cpu_state->hvf_fd, HV_X86_R14, env->regs[14]); - wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]); - wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags); - wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip); + wreg(cpu_state->hvf->fd, HV_X86_RAX, env->regs[R_EAX]); + wreg(cpu_state->hvf->fd, HV_X86_RBX, env->regs[R_EBX]); + wreg(cpu_state->hvf->fd, HV_X86_RCX, env->regs[R_ECX]); + wreg(cpu_state->hvf->fd, HV_X86_RDX, env->regs[R_EDX]); + wreg(cpu_state->hvf->fd, HV_X86_RBP, env->regs[R_EBP]); + wreg(cpu_state->hvf->fd, HV_X86_RSP, env->regs[R_ESP]); + wreg(cpu_state->hvf->fd, HV_X86_RSI, env->regs[R_ESI]); + wreg(cpu_state->hvf->fd, HV_X86_RDI, env->regs[R_EDI]); + wreg(cpu_state->hvf->fd, HV_X86_R8, env->regs[8]); + wreg(cpu_state->hvf->fd, HV_X86_R9, env->regs[9]); + wreg(cpu_state->hvf->fd, HV_X86_R10, env->regs[10]); + wreg(cpu_state->hvf->fd, HV_X86_R11, env->regs[11]); + wreg(cpu_state->hvf->fd, HV_X86_R12, env->regs[12]); + wreg(cpu_state->hvf->fd, HV_X86_R13, env->regs[13]); + wreg(cpu_state->hvf->fd, HV_X86_R14, env->regs[14]); + wreg(cpu_state->hvf->fd, HV_X86_R15, env->regs[15]); + wreg(cpu_state->hvf->fd, HV_X86_RFLAGS, env->eflags); + wreg(cpu_state->hvf->fd, HV_X86_RIP, env->eip); - wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0); + 
wreg(cpu_state->hvf->fd, HV_X86_XCR0, env->xcr0); hvf_put_xsave(cpu_state); @@ -274,14 +274,14 @@ int hvf_put_registers(CPUState *cpu_state) hvf_put_msrs(cpu_state); - wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]); - wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]); - wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]); - wreg(cpu_state->hvf_fd, HV_X86_DR3, env->dr[3]); - wreg(cpu_state->hvf_fd, HV_X86_DR4, env->dr[4]); - wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]); - wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]); - wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]); + wreg(cpu_state->hvf->fd, HV_X86_DR0, env->dr[0]); + wreg(cpu_state->hvf->fd, HV_X86_DR1, env->dr[1]); + wreg(cpu_state->hvf->fd, HV_X86_DR2, env->dr[2]); + wreg(cpu_state->hvf->fd, HV_X86_DR3, env->dr[3]); + wreg(cpu_state->hvf->fd, HV_X86_DR4, env->dr[4]); + wreg(cpu_state->hvf->fd, HV_X86_DR5, env->dr[5]); + wreg(cpu_state->hvf->fd, HV_X86_DR6, env->dr[6]); + wreg(cpu_state->hvf->fd, HV_X86_DR7, env->dr[7]); return 0; } @@ -291,40 +291,40 @@ int hvf_get_registers(CPUState *cpu_state) X86CPU *x86cpu = X86_CPU(cpu_state); CPUX86State *env = &x86cpu->env; - env->regs[R_EAX] = rreg(cpu_state->hvf_fd, HV_X86_RAX); - env->regs[R_EBX] = rreg(cpu_state->hvf_fd, HV_X86_RBX); - env->regs[R_ECX] = rreg(cpu_state->hvf_fd, HV_X86_RCX); - env->regs[R_EDX] = rreg(cpu_state->hvf_fd, HV_X86_RDX); - env->regs[R_EBP] = rreg(cpu_state->hvf_fd, HV_X86_RBP); - env->regs[R_ESP] = rreg(cpu_state->hvf_fd, HV_X86_RSP); - env->regs[R_ESI] = rreg(cpu_state->hvf_fd, HV_X86_RSI); - env->regs[R_EDI] = rreg(cpu_state->hvf_fd, HV_X86_RDI); - env->regs[8] = rreg(cpu_state->hvf_fd, HV_X86_R8); - env->regs[9] = rreg(cpu_state->hvf_fd, HV_X86_R9); - env->regs[10] = rreg(cpu_state->hvf_fd, HV_X86_R10); - env->regs[11] = rreg(cpu_state->hvf_fd, HV_X86_R11); - env->regs[12] = rreg(cpu_state->hvf_fd, HV_X86_R12); - env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13); - env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14); - env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15); + env->regs[R_EAX] = rreg(cpu_state->hvf->fd, HV_X86_RAX); + env->regs[R_EBX] = rreg(cpu_state->hvf->fd, HV_X86_RBX); + env->regs[R_ECX] = rreg(cpu_state->hvf->fd, HV_X86_RCX); + env->regs[R_EDX] = rreg(cpu_state->hvf->fd, HV_X86_RDX); + env->regs[R_EBP] = rreg(cpu_state->hvf->fd, HV_X86_RBP); + env->regs[R_ESP] = rreg(cpu_state->hvf->fd, HV_X86_RSP); + env->regs[R_ESI] = rreg(cpu_state->hvf->fd, HV_X86_RSI); + env->regs[R_EDI] = rreg(cpu_state->hvf->fd, HV_X86_RDI); + env->regs[8] = rreg(cpu_state->hvf->fd, HV_X86_R8); + env->regs[9] = rreg(cpu_state->hvf->fd, HV_X86_R9); + env->regs[10] = rreg(cpu_state->hvf->fd, HV_X86_R10); + env->regs[11] = rreg(cpu_state->hvf->fd, HV_X86_R11); + env->regs[12] = rreg(cpu_state->hvf->fd, HV_X86_R12); + env->regs[13] = rreg(cpu_state->hvf->fd, HV_X86_R13); + env->regs[14] = rreg(cpu_state->hvf->fd, HV_X86_R14); + env->regs[15] = rreg(cpu_state->hvf->fd, HV_X86_R15); - env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS); - env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP); + env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS); + env->eip = rreg(cpu_state->hvf->fd, HV_X86_RIP); hvf_get_xsave(cpu_state); - env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0); + env->xcr0 = rreg(cpu_state->hvf->fd, HV_X86_XCR0); hvf_get_segments(cpu_state); hvf_get_msrs(cpu_state); - env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0); - env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1); - env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2); - env->dr[3] = rreg(cpu_state->hvf_fd, 
HV_X86_DR3); - env->dr[4] = rreg(cpu_state->hvf_fd, HV_X86_DR4); - env->dr[5] = rreg(cpu_state->hvf_fd, HV_X86_DR5); - env->dr[6] = rreg(cpu_state->hvf_fd, HV_X86_DR6); - env->dr[7] = rreg(cpu_state->hvf_fd, HV_X86_DR7); + env->dr[0] = rreg(cpu_state->hvf->fd, HV_X86_DR0); + env->dr[1] = rreg(cpu_state->hvf->fd, HV_X86_DR1); + env->dr[2] = rreg(cpu_state->hvf->fd, HV_X86_DR2); + env->dr[3] = rreg(cpu_state->hvf->fd, HV_X86_DR3); + env->dr[4] = rreg(cpu_state->hvf->fd, HV_X86_DR4); + env->dr[5] = rreg(cpu_state->hvf->fd, HV_X86_DR5); + env->dr[6] = rreg(cpu_state->hvf->fd, HV_X86_DR6); + env->dr[7] = rreg(cpu_state->hvf->fd, HV_X86_DR7); x86_update_hflags(env); return 0; @@ -333,16 +333,16 @@ int hvf_get_registers(CPUState *cpu_state) static void vmx_set_int_window_exiting(CPUState *cpu) { uint64_t val; - val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val | + val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS); + wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val | VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING); } void vmx_clear_int_window_exiting(CPUState *cpu) { uint64_t val; - val = rvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS); - wvmcs(cpu->hvf_fd, VMCS_PRI_PROC_BASED_CTLS, val & + val = rvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS); + wvmcs(cpu->hvf->fd, VMCS_PRI_PROC_BASED_CTLS, val & ~VMCS_PRI_PROC_BASED_CTLS_INT_WINDOW_EXITING); } @@ -378,7 +378,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state) uint64_t info = 0; if (have_event) { info = vector | intr_type | VMCS_INTR_VALID; - uint64_t reason = rvmcs(cpu_state->hvf_fd, VMCS_EXIT_REASON); + uint64_t reason = rvmcs(cpu_state->hvf->fd, VMCS_EXIT_REASON); if (env->nmi_injected && reason != EXIT_REASON_TASK_SWITCH) { vmx_clear_nmi_blocking(cpu_state); } @@ -387,17 +387,17 @@ bool hvf_inject_interrupts(CPUState *cpu_state) info &= ~(1 << 12); /* clear undefined bit */ if (intr_type == VMCS_INTR_T_SWINTR || intr_type == VMCS_INTR_T_SWEXCEPTION) { - wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len); + wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INST_LENGTH, env->ins_len); } if (env->has_error_code) { - wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR, + wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_EXCEPTION_ERROR, env->error_code); /* Indicate that VMCS_ENTRY_EXCEPTION_ERROR is valid */ info |= VMCS_INTR_DEL_ERRCODE; } /*printf("reinject %lx err %d\n", info, err);*/ - wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info); + wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info); }; } @@ -405,7 +405,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state) if (!(env->hflags2 & HF2_NMI_MASK) && !(info & VMCS_INTR_VALID)) { cpu_state->interrupt_request &= ~CPU_INTERRUPT_NMI; info = VMCS_INTR_VALID | VMCS_INTR_T_NMI | EXCP02_NMI; - wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, info); + wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, info); } else { vmx_set_nmi_window_exiting(cpu_state); } @@ -417,7 +417,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state) int line = cpu_get_pic_interrupt(&x86cpu->env); cpu_state->interrupt_request &= ~CPU_INTERRUPT_HARD; if (line >= 0) { - wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INTR_INFO, line | + wvmcs(cpu_state->hvf->fd, VMCS_ENTRY_INTR_INFO, line | VMCS_INTR_VALID | VMCS_INTR_T_HWINTR); } } @@ -433,7 +433,7 @@ int hvf_process_events(CPUState *cpu_state) X86CPU *cpu = X86_CPU(cpu_state); CPUX86State *env = &cpu->env; - env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS); + env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS); if (cpu_state->interrupt_request & 
CPU_INTERRUPT_INIT) { cpu_synchronize_state(cpu_state); From patchwork Wed May 19 20:22:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268537 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AECA6C433ED for ; Wed, 19 May 2021 20:37:45 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 2338B611C2 for ; Wed, 19 May 2021 20:37:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2338B611C2 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=csgraf.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:39180 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSwq-00024g-71 for qemu-devel@archiver.kernel.org; Wed, 19 May 2021 16:37:44 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:53406) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ljSj2-0004o2-Ka; Wed, 19 May 2021 16:23:30 -0400 Received: from mail.csgraf.de ([85.25.223.15]:48282 helo=zulu616.server4you.de) by eggs.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSiu-0003Oe-A4; Wed, 19 May 2021 16:23:28 -0400 Received: from localhost.localdomain (dynamic-095-114-039-201.95.114.pool.telefonica.de [95.114.39.201]) by csgraf.de (Postfix) with ESMTPSA id D07C760806B0; Wed, 19 May 2021 22:23:02 +0200 (CEST) From: Alexander Graf To: QEMU Developers Subject: [PATCH v8 12/19] hvf: Simplify post reset/init/loadvm hooks Date: Wed, 19 May 2021 22:22:46 +0200 Message-Id: <20210519202253.76782-13-agraf@csgraf.de> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de> References: <20210519202253.76782-1-agraf@csgraf.de> MIME-Version: 1.0 Received-SPF: pass client-ip=85.25.223.15; envelope-from=agraf@csgraf.de; helo=zulu616.server4you.de X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Eduardo Habkost , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Richard Henderson , Cameron Esfahani , Roman Bolshakov , qemu-arm , Frank Yang , Paolo Bonzini , Peter Collingbourne Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" The hooks we have that call us after reset, init and loadvm really all just want to say "The reference of all register state is in the QEMU vcpu struct, please push it". 
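In other words, each hook can collapse to setting a per-vCPU dirty flag, with the actual register push deferred until the next guest entry. A minimal C sketch of that protocol, using hypothetical helper names rather than the real QEMU functions:

    #include <stdbool.h>

    typedef struct {
        bool dirty;               /* the QEMU-side struct is the reference copy */
    } VCpuSketch;

    static void push_registers(VCpuSketch *cpu)
    {
        /* hypothetical: copy QEMU register state into the hypervisor */
        (void)cpu;
    }

    static void on_reset_init_or_loadvm(VCpuSketch *cpu)
    {
        cpu->dirty = true;        /* all three hooks reduce to this one line */
    }

    static void before_guest_entry(VCpuSketch *cpu)
    {
        if (cpu->dirty) {
            push_registers(cpu);
            cpu->dirty = false;
        }
        /* ... enter the guest ... */
    }

Deferring the push also covers state changes made after a hook runs but before the vCPU executes, which is exactly the ARM PSCI case described below.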
We already have a working pushing mechanism though called cpu->vcpu_dirty, so we can just reuse that for all of the above, syncing state properly the next time we actually execute a vCPU. This fixes PSCI resets on ARM, as they modify CPU state even after the post init call has completed, but before we execute the vCPU again. To also make the scheme work for x86, we have to make sure we don't move stale eflags into our env when the vcpu state is dirty. Signed-off-by: Alexander Graf Reviewed-by: Roman Bolshakov Tested-by: Roman Bolshakov Reviewed-by: Sergio Lopez --- accel/hvf/hvf-accel-ops.c | 27 +++++++-------------------- target/i386/hvf/x86hvf.c | 5 ++++- 2 files changed, 11 insertions(+), 21 deletions(-) diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c index ded918c443..d1691be989 100644 --- a/accel/hvf/hvf-accel-ops.c +++ b/accel/hvf/hvf-accel-ops.c @@ -205,39 +205,26 @@ static void hvf_cpu_synchronize_state(CPUState *cpu) } } -static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu, - run_on_cpu_data arg) +static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu, + run_on_cpu_data arg) { - hvf_put_registers(cpu); - cpu->vcpu_dirty = false; + /* QEMU state is the reference, push it to HVF now and on next entry */ + cpu->vcpu_dirty = true; } static void hvf_cpu_synchronize_post_reset(CPUState *cpu) { - run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL); -} - -static void do_hvf_cpu_synchronize_post_init(CPUState *cpu, - run_on_cpu_data arg) -{ - hvf_put_registers(cpu); - cpu->vcpu_dirty = false; + run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL); } static void hvf_cpu_synchronize_post_init(CPUState *cpu) { - run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL); -} - -static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu, - run_on_cpu_data arg) -{ - cpu->vcpu_dirty = true; + run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL); } static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu) { - run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL); + run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL); } static void hvf_set_dirty_tracking(MemoryRegionSection *section, bool on) diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c index 28cfee4f60..2ced2c2478 100644 --- a/target/i386/hvf/x86hvf.c +++ b/target/i386/hvf/x86hvf.c @@ -433,7 +433,10 @@ int hvf_process_events(CPUState *cpu_state) X86CPU *cpu = X86_CPU(cpu_state); CPUX86State *env = &cpu->env; - env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS); + if (!cpu_state->vcpu_dirty) { + /* light weight sync for CPU_INTERRUPT_HARD and IF_MASK */ + env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS); + } if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) { cpu_synchronize_state(cpu_state); From patchwork Wed May 19 20:22:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268539 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CF88C433B4 for ; Wed, 19 May 2021 20:39:46 
+0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id A0D4E611C2 for ; Wed, 19 May 2021 20:39:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A0D4E611C2 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=csgraf.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:42956 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSym-0004aK-KL for qemu-devel@archiver.kernel.org; Wed, 19 May 2021 16:39:44 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:53504) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ljSjK-0005SS-W0; Wed, 19 May 2021 16:23:48 -0400 Received: from mail.csgraf.de ([85.25.223.15]:48286 helo=zulu616.server4you.de) by eggs.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSjF-0003Pc-90; Wed, 19 May 2021 16:23:46 -0400 Received: from localhost.localdomain (dynamic-095-114-039-201.95.114.pool.telefonica.de [95.114.39.201]) by csgraf.de (Postfix) with ESMTPSA id 8244C60806B9; Wed, 19 May 2021 22:23:03 +0200 (CEST) From: Alexander Graf To: QEMU Developers Subject: [PATCH v8 13/19] hvf: Add Apple Silicon support Date: Wed, 19 May 2021 22:22:47 +0200 Message-Id: <20210519202253.76782-14-agraf@csgraf.de> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de> References: <20210519202253.76782-1-agraf@csgraf.de> MIME-Version: 1.0 Received-SPF: pass client-ip=85.25.223.15; envelope-from=agraf@csgraf.de; helo=zulu616.server4you.de X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Eduardo Habkost , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Richard Henderson , Cameron Esfahani , Roman Bolshakov , qemu-arm , Frank Yang , Paolo Bonzini , Peter Collingbourne Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" With Apple Silicon available to the masses, it's a good time to add support for driving its virtualization extensions from QEMU. This patch adds all necessary architecture specific code to get basic VMs working. It's still pretty raw, but definitely functional. 
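For orientation, here is a self-contained sketch of the bare aarch64 Hypervisor.framework vcpu loop that this patch builds on. Error handling is trimmed, guest memory mapping (hv_vm_map()) is omitted, and the hypervisor entitlement is assumed to be granted:

    #include <Hypervisor/Hypervisor.h>

    static void run_guest(uint64_t entry_pc)
    {
        hv_vcpu_t vcpu;
        hv_vcpu_exit_t *exit;

        hv_vm_create(NULL);                  /* HV_VM_DEFAULT is NULL on arm64 */
        hv_vcpu_create(&vcpu, &exit, NULL);  /* vcpu must run on this thread */
        hv_vcpu_set_reg(vcpu, HV_REG_PC, entry_pc);

        for (;;) {
            hv_vcpu_run(vcpu);
            if (exit->reason == HV_EXIT_REASON_EXCEPTION) {
                /* decode exit->exception.syndrome (ESR_EL2 layout) here */
            }
        }
    }

In the patch proper, every hv_*() return value is checked through assert_hvf_ok().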
Known limitations: - Vtimer acknowledgement is hacky - Should implement more sysregs and fault on invalid ones then - WFI handling is missing, need to marry it with vtimer Signed-off-by: Alexander Graf Reviewed-by: Roman Bolshakov Reviewed-by: Sergio Lopez Tested-by: Sergio Lopez --- v1 -> v2: - Merge vcpu kick function patch - Implement WFI handling (allows vCPUs to sleep) - Synchronize system registers (fixes OVMF crashes and reboot) - Don't always call cpu_synchronize_state() - Use more fine grained iothread locking - Populate aa64mmfr0 from hardware v2 -> v3: - Advance PC on SMC - Use cp list interface for sysreg syncs - Do not set current_cpu - Fix sysreg isread mask - Move sysreg handling to functions - Remove WFI logic again - Revert to global iothread locking - Use Hypervisor.h on arm, hv.h does not contain aarch64 definitions v3 -> v4: - No longer include Hypervisor.h v5 -> v6: - Swap sysreg definition order. This way we're in line with asm outputs. v6 -> v7: - Remove osdep.h include from hvf_int.h - Synchronize SIMD registers as well - Prepend 0x for hex values - Convert DPRINTF to trace points - Use main event loop (fixes gdbstub issues) - Remove PSCI support, inject UDEF on HVC/SMC - Change vtimer logic to look at ctl.istatus for vtimer mask sync - Add kick callback again (fixes remote CPU notification) v7 -> v8: - Fix checkpatch errors --- MAINTAINERS | 5 + accel/hvf/hvf-accel-ops.c | 14 + include/sysemu/hvf_int.h | 9 +- meson.build | 1 + target/arm/hvf/hvf.c | 703 ++++++++++++++++++++++++++++++++++++ target/arm/hvf/trace-events | 10 + 6 files changed, 741 insertions(+), 1 deletion(-) create mode 100644 target/arm/hvf/hvf.c create mode 100644 target/arm/hvf/trace-events diff --git a/MAINTAINERS b/MAINTAINERS index 262e96714b..f3b4fdcf60 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -428,6 +428,11 @@ F: accel/accel-*.c F: accel/Makefile.objs F: accel/stubs/Makefile.objs +Apple Silicon HVF CPUs +M: Alexander Graf +S: Maintained +F: target/arm/hvf/ + X86 HVF CPUs M: Cameron Esfahani M: Roman Bolshakov diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c index d1691be989..48e402ef57 100644 --- a/accel/hvf/hvf-accel-ops.c +++ b/accel/hvf/hvf-accel-ops.c @@ -60,6 +60,10 @@ HVFState *hvf_state; +#ifdef __aarch64__ +#define HV_VM_DEFAULT NULL +#endif + /* Memory slots */ hvf_slot *hvf_find_overlap_slot(uint64_t start, uint64_t size) @@ -375,7 +379,11 @@ static int hvf_init_vcpu(CPUState *cpu) pthread_sigmask(SIG_BLOCK, NULL, &set); sigdelset(&set, SIG_IPI); +#ifdef __aarch64__ + r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL); +#else r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT); +#endif cpu->vcpu_dirty = 1; assert_hvf_ok(r); @@ -446,11 +454,17 @@ static void hvf_start_vcpu_thread(CPUState *cpu) cpu, QEMU_THREAD_JOINABLE); } +__attribute__((weak)) void hvf_kick_vcpu_thread(CPUState *cpu) +{ + cpus_kick_thread(cpu); +} + static void hvf_accel_ops_class_init(ObjectClass *oc, void *data) { AccelOpsClass *ops = ACCEL_OPS_CLASS(oc); ops->create_vcpu_thread = hvf_start_vcpu_thread; + ops->kick_vcpu_thread = hvf_kick_vcpu_thread; ops->synchronize_post_reset = hvf_cpu_synchronize_post_reset; ops->synchronize_post_init = hvf_cpu_synchronize_post_init; diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h index 8b66a4e7d0..e52d67ed5c 100644 --- a/include/sysemu/hvf_int.h +++ b/include/sysemu/hvf_int.h @@ -11,7 +11,11 @@ #ifndef HVF_INT_H #define HVF_INT_H +#ifdef __aarch64__ +#include +#else #include +#endif /* hvf_slot 
flags */ #define HVF_SLOT_LOG (1 << 0) @@ -44,7 +48,9 @@ struct HVFState { extern HVFState *hvf_state; struct hvf_vcpu_state { - int fd; + uint64_t fd; + void *exit; + bool vtimer_masked; }; void assert_hvf_ok(hv_return_t ret); @@ -54,5 +60,6 @@ int hvf_vcpu_exec(CPUState *); hvf_slot *hvf_find_overlap_slot(uint64_t, uint64_t); int hvf_put_registers(CPUState *); int hvf_get_registers(CPUState *); +void hvf_kick_vcpu_thread(CPUState *cpu); #endif diff --git a/meson.build b/meson.build index a58a75d056..698f4e9356 100644 --- a/meson.build +++ b/meson.build @@ -1856,6 +1856,7 @@ if have_system or have_user 'accel/tcg', 'hw/core', 'target/arm', + 'target/arm/hvf', 'target/hppa', 'target/i386', 'target/i386/kvm', diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c new file mode 100644 index 0000000000..3934c05979 --- /dev/null +++ b/target/arm/hvf/hvf.c @@ -0,0 +1,703 @@ +/* + * QEMU Hypervisor.framework support for Apple Silicon + + * Copyright 2020 Alexander Graf + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. + * + */ + +#include "qemu/osdep.h" +#include "qemu-common.h" +#include "qemu/error-report.h" + +#include "sysemu/runstate.h" +#include "sysemu/hvf.h" +#include "sysemu/hvf_int.h" +#include "sysemu/hw_accel.h" + +#include "exec/address-spaces.h" +#include "hw/irq.h" +#include "qemu/main-loop.h" +#include "sysemu/cpus.h" +#include "target/arm/cpu.h" +#include "target/arm/internals.h" +#include "trace/trace-target_arm_hvf.h" + +#define HVF_SYSREG(crn, crm, op0, op1, op2) \ + ENCODE_AA64_CP_REG(CP_REG_ARM64_SYSREG_CP, crn, crm, op0, op1, op2) +#define PL1_WRITE_MASK 0x4 + +#define SYSREG(op0, op1, crn, crm, op2) \ + ((op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (crm << 1)) +#define SYSREG_MASK SYSREG(0x3, 0x7, 0xf, 0xf, 0x7) +#define SYSREG_CNTPCT_EL0 SYSREG(3, 3, 14, 0, 1) +#define SYSREG_PMCCNTR_EL0 SYSREG(3, 3, 9, 13, 0) + +#define WFX_IS_WFE (1 << 0) + +#define TMR_CTL_ENABLE (1 << 0) +#define TMR_CTL_IMASK (1 << 1) +#define TMR_CTL_ISTATUS (1 << 2) + +struct hvf_reg_match { + int reg; + uint64_t offset; +}; + +static const struct hvf_reg_match hvf_reg_match[] = { + { HV_REG_X0, offsetof(CPUARMState, xregs[0]) }, + { HV_REG_X1, offsetof(CPUARMState, xregs[1]) }, + { HV_REG_X2, offsetof(CPUARMState, xregs[2]) }, + { HV_REG_X3, offsetof(CPUARMState, xregs[3]) }, + { HV_REG_X4, offsetof(CPUARMState, xregs[4]) }, + { HV_REG_X5, offsetof(CPUARMState, xregs[5]) }, + { HV_REG_X6, offsetof(CPUARMState, xregs[6]) }, + { HV_REG_X7, offsetof(CPUARMState, xregs[7]) }, + { HV_REG_X8, offsetof(CPUARMState, xregs[8]) }, + { HV_REG_X9, offsetof(CPUARMState, xregs[9]) }, + { HV_REG_X10, offsetof(CPUARMState, xregs[10]) }, + { HV_REG_X11, offsetof(CPUARMState, xregs[11]) }, + { HV_REG_X12, offsetof(CPUARMState, xregs[12]) }, + { HV_REG_X13, offsetof(CPUARMState, xregs[13]) }, + { HV_REG_X14, offsetof(CPUARMState, xregs[14]) }, + { HV_REG_X15, offsetof(CPUARMState, xregs[15]) }, + { HV_REG_X16, offsetof(CPUARMState, xregs[16]) }, + { HV_REG_X17, offsetof(CPUARMState, xregs[17]) }, + { HV_REG_X18, offsetof(CPUARMState, xregs[18]) }, + { HV_REG_X19, offsetof(CPUARMState, xregs[19]) }, + { HV_REG_X20, offsetof(CPUARMState, xregs[20]) }, + { HV_REG_X21, offsetof(CPUARMState, xregs[21]) }, + { HV_REG_X22, offsetof(CPUARMState, xregs[22]) }, + { HV_REG_X23, offsetof(CPUARMState, xregs[23]) }, + { HV_REG_X24, offsetof(CPUARMState, xregs[24]) }, + { HV_REG_X25, offsetof(CPUARMState, xregs[25]) }, + { 
HV_REG_X26, offsetof(CPUARMState, xregs[26]) }, + { HV_REG_X27, offsetof(CPUARMState, xregs[27]) }, + { HV_REG_X28, offsetof(CPUARMState, xregs[28]) }, + { HV_REG_X29, offsetof(CPUARMState, xregs[29]) }, + { HV_REG_X30, offsetof(CPUARMState, xregs[30]) }, + { HV_REG_PC, offsetof(CPUARMState, pc) }, +}; + +static const struct hvf_reg_match hvf_fpreg_match[] = { + { HV_SIMD_FP_REG_Q0, offsetof(CPUARMState, vfp.zregs[0]) }, + { HV_SIMD_FP_REG_Q1, offsetof(CPUARMState, vfp.zregs[1]) }, + { HV_SIMD_FP_REG_Q2, offsetof(CPUARMState, vfp.zregs[2]) }, + { HV_SIMD_FP_REG_Q3, offsetof(CPUARMState, vfp.zregs[3]) }, + { HV_SIMD_FP_REG_Q4, offsetof(CPUARMState, vfp.zregs[4]) }, + { HV_SIMD_FP_REG_Q5, offsetof(CPUARMState, vfp.zregs[5]) }, + { HV_SIMD_FP_REG_Q6, offsetof(CPUARMState, vfp.zregs[6]) }, + { HV_SIMD_FP_REG_Q7, offsetof(CPUARMState, vfp.zregs[7]) }, + { HV_SIMD_FP_REG_Q8, offsetof(CPUARMState, vfp.zregs[8]) }, + { HV_SIMD_FP_REG_Q9, offsetof(CPUARMState, vfp.zregs[9]) }, + { HV_SIMD_FP_REG_Q10, offsetof(CPUARMState, vfp.zregs[10]) }, + { HV_SIMD_FP_REG_Q11, offsetof(CPUARMState, vfp.zregs[11]) }, + { HV_SIMD_FP_REG_Q12, offsetof(CPUARMState, vfp.zregs[12]) }, + { HV_SIMD_FP_REG_Q13, offsetof(CPUARMState, vfp.zregs[13]) }, + { HV_SIMD_FP_REG_Q14, offsetof(CPUARMState, vfp.zregs[14]) }, + { HV_SIMD_FP_REG_Q15, offsetof(CPUARMState, vfp.zregs[15]) }, + { HV_SIMD_FP_REG_Q16, offsetof(CPUARMState, vfp.zregs[16]) }, + { HV_SIMD_FP_REG_Q17, offsetof(CPUARMState, vfp.zregs[17]) }, + { HV_SIMD_FP_REG_Q18, offsetof(CPUARMState, vfp.zregs[18]) }, + { HV_SIMD_FP_REG_Q19, offsetof(CPUARMState, vfp.zregs[19]) }, + { HV_SIMD_FP_REG_Q20, offsetof(CPUARMState, vfp.zregs[20]) }, + { HV_SIMD_FP_REG_Q21, offsetof(CPUARMState, vfp.zregs[21]) }, + { HV_SIMD_FP_REG_Q22, offsetof(CPUARMState, vfp.zregs[22]) }, + { HV_SIMD_FP_REG_Q23, offsetof(CPUARMState, vfp.zregs[23]) }, + { HV_SIMD_FP_REG_Q24, offsetof(CPUARMState, vfp.zregs[24]) }, + { HV_SIMD_FP_REG_Q25, offsetof(CPUARMState, vfp.zregs[25]) }, + { HV_SIMD_FP_REG_Q26, offsetof(CPUARMState, vfp.zregs[26]) }, + { HV_SIMD_FP_REG_Q27, offsetof(CPUARMState, vfp.zregs[27]) }, + { HV_SIMD_FP_REG_Q28, offsetof(CPUARMState, vfp.zregs[28]) }, + { HV_SIMD_FP_REG_Q29, offsetof(CPUARMState, vfp.zregs[29]) }, + { HV_SIMD_FP_REG_Q30, offsetof(CPUARMState, vfp.zregs[30]) }, + { HV_SIMD_FP_REG_Q31, offsetof(CPUARMState, vfp.zregs[31]) }, +}; + +struct hvf_sreg_match { + int reg; + uint32_t key; +}; + +static const struct hvf_sreg_match hvf_sreg_match[] = { + { HV_SYS_REG_DBGBVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR0_EL1, HVF_SYSREG(0, 0, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR0_EL1, HVF_SYSREG(0, 0, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR1_EL1, HVF_SYSREG(0, 1, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR1_EL1, HVF_SYSREG(0, 1, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR2_EL1, HVF_SYSREG(0, 2, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR2_EL1, HVF_SYSREG(0, 2, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR3_EL1, HVF_SYSREG(0, 3, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR3_EL1, HVF_SYSREG(0, 3, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR4_EL1, HVF_SYSREG(0, 4, 14, 
0, 5) }, + { HV_SYS_REG_DBGWVR4_EL1, HVF_SYSREG(0, 4, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR4_EL1, HVF_SYSREG(0, 4, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR5_EL1, HVF_SYSREG(0, 5, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR5_EL1, HVF_SYSREG(0, 5, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR6_EL1, HVF_SYSREG(0, 6, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR6_EL1, HVF_SYSREG(0, 6, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR7_EL1, HVF_SYSREG(0, 7, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR7_EL1, HVF_SYSREG(0, 7, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR8_EL1, HVF_SYSREG(0, 8, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR8_EL1, HVF_SYSREG(0, 8, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR9_EL1, HVF_SYSREG(0, 9, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR9_EL1, HVF_SYSREG(0, 9, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR10_EL1, HVF_SYSREG(0, 10, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR10_EL1, HVF_SYSREG(0, 10, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR11_EL1, HVF_SYSREG(0, 11, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR11_EL1, HVF_SYSREG(0, 11, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR12_EL1, HVF_SYSREG(0, 12, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR12_EL1, HVF_SYSREG(0, 12, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR13_EL1, HVF_SYSREG(0, 13, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR13_EL1, HVF_SYSREG(0, 13, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR14_EL1, HVF_SYSREG(0, 14, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR14_EL1, HVF_SYSREG(0, 14, 14, 0, 7) }, + + { HV_SYS_REG_DBGBVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 4) }, + { HV_SYS_REG_DBGBCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 5) }, + { HV_SYS_REG_DBGWVR15_EL1, HVF_SYSREG(0, 15, 14, 0, 6) }, + { HV_SYS_REG_DBGWCR15_EL1, HVF_SYSREG(0, 15, 14, 0, 7) }, + +#ifdef SYNC_NO_RAW_REGS + /* + * The registers below are manually synced on init because they are + * marked as NO_RAW. We still list them to make number space sync easier. + */ + { HV_SYS_REG_MDCCINT_EL1, HVF_SYSREG(0, 2, 2, 0, 0) }, + { HV_SYS_REG_MIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 0) }, + { HV_SYS_REG_MPIDR_EL1, HVF_SYSREG(0, 0, 3, 0, 5) }, + { HV_SYS_REG_ID_AA64PFR0_EL1, HVF_SYSREG(0, 4, 3, 0, 0) }, +#endif + { HV_SYS_REG_ID_AA64PFR1_EL1, HVF_SYSREG(0, 4, 3, 0, 2) }, + { HV_SYS_REG_ID_AA64DFR0_EL1, HVF_SYSREG(0, 5, 3, 0, 0) }, + { HV_SYS_REG_ID_AA64DFR1_EL1, HVF_SYSREG(0, 5, 3, 0, 1) }, + { HV_SYS_REG_ID_AA64ISAR0_EL1, HVF_SYSREG(0, 6, 3, 0, 0) }, + { HV_SYS_REG_ID_AA64ISAR1_EL1, HVF_SYSREG(0, 6, 3, 0, 1) }, +#ifdef SYNC_NO_MMFR0 + /* We keep the hardware MMFR0 around. 
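Guests should not be promised page sizes or PA ranges the hardware cannot back;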
HW limits are there anyway */ + { HV_SYS_REG_ID_AA64MMFR0_EL1, HVF_SYSREG(0, 7, 3, 0, 0) }, +#endif + { HV_SYS_REG_ID_AA64MMFR1_EL1, HVF_SYSREG(0, 7, 3, 0, 1) }, + { HV_SYS_REG_ID_AA64MMFR2_EL1, HVF_SYSREG(0, 7, 3, 0, 2) }, + + { HV_SYS_REG_MDSCR_EL1, HVF_SYSREG(0, 2, 2, 0, 2) }, + { HV_SYS_REG_SCTLR_EL1, HVF_SYSREG(1, 0, 3, 0, 0) }, + { HV_SYS_REG_CPACR_EL1, HVF_SYSREG(1, 0, 3, 0, 2) }, + { HV_SYS_REG_TTBR0_EL1, HVF_SYSREG(2, 0, 3, 0, 0) }, + { HV_SYS_REG_TTBR1_EL1, HVF_SYSREG(2, 0, 3, 0, 1) }, + { HV_SYS_REG_TCR_EL1, HVF_SYSREG(2, 0, 3, 0, 2) }, + + { HV_SYS_REG_APIAKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 0) }, + { HV_SYS_REG_APIAKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 1) }, + { HV_SYS_REG_APIBKEYLO_EL1, HVF_SYSREG(2, 1, 3, 0, 2) }, + { HV_SYS_REG_APIBKEYHI_EL1, HVF_SYSREG(2, 1, 3, 0, 3) }, + { HV_SYS_REG_APDAKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 0) }, + { HV_SYS_REG_APDAKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 1) }, + { HV_SYS_REG_APDBKEYLO_EL1, HVF_SYSREG(2, 2, 3, 0, 2) }, + { HV_SYS_REG_APDBKEYHI_EL1, HVF_SYSREG(2, 2, 3, 0, 3) }, + { HV_SYS_REG_APGAKEYLO_EL1, HVF_SYSREG(2, 3, 3, 0, 0) }, + { HV_SYS_REG_APGAKEYHI_EL1, HVF_SYSREG(2, 3, 3, 0, 1) }, + + { HV_SYS_REG_SPSR_EL1, HVF_SYSREG(4, 0, 3, 1, 0) }, + { HV_SYS_REG_ELR_EL1, HVF_SYSREG(4, 0, 3, 0, 1) }, + { HV_SYS_REG_SP_EL0, HVF_SYSREG(4, 1, 3, 0, 0) }, + { HV_SYS_REG_AFSR0_EL1, HVF_SYSREG(5, 1, 3, 0, 0) }, + { HV_SYS_REG_AFSR1_EL1, HVF_SYSREG(5, 1, 3, 0, 1) }, + { HV_SYS_REG_ESR_EL1, HVF_SYSREG(5, 2, 3, 0, 0) }, + { HV_SYS_REG_FAR_EL1, HVF_SYSREG(6, 0, 3, 0, 0) }, + { HV_SYS_REG_PAR_EL1, HVF_SYSREG(7, 4, 3, 0, 0) }, + { HV_SYS_REG_MAIR_EL1, HVF_SYSREG(10, 2, 3, 0, 0) }, + { HV_SYS_REG_AMAIR_EL1, HVF_SYSREG(10, 3, 3, 0, 0) }, + { HV_SYS_REG_VBAR_EL1, HVF_SYSREG(12, 0, 3, 0, 0) }, + { HV_SYS_REG_CONTEXTIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 1) }, + { HV_SYS_REG_TPIDR_EL1, HVF_SYSREG(13, 0, 3, 0, 4) }, + { HV_SYS_REG_CNTKCTL_EL1, HVF_SYSREG(14, 1, 3, 0, 0) }, + { HV_SYS_REG_CSSELR_EL1, HVF_SYSREG(0, 0, 3, 2, 0) }, + { HV_SYS_REG_TPIDR_EL0, HVF_SYSREG(13, 0, 3, 3, 2) }, + { HV_SYS_REG_TPIDRRO_EL0, HVF_SYSREG(13, 0, 3, 3, 3) }, + { HV_SYS_REG_CNTV_CTL_EL0, HVF_SYSREG(14, 3, 3, 3, 1) }, + { HV_SYS_REG_CNTV_CVAL_EL0, HVF_SYSREG(14, 3, 3, 3, 2) }, + { HV_SYS_REG_SP_EL1, HVF_SYSREG(4, 1, 3, 4, 0) }, +}; + +int hvf_get_registers(CPUState *cpu) +{ + ARMCPU *arm_cpu = ARM_CPU(cpu); + CPUARMState *env = &arm_cpu->env; + hv_return_t ret; + uint64_t val; + hv_simd_fp_uchar16_t fpval; + int i; + + for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) { + ret = hv_vcpu_get_reg(cpu->hvf->fd, hvf_reg_match[i].reg, &val); + *(uint64_t *)((void *)env + hvf_reg_match[i].offset) = val; + assert_hvf_ok(ret); + } + + for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) { + ret = hv_vcpu_get_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg, + &fpval); + memcpy((void *)env + hvf_fpreg_match[i].offset, &fpval, sizeof(fpval)); + assert_hvf_ok(ret); + } + + val = 0; + ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPCR, &val); + assert_hvf_ok(ret); + vfp_set_fpcr(env, val); + + val = 0; + ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_FPSR, &val); + assert_hvf_ok(ret); + vfp_set_fpsr(env, val); + + ret = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_CPSR, &val); + assert_hvf_ok(ret); + pstate_write(env, val); + + for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) { + ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, &val); + assert_hvf_ok(ret); + + arm_cpu->cpreg_values[i] = val; + } + write_list_to_cpustate(arm_cpu); + + return 0; +} + +int hvf_put_registers(CPUState *cpu) +{ + ARMCPU *arm_cpu = 
ARM_CPU(cpu); + CPUARMState *env = &arm_cpu->env; + hv_return_t ret; + uint64_t val; + hv_simd_fp_uchar16_t fpval; + int i; + + for (i = 0; i < ARRAY_SIZE(hvf_reg_match); i++) { + val = *(uint64_t *)((void *)env + hvf_reg_match[i].offset); + ret = hv_vcpu_set_reg(cpu->hvf->fd, hvf_reg_match[i].reg, val); + assert_hvf_ok(ret); + } + + for (i = 0; i < ARRAY_SIZE(hvf_fpreg_match); i++) { + memcpy(&fpval, (void *)env + hvf_fpreg_match[i].offset, sizeof(fpval)); + ret = hv_vcpu_set_simd_fp_reg(cpu->hvf->fd, hvf_fpreg_match[i].reg, + fpval); + assert_hvf_ok(ret); + } + + ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPCR, vfp_get_fpcr(env)); + assert_hvf_ok(ret); + + ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_FPSR, vfp_get_fpsr(env)); + assert_hvf_ok(ret); + + ret = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_CPSR, pstate_read(env)); + assert_hvf_ok(ret); + + write_cpustate_to_list(arm_cpu, false); + for (i = 0; i < ARRAY_SIZE(hvf_sreg_match); i++) { + val = arm_cpu->cpreg_values[i]; + ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, hvf_sreg_match[i].reg, val); + assert_hvf_ok(ret); + } + + return 0; +} + +static void flush_cpu_state(CPUState *cpu) +{ + if (cpu->vcpu_dirty) { + hvf_put_registers(cpu); + cpu->vcpu_dirty = false; + } +} + +static void hvf_set_reg(CPUState *cpu, int rt, uint64_t val) +{ + hv_return_t r; + + flush_cpu_state(cpu); + + if (rt < 31) { + r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_X0 + rt, val); + assert_hvf_ok(r); + } +} + +static uint64_t hvf_get_reg(CPUState *cpu, int rt) +{ + uint64_t val = 0; + hv_return_t r; + + flush_cpu_state(cpu); + + if (rt < 31) { + r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_X0 + rt, &val); + assert_hvf_ok(r); + } + + return val; +} + +void hvf_arch_vcpu_destroy(CPUState *cpu) +{ +} + +int hvf_arch_init_vcpu(CPUState *cpu) +{ + ARMCPU *arm_cpu = ARM_CPU(cpu); + CPUARMState *env = &arm_cpu->env; + uint32_t sregs_match_len = ARRAY_SIZE(hvf_sreg_match); + uint64_t pfr; + hv_return_t ret; + int i; + + env->aarch64 = 1; + asm volatile("mrs %0, cntfrq_el0" : "=r"(arm_cpu->gt_cntfrq_hz)); + + /* Allocate enough space for our sysreg sync */ + arm_cpu->cpreg_indexes = g_renew(uint64_t, arm_cpu->cpreg_indexes, + sregs_match_len); + arm_cpu->cpreg_values = g_renew(uint64_t, arm_cpu->cpreg_values, + sregs_match_len); + arm_cpu->cpreg_vmstate_indexes = g_renew(uint64_t, + arm_cpu->cpreg_vmstate_indexes, + sregs_match_len); + arm_cpu->cpreg_vmstate_values = g_renew(uint64_t, + arm_cpu->cpreg_vmstate_values, + sregs_match_len); + + memset(arm_cpu->cpreg_values, 0, sregs_match_len * sizeof(uint64_t)); + arm_cpu->cpreg_array_len = sregs_match_len; + arm_cpu->cpreg_vmstate_array_len = sregs_match_len; + + /* Populate cp list for all known sysregs */ + for (i = 0; i < sregs_match_len; i++) { + const ARMCPRegInfo *ri; + + arm_cpu->cpreg_indexes[i] = cpreg_to_kvm_id(hvf_sreg_match[i].key); + + ri = get_arm_cp_reginfo(arm_cpu->cp_regs, hvf_sreg_match[i].key); + if (ri) { + assert(!(ri->type & ARM_CP_NO_RAW)); + } + } + write_cpustate_to_list(arm_cpu, false); + + /* Set CP_NO_RAW system registers on init */ + ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MIDR_EL1, + arm_cpu->midr); + assert_hvf_ok(ret); + + ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_MPIDR_EL1, + arm_cpu->mp_affinity); + assert_hvf_ok(ret); + + ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, &pfr); + assert_hvf_ok(ret); + pfr |= env->gicv3state ? 
(1 << 24) : 0; + ret = hv_vcpu_set_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64PFR0_EL1, pfr); + assert_hvf_ok(ret); + + /* We're limited to underlying hardware caps, override internal versions */ + ret = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_ID_AA64MMFR0_EL1, + &arm_cpu->isar.id_aa64mmfr0); + assert_hvf_ok(ret); + + return 0; +} + +void hvf_kick_vcpu_thread(CPUState *cpu) +{ + hv_vcpus_exit(&cpu->hvf->fd, 1); +} + +static void hvf_raise_exception(CPUARMState *env, uint32_t excp, + uint32_t syndrome) +{ + unsigned int new_el = 1; + unsigned int old_mode = pstate_read(env); + unsigned int new_mode = aarch64_pstate_mode(new_el, true); + target_ulong addr = env->cp15.vbar_el[new_el]; + + env->cp15.esr_el[new_el] = syndrome; + aarch64_save_sp(env, arm_current_el(env)); + env->elr_el[new_el] = env->pc; + env->banked_spsr[aarch64_banked_spsr_index(new_el)] = old_mode; + pstate_write(env, PSTATE_DAIF | new_mode); + aarch64_restore_sp(env, new_el); + env->pc = addr; +} + +static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg) +{ + ARMCPU *arm_cpu = ARM_CPU(cpu); + uint64_t val = 0; + + switch (reg) { + case SYSREG_CNTPCT_EL0: + val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) / + gt_cntfrq_period_ns(arm_cpu); + break; + case SYSREG_PMCCNTR_EL0: + val = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL); + break; + default: + trace_hvf_unhandled_sysreg_read(reg, + (reg >> 20) & 0x3, + (reg >> 14) & 0x7, + (reg >> 10) & 0xf, + (reg >> 1) & 0xf, + (reg >> 17) & 0x7); + break; + } + + return val; +} + +static void hvf_sysreg_write(CPUState *cpu, uint32_t reg, uint64_t val) +{ + switch (reg) { + case SYSREG_CNTPCT_EL0: + break; + default: + trace_hvf_unhandled_sysreg_write(reg, + (reg >> 20) & 0x3, + (reg >> 14) & 0x7, + (reg >> 10) & 0xf, + (reg >> 1) & 0xf, + (reg >> 17) & 0x7); + break; + } +} + +static int hvf_inject_interrupts(CPUState *cpu) +{ + if (cpu->interrupt_request & CPU_INTERRUPT_FIQ) { + trace_hvf_inject_fiq(); + hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_FIQ, + true); + } + + if (cpu->interrupt_request & CPU_INTERRUPT_HARD) { + trace_hvf_inject_irq(); + hv_vcpu_set_pending_interrupt(cpu->hvf->fd, HV_INTERRUPT_TYPE_IRQ, + true); + } + + return 0; +} + +static void hvf_sync_vtimer(CPUState *cpu) +{ + ARMCPU *arm_cpu = ARM_CPU(cpu); + hv_return_t r; + uint64_t ctl; + bool irq_state; + + if (!cpu->hvf->vtimer_masked) { + /* We will get notified on vtimer changes by hvf, nothing to do */ + return; + } + + r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, &ctl); + assert_hvf_ok(r); + + irq_state = (ctl & (TMR_CTL_ENABLE | TMR_CTL_IMASK | TMR_CTL_ISTATUS)) == + (TMR_CTL_ENABLE | TMR_CTL_ISTATUS); + qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], irq_state); + + if (!irq_state) { + /* Timer no longer asserting, we can unmask it */ + hv_vcpu_set_vtimer_mask(cpu->hvf->fd, false); + cpu->hvf->vtimer_masked = false; + } +} + +int hvf_vcpu_exec(CPUState *cpu) +{ + ARMCPU *arm_cpu = ARM_CPU(cpu); + CPUARMState *env = &arm_cpu->env; + hv_vcpu_exit_t *hvf_exit = cpu->hvf->exit; + hv_return_t r; + bool advance_pc = false; + + flush_cpu_state(cpu); + + hvf_sync_vtimer(cpu); + + if (hvf_inject_interrupts(cpu)) { + return EXCP_INTERRUPT; + } + + if (cpu->halted) { + return EXCP_HLT; + } + + qemu_mutex_unlock_iothread(); + assert_hvf_ok(hv_vcpu_run(cpu->hvf->fd)); + + /* handle VMEXIT */ + uint64_t exit_reason = hvf_exit->reason; + uint64_t syndrome = hvf_exit->exception.syndrome; + uint32_t ec = syn_get_ec(syndrome); + + qemu_mutex_lock_iothread(); + switch (exit_reason) { + 
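/*
+ * Exits fall into three classes here: guest exceptions (decoded
+ * further below via the ESR exception class), vtimer assertion, and
+ * explicit kicks issued through hv_vcpus_exit().
+ */
+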
case HV_EXIT_REASON_EXCEPTION: + /* This is the main one, handle below. */ + break; + case HV_EXIT_REASON_VTIMER_ACTIVATED: + qemu_set_irq(arm_cpu->gt_timer_outputs[GTIMER_VIRT], 1); + cpu->hvf->vtimer_masked = true; + return 0; + case HV_EXIT_REASON_CANCELED: + /* we got kicked, no exit to process */ + return 0; + default: + assert(0); + } + + switch (ec) { + case EC_DATAABORT: { + bool isv = syndrome & ARM_EL_ISV; + bool iswrite = (syndrome >> 6) & 1; + bool s1ptw = (syndrome >> 7) & 1; + uint32_t sas = (syndrome >> 22) & 3; + uint32_t len = 1 << sas; + uint32_t srt = (syndrome >> 16) & 0x1f; + uint64_t val = 0; + + trace_hvf_data_abort(env->pc, hvf_exit->exception.virtual_address, + hvf_exit->exception.physical_address, isv, + iswrite, s1ptw, len, srt); + + assert(isv); + + if (iswrite) { + val = hvf_get_reg(cpu, srt); + address_space_write(&address_space_memory, + hvf_exit->exception.physical_address, + MEMTXATTRS_UNSPECIFIED, &val, len); + } else { + address_space_read(&address_space_memory, + hvf_exit->exception.physical_address, + MEMTXATTRS_UNSPECIFIED, &val, len); + hvf_set_reg(cpu, srt, val); + } + + advance_pc = true; + break; + } + case EC_SYSTEMREGISTERTRAP: { + bool isread = (syndrome >> 0) & 1; + uint32_t rt = (syndrome >> 5) & 0x1f; + uint32_t reg = syndrome & SYSREG_MASK; + uint64_t val = 0; + + if (isread) { + val = hvf_sysreg_read(cpu, reg); + trace_hvf_sysreg_read(reg, + (reg >> 20) & 0x3, + (reg >> 14) & 0x7, + (reg >> 10) & 0xf, + (reg >> 1) & 0xf, + (reg >> 17) & 0x7, + val); + hvf_set_reg(cpu, rt, val); + } else { + val = hvf_get_reg(cpu, rt); + trace_hvf_sysreg_write(reg, + (reg >> 20) & 0x3, + (reg >> 14) & 0x7, + (reg >> 10) & 0xf, + (reg >> 1) & 0xf, + (reg >> 17) & 0x7, + val); + hvf_sysreg_write(cpu, reg, val); + } + + advance_pc = true; + break; + } + case EC_WFX_TRAP: + advance_pc = true; + break; + case EC_AA64_HVC: + cpu_synchronize_state(cpu); + trace_hvf_unknown_hvf(env->xregs[0]); + hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized()); + break; + case EC_AA64_SMC: + cpu_synchronize_state(cpu); + trace_hvf_unknown_smc(env->xregs[0]); + hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized()); + break; + default: + cpu_synchronize_state(cpu); + trace_hvf_exit(syndrome, ec, env->pc); + error_report("0x%llx: unhandled exit 0x%llx", env->pc, exit_reason); + } + + if (advance_pc) { + uint64_t pc; + + flush_cpu_state(cpu); + + r = hv_vcpu_get_reg(cpu->hvf->fd, HV_REG_PC, &pc); + assert_hvf_ok(r); + pc += 4; + r = hv_vcpu_set_reg(cpu->hvf->fd, HV_REG_PC, pc); + assert_hvf_ok(r); + } + + return 0; +} diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events new file mode 100644 index 0000000000..49a547dcf6 --- /dev/null +++ b/target/arm/hvf/trace-events @@ -0,0 +1,10 @@ +hvf_unhandled_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg read 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)" +hvf_unhandled_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2) "unhandled sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d)" +hvf_inject_fiq(void) "injecting FIQ" +hvf_inject_irq(void) "injecting IRQ" +hvf_data_abort(uint64_t pc, uint64_t va, uint64_t pa, bool isv, bool iswrite, bool s1ptw, uint32_t len, uint32_t srt) "data abort: [pc=0x%"PRIx64" va=0x%016"PRIx64" pa=0x%016"PRIx64" isv=%d iswrite=%d s1ptw=%d len=%d srt=%d]" +hvf_sysreg_read(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg read 
0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d) = 0x%016"PRIx64 +hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_t crm, uint32_t op2, uint64_t val) "sysreg write 0x%08x (op0=%d op1=%d crn=%d crm=%d op2=%d, val=0x%016"PRIx64")" +hvf_unknown_hvf(uint64_t x0) "unknown HVC! 0x%016"PRIx64 +hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64 +hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]" From patchwork Wed May 19 20:22:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268499 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3CC1BC433B4 for ; Wed, 19 May 2021 20:28:22 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id EBD536135C for ; Wed, 19 May 2021 20:28:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EBD536135C Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=csgraf.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:38426 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSnl-0007sE-2R for qemu-devel@archiver.kernel.org; Wed, 19 May 2021 16:28:21 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:53476) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ljSjH-0005HY-8c; Wed, 19 May 2021 16:23:43 -0400 Received: from mail.csgraf.de ([85.25.223.15]:48284 helo=zulu616.server4you.de) by eggs.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSjF-0003Pb-A0; Wed, 19 May 2021 16:23:43 -0400 Received: from localhost.localdomain (dynamic-095-114-039-201.95.114.pool.telefonica.de [95.114.39.201]) by csgraf.de (Postfix) with ESMTPSA id 2F87560806C2; Wed, 19 May 2021 22:23:04 +0200 (CEST) From: Alexander Graf To: QEMU Developers Subject: [PATCH v8 14/19] arm/hvf: Add a WFI handler Date: Wed, 19 May 2021 22:22:48 +0200 Message-Id: <20210519202253.76782-15-agraf@csgraf.de> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de> References: <20210519202253.76782-1-agraf@csgraf.de> MIME-Version: 1.0 Received-SPF: pass client-ip=85.25.223.15; envelope-from=agraf@csgraf.de; helo=zulu616.server4you.de X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Eduardo Habkost , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Richard Henderson , Cameron Esfahani , Roman Bolshakov , qemu-arm , Frank Yang 
, Paolo Bonzini , Peter Collingbourne Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" From: Peter Collingbourne Sleep on WFI until the VTIMER is due but allow ourselves to be woken up on IPI. In this implementation IPI is blocked on the CPU thread at startup and pselect() is used to atomically unblock the signal and begin sleeping. The signal is sent unconditionally so there's no need to worry about races between actually sleeping and the "we think we're sleeping" state. It may lead to an extra wakeup but that's better than missing it entirely. Signed-off-by: Peter Collingbourne [agraf: Remove unused 'set' variable, always advance PC on WFX trap] Signed-off-by: Alexander Graf Acked-by: Roman Bolshakov Reviewed-by: Sergio Lopez --- v6 -> v7: - Move WFI into function - Improve comment wording --- accel/hvf/hvf-accel-ops.c | 5 ++- include/sysemu/hvf_int.h | 1 + target/arm/hvf/hvf.c | 68 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 71 insertions(+), 3 deletions(-) diff --git a/accel/hvf/hvf-accel-ops.c b/accel/hvf/hvf-accel-ops.c index 48e402ef57..63ec8a6f25 100644 --- a/accel/hvf/hvf-accel-ops.c +++ b/accel/hvf/hvf-accel-ops.c @@ -369,15 +369,14 @@ static int hvf_init_vcpu(CPUState *cpu) cpu->hvf = g_malloc0(sizeof(*cpu->hvf)); /* init cpu signals */ - sigset_t set; struct sigaction sigact; memset(&sigact, 0, sizeof(sigact)); sigact.sa_handler = dummy_signal; sigaction(SIG_IPI, &sigact, NULL); - pthread_sigmask(SIG_BLOCK, NULL, &set); - sigdelset(&set, SIG_IPI); + pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask); + sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI); #ifdef __aarch64__ r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL); diff --git a/include/sysemu/hvf_int.h b/include/sysemu/hvf_int.h index e52d67ed5c..6d4eef8065 100644 --- a/include/sysemu/hvf_int.h +++ b/include/sysemu/hvf_int.h @@ -51,6 +51,7 @@ struct hvf_vcpu_state { uint64_t fd; void *exit; bool vtimer_masked; + sigset_t unblock_ipi_mask; }; void assert_hvf_ok(hv_return_t ret); diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c index 3934c05979..67002efd36 100644 --- a/target/arm/hvf/hvf.c +++ b/target/arm/hvf/hvf.c @@ -2,6 +2,7 @@ * QEMU Hypervisor.framework support for Apple Silicon * Copyright 2020 Alexander Graf + * Copyright 2020 Google LLC * * This work is licensed under the terms of the GNU GPL, version 2 or later. * See the COPYING file in the top-level directory. @@ -17,6 +18,8 @@ #include "sysemu/hvf_int.h" #include "sysemu/hw_accel.h" +#include + #include "exec/address-spaces.h" #include "hw/irq.h" #include "qemu/main-loop.h" @@ -457,6 +460,7 @@ int hvf_arch_init_vcpu(CPUState *cpu) void hvf_kick_vcpu_thread(CPUState *cpu) { + cpus_kick_thread(cpu); hv_vcpus_exit(&cpu->hvf->fd, 1); } @@ -536,6 +540,67 @@ static int hvf_inject_interrupts(CPUState *cpu) return 0; } +static void hvf_wait_for_ipi(CPUState *cpu, struct timespec *ts) +{ + /* + * Use pselect to sleep so that other threads can IPI us while we're + * sleeping. 
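+ * pselect() atomically installs unblock_ipi_mask and begins waiting, so
+ * a SIG_IPI sent by a kick either interrupts the sleep or is already
+ * pending when the call starts; no wakeup can be lost in between.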
+ */ + qatomic_mb_set(&cpu->thread_kicked, false); + qemu_mutex_unlock_iothread(); + pselect(0, 0, 0, 0, ts, &cpu->hvf->unblock_ipi_mask); + qemu_mutex_lock_iothread(); +} + +static void hvf_wfi(CPUState *cpu) +{ + ARMCPU *arm_cpu = ARM_CPU(cpu); + hv_return_t r; + uint64_t ctl; + + if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) { + /* Interrupt pending, no need to wait */ + return; + } + + r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0, + &ctl); + assert_hvf_ok(r); + + if (!(ctl & 1) || (ctl & 2)) { + /* Timer disabled or masked, just wait for an IPI. */ + hvf_wait_for_ipi(cpu, NULL); + return; + } + + uint64_t cval; + r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0, + &cval); + assert_hvf_ok(r); + + int64_t ticks_to_sleep = cval - mach_absolute_time(); + if (ticks_to_sleep < 0) { + return; + } + + uint64_t seconds = ticks_to_sleep / arm_cpu->gt_cntfrq_hz; + uint64_t nanos = + (ticks_to_sleep - arm_cpu->gt_cntfrq_hz * seconds) * + 1000000000 / arm_cpu->gt_cntfrq_hz; + + /* + * Don't sleep for less than the time a context switch would take, + * so that we can satisfy fast timer requests on the same CPU. + * Measurements on M1 show the sweet spot to be ~2ms. + */ + if (!seconds && nanos < 2000000) { + return; + } + + struct timespec ts = { seconds, nanos }; + hvf_wait_for_ipi(cpu, &ts); +} + static void hvf_sync_vtimer(CPUState *cpu) { ARMCPU *arm_cpu = ARM_CPU(cpu); @@ -670,6 +735,9 @@ int hvf_vcpu_exec(CPUState *cpu) } case EC_WFX_TRAP: advance_pc = true; + if (!(syndrome & WFX_IS_WFE)) { + hvf_wfi(cpu); + } break; case EC_AA64_HVC: cpu_synchronize_state(cpu); From patchwork Wed May 19 20:22:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268513 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 62FF8C433ED for ; Wed, 19 May 2021 20:33:56 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 10BF3611C2 for ; Wed, 19 May 2021 20:33:55 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 10BF3611C2 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=csgraf.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:54880 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSt9-00028P-2H for qemu-devel@archiver.kernel.org; Wed, 19 May 2021 16:33:55 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:53546) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ljSjO-0005W4-O9; Wed, 19 May 2021 16:23:51 -0400 Received: from mail.csgraf.de ([85.25.223.15]:48270 helo=zulu616.server4you.de) by eggs.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSjL-0003IC-Cn; Wed, 19 May 2021 16:23:50 -0400 Received: from 
localhost.localdomain (dynamic-095-114-039-201.95.114.pool.telefonica.de [95.114.39.201]) by csgraf.de (Postfix) with ESMTPSA id CD93B6080727; Wed, 19 May 2021 22:23:04 +0200 (CEST) From: Alexander Graf To: QEMU Developers Subject: [PATCH v8 15/19] hvf: arm: Implement -cpu host Date: Wed, 19 May 2021 22:22:49 +0200 Message-Id: <20210519202253.76782-16-agraf@csgraf.de> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de> References: <20210519202253.76782-1-agraf@csgraf.de> MIME-Version: 1.0 Received-SPF: pass client-ip=85.25.223.15; envelope-from=agraf@csgraf.de; helo=zulu616.server4you.de X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , Eduardo Habkost , =?utf-8?q?Philippe_Mathieu-Daud?= =?utf-8?q?=C3=A9?= , Richard Henderson , Cameron Esfahani , Roman Bolshakov , qemu-arm , Frank Yang , Paolo Bonzini , Peter Collingbourne Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" Now that we have working system register sync, we push more target CPU properties into the virtual machine. That might be useful in some situations, but is not the typical case that users want. So let's add a -cpu host option that allows them to explicitly pass all CPU capabilities of their host CPU into the guest. Signed-off-by: Alexander Graf Acked-by: Roman Bolshakov Reviewed-by: Sergio Lopez Tested-by: Sergio Lopez --- v6 -> v7: - Move function define to own header - Do not propagate SVE features for HVF - Remove stray whitespace change - Verify that EL0 and EL1 do not allow AArch32 mode - Only probe host CPU features once --- target/arm/cpu.c | 9 ++++-- target/arm/cpu.h | 2 ++ target/arm/hvf/hvf.c | 72 ++++++++++++++++++++++++++++++++++++++++++++ target/arm/hvf_arm.h | 19 ++++++++++++ target/arm/kvm_arm.h | 2 -- 5 files changed, 100 insertions(+), 4 deletions(-) create mode 100644 target/arm/hvf_arm.h diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 4eb0d2f85c..762d8a6d26 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -39,6 +39,7 @@ #include "sysemu/tcg.h" #include "sysemu/hw_accel.h" #include "kvm_arm.h" +#include "hvf_arm.h" #include "disas/capstone.h" #include "fpu/softfloat.h" @@ -1998,15 +1999,19 @@ static void arm_cpu_class_init(ObjectClass *oc, void *data) #endif /* CONFIG_TCG */ } -#ifdef CONFIG_KVM +#if defined(CONFIG_KVM) || defined(CONFIG_HVF) static void arm_host_initfn(Object *obj) { ARMCPU *cpu = ARM_CPU(obj); +#ifdef CONFIG_KVM kvm_arm_set_cpu_features_from_host(cpu); if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) { aarch64_add_sve_properties(obj); } +#else + hvf_arm_set_cpu_features_from_host(cpu); +#endif arm_cpu_post_init(obj); } @@ -2066,7 +2071,7 @@ static void arm_cpu_register_types(void) { type_register_static(&arm_cpu_type_info); -#ifdef CONFIG_KVM +#if defined(CONFIG_KVM) || defined(CONFIG_HVF) type_register_static(&host_arm_cpu_type_info); #endif } diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 616b393253..4360e77183 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -2977,6 +2977,8 @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync); #define ARM_CPU_TYPE_NAME(name) (name ARM_CPU_TYPE_SUFFIX) #define CPU_RESOLVING_TYPE TYPE_ARM_CPU +#define 
TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU + #define cpu_signal_handler cpu_arm_signal_handler #define cpu_list arm_cpu_list diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c index 67002efd36..bce46f3ed8 100644 --- a/target/arm/hvf/hvf.c +++ b/target/arm/hvf/hvf.c @@ -17,6 +17,7 @@ #include "sysemu/hvf.h" #include "sysemu/hvf_int.h" #include "sysemu/hw_accel.h" +#include "hvf_arm.h" #include @@ -44,6 +45,16 @@ #define TMR_CTL_IMASK (1 << 1) #define TMR_CTL_ISTATUS (1 << 2) +typedef struct ARMHostCPUFeatures { + ARMISARegisters isar; + uint64_t features; + uint64_t midr; + uint32_t reset_sctlr; + const char *dtb_compatible; +} ARMHostCPUFeatures; + +static ARMHostCPUFeatures arm_host_cpu_features; + struct hvf_reg_match { int reg; uint64_t offset; @@ -390,6 +401,67 @@ static uint64_t hvf_get_reg(CPUState *cpu, int rt) return val; } +static void hvf_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf) +{ + ARMISARegisters host_isar; + const struct isar_regs { + int reg; + uint64_t *val; + } regs[] = { + { HV_SYS_REG_ID_AA64PFR0_EL1, &host_isar.id_aa64pfr0 }, + { HV_SYS_REG_ID_AA64PFR1_EL1, &host_isar.id_aa64pfr1 }, + { HV_SYS_REG_ID_AA64DFR0_EL1, &host_isar.id_aa64dfr0 }, + { HV_SYS_REG_ID_AA64DFR1_EL1, &host_isar.id_aa64dfr1 }, + { HV_SYS_REG_ID_AA64ISAR0_EL1, &host_isar.id_aa64isar0 }, + { HV_SYS_REG_ID_AA64ISAR1_EL1, &host_isar.id_aa64isar1 }, + { HV_SYS_REG_ID_AA64MMFR0_EL1, &host_isar.id_aa64mmfr0 }, + { HV_SYS_REG_ID_AA64MMFR1_EL1, &host_isar.id_aa64mmfr1 }, + { HV_SYS_REG_ID_AA64MMFR2_EL1, &host_isar.id_aa64mmfr2 }, + }; + hv_vcpu_t fd; + hv_vcpu_exit_t *exit; + int i; + + ahcf->dtb_compatible = "arm,arm-v8"; + ahcf->features = (1ULL << ARM_FEATURE_V8) | + (1ULL << ARM_FEATURE_NEON) | + (1ULL << ARM_FEATURE_AARCH64) | + (1ULL << ARM_FEATURE_PMU) | + (1ULL << ARM_FEATURE_GENERIC_TIMER); + + /* We set up a small vcpu to extract host registers */ + + assert_hvf_ok(hv_vcpu_create(&fd, &exit, NULL)); + for (i = 0; i < ARRAY_SIZE(regs); i++) { + assert_hvf_ok(hv_vcpu_get_sys_reg(fd, regs[i].reg, regs[i].val)); + } + assert_hvf_ok(hv_vcpu_get_sys_reg(fd, HV_SYS_REG_MIDR_EL1, &ahcf->midr)); + assert_hvf_ok(hv_vcpu_destroy(fd)); + + ahcf->isar = host_isar; + ahcf->reset_sctlr = 0x00c50078; + + /* Make sure we don't advertise AArch32 support for EL0/EL1 */ + g_assert((host_isar.id_aa64pfr0 & 0xff) == 0x11); +} + +void hvf_arm_set_cpu_features_from_host(ARMCPU *cpu) +{ + if (!arm_host_cpu_features.dtb_compatible) { + if (!hvf_enabled()) { + cpu->host_cpu_probe_failed = true; + return; + } + hvf_arm_get_host_cpu_features(&arm_host_cpu_features); + } + + cpu->dtb_compatible = arm_host_cpu_features.dtb_compatible; + cpu->isar = arm_host_cpu_features.isar; + cpu->env.features = arm_host_cpu_features.features; + cpu->midr = arm_host_cpu_features.midr; + cpu->reset_sctlr = arm_host_cpu_features.reset_sctlr; +} + void hvf_arch_vcpu_destroy(CPUState *cpu) { } diff --git a/target/arm/hvf_arm.h b/target/arm/hvf_arm.h new file mode 100644 index 0000000000..603074a331 --- /dev/null +++ b/target/arm/hvf_arm.h @@ -0,0 +1,19 @@ +/* + * QEMU Hypervisor.framework (HVF) support -- ARM specifics + * + * Copyright (c) 2021 Alexander Graf + * + * This work is licensed under the terms of the GNU GPL, version 2 or later. + * See the COPYING file in the top-level directory. 
+ * + */ + +#ifndef QEMU_HVF_ARM_H +#define QEMU_HVF_ARM_H + +#include "qemu/accel.h" +#include "cpu.h" + +void hvf_arm_set_cpu_features_from_host(struct ARMCPU *cpu); + +#endif diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h index 34f8daa377..828dca4a4a 100644 --- a/target/arm/kvm_arm.h +++ b/target/arm/kvm_arm.h @@ -214,8 +214,6 @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try, */ void kvm_arm_destroy_scratch_host_vcpu(int *fdarray); -#define TYPE_ARM_HOST_CPU "host-" TYPE_ARM_CPU - /** * ARMHostCPUFeatures: information about the host CPU (identified * by asking the host kernel) From patchwork Wed May 19 20:22:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Graf X-Patchwork-Id: 12268519 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5A782C433ED for ; Wed, 19 May 2021 20:36:11 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id EC1E361353 for ; Wed, 19 May 2021 20:36:10 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EC1E361353 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=csgraf.de Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Received: from localhost ([::1]:35260 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSvK-0007uN-0v for qemu-devel@archiver.kernel.org; Wed, 19 May 2021 16:36:10 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:53572) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1ljSjQ-0005WH-4n; Wed, 19 May 2021 16:23:52 -0400 Received: from mail.csgraf.de ([85.25.223.15]:48288 helo=zulu616.server4you.de) by eggs.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1ljSjM-0003QQ-45; Wed, 19 May 2021 16:23:51 -0400 Received: from localhost.localdomain (dynamic-095-114-039-201.95.114.pool.telefonica.de [95.114.39.201]) by csgraf.de (Postfix) with ESMTPSA id 730736080770; Wed, 19 May 2021 22:23:05 +0200 (CEST) From: Alexander Graf To: QEMU Developers Subject: [PATCH v8 16/19] hvf: arm: Implement PSCI handling Date: Wed, 19 May 2021 22:22:50 +0200 Message-Id: <20210519202253.76782-17-agraf@csgraf.de> X-Mailer: git-send-email 2.30.1 (Apple Git-130) In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de> References: <20210519202253.76782-1-agraf@csgraf.de> MIME-Version: 1.0 Received-SPF: pass client-ip=85.25.223.15; envelope-from=agraf@csgraf.de; helo=zulu616.server4you.de X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell 

From patchwork Wed May 19 20:22:50 2021
X-Patchwork-Id: 12268519
From: Alexander Graf
To: QEMU Developers
Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
    Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
    Frank Yang, Paolo Bonzini, Peter Collingbourne
Subject: [PATCH v8 16/19] hvf: arm: Implement PSCI handling
Date: Wed, 19 May 2021 22:22:50 +0200
Message-Id: <20210519202253.76782-17-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

We need to handle PSCI calls. Most of the TCG code works for us, but we
can simplify it to handle only AArch64 mode, and we need to handle
SUSPEND differently.

This patch takes the TCG code as a template and duplicates it in HVF.

To tell the guest that we now support PSCI 0.2, update the check in
arm_cpu_initfn() as well.

Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez

---

v6 -> v7:
  - This patch integrates "arm: Set PSCI to 0.2 for HVF"

v7 -> v8:
  - Do not advance for HVC, PC is already updated by hvf
  - Fix checkpatch error

---
 target/arm/cpu.c            |   4 +-
 target/arm/hvf/hvf.c        | 123 ++++++++++++++++++++++++++++++++++--
 target/arm/hvf/trace-events |   1 +
 3 files changed, 122 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 762d8a6d26..b202d06e09 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1079,8 +1079,8 @@ static void arm_cpu_initfn(Object *obj)
     cpu->psci_version = 1; /* By default assume PSCI v0.1 */
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_NONE;
 
-    if (tcg_enabled()) {
-        cpu->psci_version = 2; /* TCG implements PSCI 0.2 */
+    if (tcg_enabled() || hvf_enabled()) {
+        cpu->psci_version = 2; /* TCG and HVF implement PSCI 0.2 */
     }
 }

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index bce46f3ed8..65c33e2a14 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -25,6 +25,7 @@
 #include "hw/irq.h"
 #include "qemu/main-loop.h"
 #include "sysemu/cpus.h"
+#include "arm-powerctl.h"
 #include "target/arm/cpu.h"
 #include "target/arm/internals.h"
 #include "trace/trace-target_arm_hvf.h"
@@ -45,6 +46,8 @@
 #define TMR_CTL_IMASK   (1 << 1)
 #define TMR_CTL_ISTATUS (1 << 2)
 
+static void hvf_wfi(CPUState *cpu);
+
 typedef struct ARMHostCPUFeatures {
     ARMISARegisters isar;
     uint64_t features;
@@ -553,6 +556,110 @@ static void hvf_raise_exception(CPUARMState *env, uint32_t excp,
     env->pc = addr;
 }
 
+static int hvf_psci_cpu_off(ARMCPU *arm_cpu)
+{
+    int32_t ret = 0;
+    ret = arm_set_cpu_off(arm_cpu->mp_affinity);
+    assert(ret == QEMU_ARM_POWERCTL_RET_SUCCESS);
+
+    return 0;
+}
+
+static int hvf_handle_psci_call(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint64_t param[4] = {
+        env->xregs[0],
+        env->xregs[1],
+        env->xregs[2],
+        env->xregs[3]
+    };
+    uint64_t context_id, mpidr;
+    bool target_aarch64 = true;
+    CPUState *target_cpu_state;
+    ARMCPU *target_cpu;
+    target_ulong entry;
+    int target_el = 1;
+    int32_t ret = 0;
+
+    trace_hvf_psci_call(param[0], param[1], param[2], param[3],
+                        arm_cpu->mp_affinity);
+
+    switch (param[0]) {
+    case QEMU_PSCI_0_2_FN_PSCI_VERSION:
+        ret = QEMU_PSCI_0_2_RET_VERSION_0_2;
+        break;
+    case QEMU_PSCI_0_2_FN_MIGRATE_INFO_TYPE:
+        ret = QEMU_PSCI_0_2_RET_TOS_MIGRATION_NOT_REQUIRED; /* No trusted OS */
+        break;
+    case QEMU_PSCI_0_2_FN_AFFINITY_INFO:
+    case QEMU_PSCI_0_2_FN64_AFFINITY_INFO:
+        mpidr = param[1];
+
+        switch (param[2]) {
+        case 0:
+            target_cpu_state = arm_get_cpu_by_id(mpidr);
+            if (!target_cpu_state) {
+                ret = QEMU_PSCI_RET_INVALID_PARAMS;
+                break;
+            }
+            target_cpu = ARM_CPU(target_cpu_state);
+
+            ret = target_cpu->power_state;
+            break;
+        default:
+            /* Everything above affinity level 0 is always on. */
+            ret = 0;
+        }
+        break;
+    case QEMU_PSCI_0_2_FN_SYSTEM_RESET:
+        qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
+        /* QEMU reset and shutdown are async requests, but PSCI
+         * mandates that we never return from the reset/shutdown
+         * call, so power the CPU off now so it doesn't execute
+         * anything further.
+         */
+        return hvf_psci_cpu_off(arm_cpu);
+    case QEMU_PSCI_0_2_FN_SYSTEM_OFF:
+        qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
+        return hvf_psci_cpu_off(arm_cpu);
+    case QEMU_PSCI_0_1_FN_CPU_ON:
+    case QEMU_PSCI_0_2_FN_CPU_ON:
+    case QEMU_PSCI_0_2_FN64_CPU_ON:
+        mpidr = param[1];
+        entry = param[2];
+        context_id = param[3];
+        ret = arm_set_cpu_on(mpidr, entry, context_id,
+                             target_el, target_aarch64);
+        break;
+    case QEMU_PSCI_0_1_FN_CPU_OFF:
+    case QEMU_PSCI_0_2_FN_CPU_OFF:
+        return hvf_psci_cpu_off(arm_cpu);
+    case QEMU_PSCI_0_1_FN_CPU_SUSPEND:
+    case QEMU_PSCI_0_2_FN_CPU_SUSPEND:
+    case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
+        /* Affinity levels are not supported in QEMU */
+        if (param[1] & 0xfffe0000) {
+            ret = QEMU_PSCI_RET_INVALID_PARAMS;
+            break;
+        }
+        /* Powerdown is not supported, we always go into WFI */
+        env->xregs[0] = 0;
+        hvf_wfi(cpu);
+        break;
+    case QEMU_PSCI_0_1_FN_MIGRATE:
+    case QEMU_PSCI_0_2_FN_MIGRATE:
+        ret = QEMU_PSCI_RET_NOT_SUPPORTED;
+        break;
+    default:
+        return 1;
+    }
+
+    env->xregs[0] = ret;
+    return 0;
+}
+
 static uint64_t hvf_sysreg_read(CPUState *cpu, uint32_t reg)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -716,6 +823,8 @@ int hvf_vcpu_exec(CPUState *cpu)
     }
 
     if (cpu->halted) {
+        /* On unhalt, we usually have CPU state changes. Prepare for them. */
+        cpu_synchronize_state(cpu);
         return EXCP_HLT;
     }
@@ -813,13 +922,19 @@ int hvf_vcpu_exec(CPUState *cpu)
         break;
     case EC_AA64_HVC:
         cpu_synchronize_state(cpu);
-        trace_hvf_unknown_hvf(env->xregs[0]);
-        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        if (hvf_handle_psci_call(cpu)) {
+            trace_hvf_unknown_hvf(env->xregs[0]);
+            hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        }
         break;
     case EC_AA64_SMC:
         cpu_synchronize_state(cpu);
-        trace_hvf_unknown_smc(env->xregs[0]);
-        hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        if (!hvf_handle_psci_call(cpu)) {
+            advance_pc = true;
+        } else {
+            trace_hvf_unknown_smc(env->xregs[0]);
+            hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());
+        }
         break;
     default:
         cpu_synchronize_state(cpu);
diff --git a/target/arm/hvf/trace-events b/target/arm/hvf/trace-events
index 49a547dcf6..278b88cc62 100644
--- a/target/arm/hvf/trace-events
+++ b/target/arm/hvf/trace-events
@@ -8,3 +8,4 @@ hvf_sysreg_write(uint32_t reg, uint32_t op0, uint32_t op1, uint32_t crn, uint32_
 hvf_unknown_hvf(uint64_t x0) "unknown HVC! 0x%016"PRIx64
 hvf_unknown_smc(uint64_t x0) "unknown SMC! 0x%016"PRIx64
 hvf_exit(uint64_t syndrome, uint32_t ec, uint64_t pc) "exit: 0x%"PRIx64" [ec=0x%x pc=0x%"PRIx64"]"
+hvf_psci_call(uint64_t x0, uint64_t x1, uint64_t x2, uint64_t x3, uint32_t cpuid) "PSCI Call x0=0x%016"PRIx64" x1=0x%016"PRIx64" x2=0x%016"PRIx64" x3=0x%016"PRIx64" cpu=0x%x"
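
As background for the handler above, here is a minimal, hypothetical sketch
(not part of the patch) of what the guest side of such a call looks like: an
AArch64 guest puts the SMCCC function ID in x0 and arguments in x1-x3, traps
to the hypervisor with HVC, and reads the result back from x0 — exactly the
registers hvf_handle_psci_call() consumes. The function ID values are from the
PSCI 0.2 specification; psci_call_hvc() is an illustrative name, and the
clobber list is simplified for brevity.

#include <stdint.h>

/* PSCI 0.2 function IDs (SMC32 and SMC64 calling conventions) */
#define PSCI_0_2_FN_PSCI_VERSION  0x84000000u
#define PSCI_0_2_FN64_CPU_ON      0xc4000003u

/* Issue a PSCI call through the HVC conduit; the hypervisor reads
 * x0..x3 and writes the result back into x0. */
static inline uint64_t psci_call_hvc(uint64_t fn, uint64_t arg1,
                                     uint64_t arg2, uint64_t arg3)
{
    register uint64_t x0 __asm__("x0") = fn;
    register uint64_t x1 __asm__("x1") = arg1;
    register uint64_t x2 __asm__("x2") = arg2;
    register uint64_t x3 __asm__("x3") = arg3;

    __asm__ volatile("hvc #0"
                     : "+r"(x0)
                     : "r"(x1), "r"(x2), "r"(x3)
                     : "memory");
    return x0;
}

/* e.g. psci_call_hvc(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0) would return 0x2
 * (major 0, minor 2) once this patch reports PSCI 0.2 to the guest, and
 * psci_call_hvc(PSCI_0_2_FN64_CPU_ON, mpidr, entry, ctx) lands in the
 * arm_set_cpu_on() path above. */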

From patchwork Wed May 19 20:22:51 2021
X-Patchwork-Id: 12268541
From: Alexander Graf
To: QEMU Developers
Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
    Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
    Frank Yang, Paolo Bonzini, Peter Collingbourne
Subject: [PATCH v8 17/19] arm: Add Hypervisor.framework build target
Date: Wed, 19 May 2021 22:22:51 +0200
Message-Id: <20210519202253.76782-18-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

Now that all the logic we need to handle Hypervisor.framework on Apple
Silicon systems is in place, let's add CONFIG_HVF for aarch64 as well so
that we can build it.
Signed-off-by: Alexander Graf
Reviewed-by: Roman Bolshakov
Tested-by: Roman Bolshakov (x86 only)
Reviewed-by: Sergio Lopez
Reviewed-by: Peter Maydell

---

v1 -> v2:
  - Fix build on 32bit arm

v3 -> v4:
  - Remove i386-softmmu target

v6 -> v7:
  - Simplify HVF matching logic in meson build file

---
 meson.build                | 7 +++++++
 target/arm/hvf/meson.build | 3 +++
 target/arm/meson.build     | 2 ++
 3 files changed, 12 insertions(+)
 create mode 100644 target/arm/hvf/meson.build

diff --git a/meson.build b/meson.build
index 698f4e9356..d28bd068ff 100644
--- a/meson.build
+++ b/meson.build
@@ -77,6 +77,13 @@ else
 endif
 
 accelerator_targets = { 'CONFIG_KVM': kvm_targets }
+
+if cpu in ['aarch64']
+  accelerator_targets += {
+    'CONFIG_HVF': ['aarch64-softmmu']
+  }
+endif
+
 if cpu in ['x86', 'x86_64', 'arm', 'aarch64']
   # i386 emulator provides xenpv machine type for multiple architectures
   accelerator_targets += {

diff --git a/target/arm/hvf/meson.build b/target/arm/hvf/meson.build
new file mode 100644
index 0000000000..855e6cce5a
--- /dev/null
+++ b/target/arm/hvf/meson.build
@@ -0,0 +1,3 @@
+arm_softmmu_ss.add(when: [hvf, 'CONFIG_HVF'], if_true: files(
+  'hvf.c',
+))

diff --git a/target/arm/meson.build b/target/arm/meson.build
index 5bfaf43b50..48004bf0e6 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -57,5 +57,7 @@ arm_softmmu_ss.add(files(
   'psci.c',
 ))
 
+subdir('hvf')
+
 target_arch += {'arm': arm_ss}
 target_softmmu_arch += {'arm': arm_softmmu_ss}
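
For readers following along, building and trying the new accelerator on an
Apple Silicon host should then come down to something like the following.
This is an illustrative invocation rather than text from the series; the
--enable-hvf and --target-list configure flags and the -accel hvf run-time
option already exist for the x86_64 HVF support, but the exact binary path
and machine options depend on your setup and the rest of this series:

  ./configure --target-list=aarch64-softmmu --enable-hvf
  make
  ./qemu-system-aarch64 -machine virt -accel hvf ...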

From patchwork Wed May 19 20:22:52 2021
X-Patchwork-Id: 12268507
From: Alexander Graf
To: QEMU Developers
Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
    Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
    Frank Yang, Paolo Bonzini, Peter Collingbourne
Subject: [PATCH v8 18/19] arm: Enable Windows 10 trusted SMCCC boot call
Date: Wed, 19 May 2021 22:22:52 +0200
Message-Id: <20210519202253.76782-19-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

Windows 10 issues an SMCCC call via SMC unconditionally on boot. It lives
in the trusted application call number space, but its purpose is unknown.

In our current SMC implementation, we inject a UDEF for unknown SMC calls,
including this one. However, Windows breaks on boot when we do this.
Instead, let's return an error code.

With this and -M virt,virtualization=on I can successfully boot the
current Windows 10 Insider Preview in TCG.

Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez

---
 target/arm/kvm-consts.h | 2 ++
 target/arm/psci.c       | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/target/arm/kvm-consts.h b/target/arm/kvm-consts.h
index 580f1c1fee..4b64f98117 100644
--- a/target/arm/kvm-consts.h
+++ b/target/arm/kvm-consts.h
@@ -85,6 +85,8 @@ MISMATCH_CHECK(QEMU_PSCI_0_2_FN64_CPU_SUSPEND, PSCI_0_2_FN64_CPU_SUSPEND);
 MISMATCH_CHECK(QEMU_PSCI_0_2_FN64_CPU_ON, PSCI_0_2_FN64_CPU_ON);
 MISMATCH_CHECK(QEMU_PSCI_0_2_FN64_MIGRATE, PSCI_0_2_FN64_MIGRATE);
 
+#define QEMU_SMCCC_TC_WINDOWS10_BOOT 0xc3000001
+
 /* PSCI v0.2 return values used by TCG emulation of PSCI */
 
 /* No Trusted OS migration to worry about when offlining CPUs */

diff --git a/target/arm/psci.c b/target/arm/psci.c
index 6709e28013..4d11dd59c4 100644
--- a/target/arm/psci.c
+++ b/target/arm/psci.c
@@ -69,6 +69,7 @@ bool arm_is_psci_call(ARMCPU *cpu, int excp_type)
     case QEMU_PSCI_0_2_FN64_CPU_SUSPEND:
     case QEMU_PSCI_0_1_FN_MIGRATE:
     case QEMU_PSCI_0_2_FN_MIGRATE:
+    case QEMU_SMCCC_TC_WINDOWS10_BOOT:
         return true;
     default:
         return false;
@@ -194,6 +195,7 @@ void arm_handle_psci_call(ARMCPU *cpu)
         break;
     case QEMU_PSCI_0_1_FN_MIGRATE:
     case QEMU_PSCI_0_2_FN_MIGRATE:
+    case QEMU_SMCCC_TC_WINDOWS10_BOOT:
         ret = QEMU_PSCI_RET_NOT_SUPPORTED;
         break;
     default:
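
The call number space can be read off the function ID itself. The following
stand-alone sketch (an illustration, not part of the patch) decodes
0xc3000001 according to the Arm SMCCC function ID layout — bit 31 marks a
fast call, bit 30 the SMC64 calling convention, bits 29:24 the owning
entity, and the low 16 bits the function number:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t fn = 0xc3000001u; /* QEMU_SMCCC_TC_WINDOWS10_BOOT */

    printf("fast call:       %u\n", fn >> 31);          /* 1 */
    printf("SMC64:           %u\n", (fn >> 30) & 0x1);  /* 1 */
    printf("owning entity:   %u\n", (fn >> 24) & 0x3f); /* 3 */
    printf("function number: %u\n", fn & 0xffff);       /* 1 */

    return 0;
}

Whatever service Windows expects behind this ID, answering with a generic
error (-1, which SMCCC defines as NOT_SUPPORTED) is enough for boot to
proceed, as the diff above does.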

From patchwork Wed May 19 20:22:53 2021
X-Patchwork-Id: 12268515
From: Alexander Graf
To: QEMU Developers
Cc: Peter Maydell, Eduardo Habkost, Philippe Mathieu-Daudé,
    Richard Henderson, Cameron Esfahani, Roman Bolshakov, qemu-arm,
    Frank Yang, Paolo Bonzini, Peter Collingbourne
Subject: [PATCH v8 19/19] hvf: arm: Handle Windows 10 SMC call
Date: Wed, 19 May 2021 22:22:53 +0200
Message-Id: <20210519202253.76782-20-agraf@csgraf.de>
In-Reply-To: <20210519202253.76782-1-agraf@csgraf.de>
References: <20210519202253.76782-1-agraf@csgraf.de>

Windows 10 issues an SMCCC call via SMC unconditionally on boot. It lives
in the trusted application call number space, but its purpose is unknown.

In our current SMC implementation, we inject a UDEF for unknown SMC calls,
including this one. However, Windows breaks on boot when we do this.
Instead, let's return an error code.

With this patch applied I can successfully boot the current Windows 10
Insider Preview in HVF.
Signed-off-by: Alexander Graf
Reviewed-by: Sergio Lopez
Tested-by: Sergio Lopez

---

v7 -> v8:
  - fix checkpatch

---
 target/arm/hvf/hvf.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 65c33e2a14..be670af578 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -931,6 +931,10 @@ int hvf_vcpu_exec(CPUState *cpu)
         cpu_synchronize_state(cpu);
         if (!hvf_handle_psci_call(cpu)) {
             advance_pc = true;
+        } else if (env->xregs[0] == QEMU_SMCCC_TC_WINDOWS10_BOOT) {
+            /* This special SMC is called by Windows 10 on boot. Return error */
+            env->xregs[0] = -1;
+            advance_pc = true;
         } else {
             trace_hvf_unknown_smc(env->xregs[0]);
             hvf_raise_exception(env, EXCP_UDEF, syn_uncategorized());