From patchwork Wed Jun 26 16:28:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13713170 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6E96FC30658 for ; Wed, 26 Jun 2024 16:28:59 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.749269.1157302 (Exim 4.92) (envelope-from ) id 1sMVVk-0005uD-Oz; Wed, 26 Jun 2024 16:28:44 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 749269.1157302; Wed, 26 Jun 2024 16:28:44 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVk-0005u3-K0; Wed, 26 Jun 2024 16:28:44 +0000 Received: by outflank-mailman (input) for mailman id 749269; Wed, 26 Jun 2024 16:28:43 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVj-0005pK-0r for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:43 +0000 Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com [2a00:1450:4864:20::633]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 25b93049-33d9-11ef-b4bb-af5377834399; Wed, 26 Jun 2024 18:28:40 +0200 (CEST) Received: by mail-ej1-x633.google.com with SMTP id a640c23a62f3a-a725d756d41so135752866b.0 for ; Wed, 26 Jun 2024 09:28:40 -0700 (PDT) Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 26 Jun 2024 09:28:39 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 25b93049-33d9-11ef-b4bb-af5377834399 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1719419320; x=1720024120; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=qAI2lFL9v1i73OxDsNgyfceqLCFm8BlW5RHMtqw9d8k=; b=jgHGJ4ueZG/CuGnBJOPLAneSg22Rrabw8ZB/ZzjAOHEVM8hj2f4wtaRCED7JSl1Adm cB/2ZB5FT2uOP+mUFqx43Z07rXCoQS+l0ZTix6aFQXU5MjY2d6atVb7JiX9oEzmzIkAc 9x3DSEJvPDDg293JWYdDO/3JyBsBr3Y8f8mnA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1719419320; x=1720024120; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=qAI2lFL9v1i73OxDsNgyfceqLCFm8BlW5RHMtqw9d8k=; b=m+Nya1iTFScW0Th2IxICSNG2r4vKDkBHm8zB/sAgdp3+WL650KQ6V+aKTUN0Ofpvx7 8HUuHhh9Dqrgmi/HJPpib5q9emATwiJ5boG6PwWic9VjxwvuOk5Y6uctURLQkuO/8gL8 4LCbfiDibhvIFQkMKQvfQkIwu5dUThjcsIDhgC+XwjETKp6+IhVkWDKReJ0U7Z4gC6dU Qze2Pn7cPlpzBcLHNVWoyqqhsLxMaTkcDFuFR6p7b4aZJBhT02tG/rLQcauB2KJfCURv 7qFqIqgWGRoUSjqeJ/1JdBO7VdRFH35CQI/IIygHP9eYVySJMia4+syh0sxGAS87bLp6 6NPg== X-Gm-Message-State: 
AOJu0YyzWumUziGGYdJneE1PV+jm1zffnO7IcbBfspr001+mmYdHcY8G sGp/l0Sv/xsKS8/Y91szWqUvOtbWiM5Zc6qotaKWRXmox/fOiWuLaZjgP+697PpSIrLL2l3e8jV lKeg= X-Google-Smtp-Source: AGHT+IFL3aOpcUv5tos2lUCPGjcF7zAXhjcAbWgKdPNXb/lpgRWr7CZAG92QA3pbaZvO3I1MjsZLLQ== X-Received: by 2002:a17:907:d382:b0:a72:5d7f:dd4a with SMTP id a640c23a62f3a-a7296fa704emr8196066b.25.1719419319978; Wed, 26 Jun 2024 09:28:39 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Anthony PERARD , Oleksii Kurochko Subject: [PATCH for 4.19 v4 01/10] tools/hvmloader: Fix non-deterministic cpuid() Date: Wed, 26 Jun 2024 17:28:28 +0100 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 hvmloader's cpuid() implementation deviates from Xen's in that the value passed on ecx is unspecified. This means that when used on leaves that implement subleaves it's unspecified which one you get; though it's more than likely an invalid one. Import Xen's implementation so there are no surprises. Signed-off-by: Alejandro Vallejo --- This is a fix for a latent bug. Should go into 4.19. v4 * New patch --- tools/firmware/hvmloader/util.c | 9 --------- tools/firmware/hvmloader/util.h | 27 ++++++++++++++++++++++++--- 2 files changed, 24 insertions(+), 12 deletions(-) diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c index c34f077b38e3..d3b3f9038e64 100644 --- a/tools/firmware/hvmloader/util.c +++ b/tools/firmware/hvmloader/util.c @@ -267,15 +267,6 @@ memcmp(const void *s1, const void *s2, unsigned n) return 0; } -void -cpuid(uint32_t idx, uint32_t *eax, uint32_t *ebx, uint32_t *ecx, uint32_t *edx) -{ - asm volatile ( - "cpuid" - : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx) - : "0" (idx) ); -} - static const char hex_digits[] = "0123456789abcdef"; /* Write a two-character hex representation of 'byte' to digits[]. diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h index deb823a892ef..3ad7c4f6d6a2 100644 --- a/tools/firmware/hvmloader/util.h +++ b/tools/firmware/hvmloader/util.h @@ -184,9 +184,30 @@ int uart_exists(uint16_t uart_base); int lpt_exists(uint16_t lpt_base); int hpet_exists(unsigned long hpet_base); -/* Do cpuid instruction, with operation 'idx' */ -void cpuid(uint32_t idx, uint32_t *eax, uint32_t *ebx, - uint32_t *ecx, uint32_t *edx); +/* Some CPUID calls want 'count' to be placed in ecx */ +static inline void cpuid_count( + uint32_t op, + uint32_t count, + uint32_t *eax, + uint32_t *ebx, + uint32_t *ecx, + uint32_t *edx) +{ + asm volatile ( "cpuid" + : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx) + : "0" (op), "c" (count) ); +} + +/* Generic CPUID function (subleaf 0) */ +static inline void cpuid( + uint32_t leaf, + uint32_t *eax, + uint32_t *ebx, + uint32_t *ecx, + uint32_t *edx) +{ + cpuid_count(leaf, 0, eax, ebx, ecx, edx); +} /* Read the TSC register. 
*/ static inline uint64_t rdtsc(void)

From patchwork Wed Jun 26 16:28:29 2024
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13713174
dwFMJMmR2ve0lsOU0rAk9xhJXFMiJXXOvN5+PeE4M/X91W/p1iKWfSkZovX95lCFL49c 31eA== X-Gm-Message-State: AOJu0Yzv5XsUMKkjC7wygGWiKm7+GoiuSRb1iu+C79IxYg/IveH3oEJN 4ilsX//H3RMmaKCf3UIGeu0Ozrot/6dcZFEuiUd6StQbY9psQ/iSDeyuxy/V/Z8BHKsLX3tvcfm WQz0= X-Google-Smtp-Source: AGHT+IHCZu3RrV7SRUnu1c4M4htC2NGSpi5ANIL/1I0tyNvDVlF5ERgDnwZW3Gzmg/CNckHZIaIl6Q== X-Received: by 2002:a17:907:cbc2:b0:a72:7d5c:ace0 with SMTP id a640c23a62f3a-a727d5cae14mr293649566b.11.1719419321136; Wed, 26 Jun 2024 09:28:41 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Oleksii Kurochko Subject: [PATCH for 4.19 v4 02/10] x86/vlapic: Move lapic migration checks to the check hooks Date: Wed, 26 Jun 2024 17:28:29 +0100 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 While doing this, factor out checks common to architectural and hidden state. Signed-off-by: Alejandro Vallejo Reviewed-by: Roger Pau Monné --- This puts essential LAPIC information in the stream. It's technically a feature but it makes 4.19 guests a lot more future-proof. I think this should go on 4.19 v4: * Replaced BUG() with ASSERT_UNREACHABLE(), and allow ret -EINVAL on release. * Adjust printk() to be clearer * Assign lapic_check_common() outside the "if" condition. --- xen/arch/x86/hvm/vlapic.c | 85 ++++++++++++++++++++++++++------------- 1 file changed, 58 insertions(+), 27 deletions(-) diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c index 9cfc82666ae5..1a7bca5afd2f 100644 --- a/xen/arch/x86/hvm/vlapic.c +++ b/xen/arch/x86/hvm/vlapic.c @@ -1553,60 +1553,91 @@ static void lapic_load_fixup(struct vlapic *vlapic) v, vlapic->loaded.id, vlapic->loaded.ldr, good_ldr); } -static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h) -{ - unsigned int vcpuid = hvm_load_instance(h); - struct vcpu *v; - struct vlapic *s; +static int lapic_check_common(const struct domain *d, unsigned int vcpuid) +{ if ( !has_vlapic(d) ) return -ENODEV; /* Which vlapic to load? 
*/ - if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL ) + if ( !domain_vcpu(d, vcpuid) ) { - dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no apic%u\n", + dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no vCPU %u\n", d->domain_id, vcpuid); return -EINVAL; } - s = vcpu_vlapic(v); + + return 0; +} + +static int cf_check lapic_check_hidden(const struct domain *d, + hvm_domain_context_t *h) +{ + unsigned int vcpuid = hvm_load_instance(h); + struct hvm_hw_lapic s; + int rc = lapic_check_common(d, vcpuid); + + if ( rc ) + return rc; + + if ( hvm_load_entry_zeroextend(LAPIC, h, &s) != 0 ) + return -ENODATA; + + /* EN=0 with EXTD=1 is illegal */ + if ( (s.apic_base_msr & (APIC_BASE_ENABLE | APIC_BASE_EXTD)) == + APIC_BASE_EXTD ) + return -EINVAL; + + return 0; +} + +static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h) +{ + unsigned int vcpuid = hvm_load_instance(h); + struct vcpu *v = d->vcpu[vcpuid]; + struct vlapic *s = vcpu_vlapic(v); if ( hvm_load_entry_zeroextend(LAPIC, h, &s->hw) != 0 ) + { + ASSERT_UNREACHABLE(); return -EINVAL; + } s->loaded.hw = 1; if ( s->loaded.regs ) lapic_load_fixup(s); - if ( !(s->hw.apic_base_msr & APIC_BASE_ENABLE) && - unlikely(vlapic_x2apic_mode(s)) ) - return -EINVAL; - hvm_update_vlapic_mode(v); return 0; } -static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h) +static int cf_check lapic_check_regs(const struct domain *d, + hvm_domain_context_t *h) { unsigned int vcpuid = hvm_load_instance(h); - struct vcpu *v; - struct vlapic *s; + int rc; - if ( !has_vlapic(d) ) - return -ENODEV; + if ( (rc = lapic_check_common(d, vcpuid)) ) + return rc; - /* Which vlapic to load? */ - if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL ) - { - dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no apic%u\n", - d->domain_id, vcpuid); - return -EINVAL; - } - s = vcpu_vlapic(v); + if ( !hvm_get_entry(LAPIC_REGS, h) ) + return -ENODATA; + + return 0; +} + +static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h) +{ + unsigned int vcpuid = hvm_load_instance(h); + struct vcpu *v = d->vcpu[vcpuid]; + struct vlapic *s = vcpu_vlapic(v); if ( hvm_load_entry(LAPIC_REGS, h, s->regs) != 0 ) + { + ASSERT_UNREACHABLE(); return -EINVAL; + } s->loaded.id = vlapic_get_reg(s, APIC_ID); s->loaded.ldr = vlapic_get_reg(s, APIC_LDR); @@ -1623,9 +1654,9 @@ static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h) return 0; } -HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, +HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_check_hidden, lapic_load_hidden, 1, HVMSR_PER_VCPU); -HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, +HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_check_regs, lapic_load_regs, 1, HVMSR_PER_VCPU); int vlapic_init(struct vcpu *v) From patchwork Wed Jun 26 16:28:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13713172 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 72DA3C3065C for ; Wed, 26 Jun 2024 16:29:00 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.749271.1157325 (Exim 4.92) (envelope-from 
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Oleksii Kurochko
Subject: [PATCH for-4.19 v4 03/10] xen/x86: Add initial x2APIC ID to the per-vLAPIC save area
Date: Wed, 26 Jun 2024 17:28:30 +0100
Message-Id:
<5beadf9d7997ed2df1a7c28cc2f0c5583ca7f7a3.1719416329.git.alejandro.vallejo@cloud.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 This allows the initial x2APIC ID to be sent on the migration stream. This allows further changes to topology and APIC ID assignment without breaking existing hosts. Given the vlapic data is zero-extended on restore, fix up migrations from hosts without the field by setting it to the old convention if zero. The hardcoded mapping x2apic_id=2*vcpu_id is kept for the time being, but it's meant to be overriden by toolstack on a later patch with appropriate values. Signed-off-by: Alejandro Vallejo --- Same rationale as previous patch for inclusion in 4.19 Roger replied to v3 with an R-by for this patch. I didn't add it here because the patch has seen substantial changes and it's probably worth looking at again All changes are removals. In particular... v4: * Removed hooks into cpu policy update events. They are no longer relevant. * Remove the derivation (within Xen) of x2apic_id from vcpu_id via lib/x86. * Rearranged for toolstack to provide those on hvmcontext blobs on a later patch. This still works out because the default is the legacy scheme of apicid=vcpuid*2 --- xen/arch/x86/cpuid.c | 14 +++++--------- xen/arch/x86/hvm/vlapic.c | 22 ++++++++++++++++++++-- xen/arch/x86/include/asm/hvm/vlapic.h | 1 + xen/include/public/arch-x86/hvm/save.h | 2 ++ 4 files changed, 28 insertions(+), 11 deletions(-) diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c index a822e80c7ea7..7ee596ab66a4 100644 --- a/xen/arch/x86/cpuid.c +++ b/xen/arch/x86/cpuid.c @@ -139,10 +139,9 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf, const struct cpu_user_regs *regs; case 0x1: - /* TODO: Rework topology logic. */ res->b &= 0x00ffffffu; if ( is_hvm_domain(d) ) - res->b |= (v->vcpu_id * 2) << 24; + res->b |= vlapic_x2apic_id(vcpu_vlapic(v)) << 24; /* TODO: Rework vPMU control in terms of toolstack choices. */ if ( vpmu_available(v) && @@ -312,18 +311,15 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf, case 0xb: /* - * In principle, this leaf is Intel-only. In practice, it is tightly - * coupled with x2apic, and we offer an x2apic-capable APIC emulation - * to guests on AMD hardware as well. - * - * TODO: Rework topology logic. + * Don't expose topology information to PV guests. Exposed on HVM + * along with x2APIC because they are tightly coupled. */ - if ( p->basic.x2apic ) + if ( is_hvm_domain(d) && p->basic.x2apic ) { *(uint8_t *)&res->c = subleaf; /* Fix the x2APIC identifier. 
*/ - res->d = v->vcpu_id * 2; + res->d = vlapic_x2apic_id(vcpu_vlapic(v)); } break; diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c index 1a7bca5afd2f..b57e39d1c6dd 100644 --- a/xen/arch/x86/hvm/vlapic.c +++ b/xen/arch/x86/hvm/vlapic.c @@ -1072,7 +1072,7 @@ static uint32_t x2apic_ldr_from_id(uint32_t id) static void set_x2apic_id(struct vlapic *vlapic) { const struct vcpu *v = vlapic_vcpu(vlapic); - uint32_t apic_id = v->vcpu_id * 2; + uint32_t apic_id = vlapic->hw.x2apic_id; uint32_t apic_ldr = x2apic_ldr_from_id(apic_id); /* @@ -1452,7 +1452,7 @@ void vlapic_reset(struct vlapic *vlapic) if ( v->vcpu_id == 0 ) vlapic->hw.apic_base_msr |= APIC_BASE_BSP; - vlapic_set_reg(vlapic, APIC_ID, (v->vcpu_id * 2) << 24); + vlapic_set_reg(vlapic, APIC_ID, SET_xAPIC_ID(vlapic->hw.x2apic_id)); vlapic_do_init(vlapic); } @@ -1520,6 +1520,16 @@ static void lapic_load_fixup(struct vlapic *vlapic) const struct vcpu *v = vlapic_vcpu(vlapic); uint32_t good_ldr = x2apic_ldr_from_id(vlapic->loaded.id); + /* + * Loading record without hw.x2apic_id in the save stream, calculate using + * the traditional "vcpu_id * 2" relation. There's an implicit assumption + * that vCPU0 always has x2APIC0, which is true for the old relation, and + * still holds under the new x2APIC generation algorithm. While that case + * goes through the conditional it's benign because it still maps to zero. + */ + if ( !vlapic->hw.x2apic_id ) + vlapic->hw.x2apic_id = v->vcpu_id * 2; + /* Skip fixups on xAPIC mode, or if the x2APIC LDR is already correct */ if ( !vlapic_x2apic_mode(vlapic) || (vlapic->loaded.ldr == good_ldr) ) @@ -1588,6 +1598,13 @@ static int cf_check lapic_check_hidden(const struct domain *d, APIC_BASE_EXTD ) return -EINVAL; + /* + * Fail migrations from newer versions of Xen where + * rsvd_zero is interpreted as something else. + */ + if ( s.rsvd_zero ) + return -EINVAL; + return 0; } @@ -1672,6 +1689,7 @@ int vlapic_init(struct vcpu *v) } vlapic->pt.source = PTSRC_lapic; + vlapic->hw.x2apic_id = 2 * v->vcpu_id; vlapic->regs_page = alloc_domheap_page(v->domain, MEMF_no_owner); if ( !vlapic->regs_page ) diff --git a/xen/arch/x86/include/asm/hvm/vlapic.h b/xen/arch/x86/include/asm/hvm/vlapic.h index 2c4ff94ae7a8..85c4a236b9f6 100644 --- a/xen/arch/x86/include/asm/hvm/vlapic.h +++ b/xen/arch/x86/include/asm/hvm/vlapic.h @@ -44,6 +44,7 @@ #define vlapic_xapic_mode(vlapic) \ (!vlapic_hw_disabled(vlapic) && \ !((vlapic)->hw.apic_base_msr & APIC_BASE_EXTD)) +#define vlapic_x2apic_id(vlapic) ((vlapic)->hw.x2apic_id) /* * Generic APIC bitmap vector update & search routines. 
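For illustration, a minimal standalone sketch (not Xen code: the struct is abridged and the function name is made up) of why zero-extending the incoming LAPIC record keeps streams from older Xen loadable, and of the fixup described above: a record produced before this change simply stops before x2apic_id, so the zero-filled field is rewritten to the legacy vcpu_id * 2 convention.

#include <stdint.h>
#include <string.h>

/* Abridged stand-in for struct hvm_hw_lapic; only the new tail matters here. */
struct lapic_record {
    uint64_t tdt_msr;
    uint32_t x2apic_id;   /* new field */
    uint32_t rsvd_zero;   /* new field, must remain 0 */
};

/*
 * Illustrative only.  Old producers emit a record that ends before
 * x2apic_id; zero-extending the copy makes the new fields read as 0, and a
 * zero x2apic_id is then rewritten to the legacy vcpu_id * 2 convention.
 * vCPU0 maps to 0 under both schemes, so taking this path is benign for it.
 */
static void load_lapic_record(struct lapic_record *dst, const void *src,
                              size_t src_len, unsigned int vcpu_id)
{
    memset(dst, 0, sizeof(*dst));
    memcpy(dst, src, src_len < sizeof(*dst) ? src_len : sizeof(*dst));

    if ( !dst->x2apic_id )
        dst->x2apic_id = vcpu_id * 2;
}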
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 7ecacadde165..1c2ec669ffc9 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -394,6 +394,8 @@ struct hvm_hw_lapic {
     uint32_t disabled; /* VLAPIC_xx_DISABLED */
     uint32_t timer_divisor;
     uint64_t tdt_msr;
+    uint32_t x2apic_id;
+    uint32_t rsvd_zero;
 };
 DECLARE_HVM_SAVE_TYPE(LAPIC, 5, struct hvm_hw_lapic);

From patchwork Wed Jun 26 16:28:31 2024
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13713177
:subject:date:message-id:reply-to; bh=KUPCAuZy6+3qsiLL1TNf/Yu5F4mR3hLKr97Qs7ol+TI=; b=V32V9V3AmZ7L48IKG9vVlshPVe1mCZJEqefYf/Svj/yOj3Akul5RP6ktrecJBG96xg A+7DZ6znPjS4oCZguOjIT50dOaCqmyOOXavw9a5rJVHAipdxMdo5MXMPkzVxq9ODyC0D eQsky3C43a9M1yv31gPvRDuNRAhXyIAKJ/HUOJISE/bW6ROTxVXoiDAEPPaoC3qsH754 5vUPgXJD6wkdN868d85ei7pdpxal9Z9dcDG+y42i1yOGUOMIis8jXbwzATgA2QCdXqFU V2CtkHfJg4KayurJ2sd8vA7XikJEPwagpt9ROzl3cg/L+aWmCL3ZfjoBJ0A8s3wslWNw hR9g== X-Gm-Message-State: AOJu0YzvC2a/tYaUHvDJG80mf/PdWBb8zAb47hixB7QIhedJLOP0YrvR r1zTbVpn4unfrLSj/gXNvuWJksRJooMwVD3dg0jjs/vSR8NIQ5ade+tpTvfQkAK9E78NLtzZ/VN jpgU= X-Google-Smtp-Source: AGHT+IGnjomOOSm/zrdoLTB5WjM2JZZxash/dLqgpcWWloLOuyWpg0ImCKC47O7zdJLeKl0l5JWH0Q== X-Received: by 2002:a17:906:6057:b0:a71:bf5a:3418 with SMTP id a640c23a62f3a-a7245c809demr827881466b.53.1719419323448; Wed, 26 Jun 2024 09:28:43 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Anthony PERARD Subject: [PATCH v4 04/10] tools/hvmloader: Retrieve (x2)APIC IDs from the APs themselves Date: Wed, 26 Jun 2024 17:28:31 +0100 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Make it so the APs expose their own APIC IDs in a LUT. We can use that LUT to populate the MADT, decoupling the algorithm that relates CPU IDs and APIC IDs from hvmloader. While at this also remove ap_callin, as writing the APIC ID may serve the same purpose. Signed-off-by: Alejandro Vallejo --- v4: * Removed bogus ! in ASSERT() statement introduced in v3. --- tools/firmware/hvmloader/config.h | 6 ++- tools/firmware/hvmloader/hvmloader.c | 4 +- tools/firmware/hvmloader/smp.c | 54 ++++++++++++++++++++----- tools/include/xen-tools/common-macros.h | 5 +++ 4 files changed, 56 insertions(+), 13 deletions(-) diff --git a/tools/firmware/hvmloader/config.h b/tools/firmware/hvmloader/config.h index cd716bf39245..213ac1f28e17 100644 --- a/tools/firmware/hvmloader/config.h +++ b/tools/firmware/hvmloader/config.h @@ -4,6 +4,8 @@ #include #include +#include + enum virtual_vga { VGA_none, VGA_std, VGA_cirrus, VGA_pt }; extern enum virtual_vga virtual_vga; @@ -48,8 +50,10 @@ extern uint8_t ioapic_version; #define IOAPIC_ID 0x01 +extern uint32_t CPU_TO_X2APICID[HVM_MAX_VCPUS]; + #define LAPIC_BASE_ADDRESS 0xfee00000 -#define LAPIC_ID(vcpu_id) ((vcpu_id) * 2) +#define LAPIC_ID(vcpu_id) (CPU_TO_X2APICID[(vcpu_id)]) #define PCI_ISA_DEVFN 0x08 /* dev 1, fn 0 */ #define PCI_ISA_IRQ_MASK 0x0c20U /* ISA IRQs 5,10,11 are PCI connected */ diff --git a/tools/firmware/hvmloader/hvmloader.c b/tools/firmware/hvmloader/hvmloader.c index f8af88fabf24..5c02e8fc226a 100644 --- a/tools/firmware/hvmloader/hvmloader.c +++ b/tools/firmware/hvmloader/hvmloader.c @@ -341,11 +341,11 @@ int main(void) printf("CPU speed is %u MHz\n", get_cpu_mhz()); + smp_initialise(); + apic_setup(); pci_setup(); - smp_initialise(); - perform_tests(); if ( bios->bios_info_setup ) diff --git a/tools/firmware/hvmloader/smp.c b/tools/firmware/hvmloader/smp.c index 5d46eee1c5f4..43eb17e4e3be 100644 --- a/tools/firmware/hvmloader/smp.c +++ b/tools/firmware/hvmloader/smp.c @@ -29,7 +29,34 @@ #include -static int ap_callin; +/** + * Lookup table of x2APIC IDs. + * + * Each entry is populated its respective CPU as they come online. This is required + * for generating the MADT with minimal assumptions about ID relationships. + */ +uint32_t CPU_TO_X2APICID[HVM_MAX_VCPUS]; + +/** Tristate about x2apic being supported. 
-1=unknown */ +static int has_x2apic = -1; + +static uint32_t read_apic_id(void) +{ + uint32_t apic_id; + + if ( has_x2apic ) + cpuid(0xb, NULL, NULL, NULL, &apic_id); + else + { + cpuid(1, NULL, &apic_id, NULL, NULL); + apic_id >>= 24; + } + + /* Never called by cpu0, so should never return 0 */ + ASSERT(apic_id); + + return apic_id; +} static void __attribute__((regparm(1))) cpu_setup(unsigned int cpu) { @@ -37,13 +64,17 @@ static void __attribute__((regparm(1))) cpu_setup(unsigned int cpu) cacheattr_init(); printf("done.\n"); - if ( !cpu ) /* Used on the BSP too */ + /* The BSP exits early because its APIC ID is known to be zero */ + if ( !cpu ) return; wmb(); - ap_callin = 1; + ACCESS_ONCE(CPU_TO_X2APICID[cpu]) = read_apic_id(); - /* After this point, the BSP will shut us down. */ + /* + * After this point the BSP will shut us down. A write to + * CPU_TO_X2APICID[cpu] signals the BSP to bring down `cpu`. + */ for ( ;; ) asm volatile ( "hlt" ); @@ -54,10 +85,6 @@ static void boot_cpu(unsigned int cpu) static uint8_t ap_stack[PAGE_SIZE] __attribute__ ((aligned (16))); static struct vcpu_hvm_context ap; - /* Initialise shared variables. */ - ap_callin = 0; - wmb(); - /* Wake up the secondary processor */ ap = (struct vcpu_hvm_context) { .mode = VCPU_HVM_MODE_32B, @@ -90,10 +117,11 @@ static void boot_cpu(unsigned int cpu) BUG(); /* - * Wait for the secondary processor to complete initialisation. + * Wait for the secondary processor to complete initialisation, + * which is signaled by its x2APIC ID being written to the LUT. * Do not touch shared resources meanwhile. */ - while ( !ap_callin ) + while ( !ACCESS_ONCE(CPU_TO_X2APICID[cpu]) ) cpu_relax(); /* Take the secondary processor offline. */ @@ -104,6 +132,12 @@ static void boot_cpu(unsigned int cpu) void smp_initialise(void) { unsigned int i, nr_cpus = hvm_info->nr_vcpus; + uint32_t ecx; + + cpuid(1, NULL, NULL, &ecx, NULL); + has_x2apic = (ecx >> 21) & 1; + if ( has_x2apic ) + printf("x2APIC supported\n"); printf("Multiprocessor initialisation:\n"); cpu_setup(0); diff --git a/tools/include/xen-tools/common-macros.h b/tools/include/xen-tools/common-macros.h index 60912225cb7a..336c6309d96e 100644 --- a/tools/include/xen-tools/common-macros.h +++ b/tools/include/xen-tools/common-macros.h @@ -108,4 +108,9 @@ #define get_unaligned(ptr) get_unaligned_t(typeof(*(ptr)), ptr) #define put_unaligned(val, ptr) put_unaligned_t(typeof(*(ptr)), val, ptr) +#define __ACCESS_ONCE(x) ({ \ + (void)(typeof(x))0; /* Scalar typecheck. 
*/ \
+    (volatile typeof(x) *)&(x); })
+#define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))
+
 #endif /* __XEN_TOOLS_COMMON_MACROS__ */

From patchwork Wed Jun 26 16:28:32 2024
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13713167
4bzbX+o6uB207uX99sfF1bjKOhXJAWQHrb9hDnA8PMNNVdxdqKGatbemMRhllpKa6Y0o d8Oc1sCF44ve2H7eX1dGOOTu6X9K8oppD0S1SEAsv/UljFPcV0rept4fP9myDmomNPPm 1mzg== X-Gm-Message-State: AOJu0Yxzjt+nex8hKcnaQpcaE1Et1mqKmX2ex0vslYC5KvYzVJ8Tu4T6 AWGkJXIFqeab6XS+5v2S+Blx3vO2azNjk0PuV4Z5omnab6cDaMy4Z3ji74u6QapRurtTRIJPMt9 OtsI= X-Google-Smtp-Source: AGHT+IH358YZg6Ktj95b4hw3RChALxCQdzhV/QFgoEwYMEa0PDV0+0MqjNTKrlGlEzuXG5+dAy4sWg== X-Received: by 2002:a17:907:c308:b0:a72:603f:1ea2 with SMTP id a640c23a62f3a-a72603f2118mr814836066b.62.1719419324394; Wed, 26 Jun 2024 09:28:44 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= Subject: [PATCH v4 05/10] xen/x86: Add supporting code for uploading LAPIC contexts during domain create Date: Wed, 26 Jun 2024 17:28:32 +0100 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 This patch is a precondition for a later patch in which toolstack uses HVM contexts to upload LAPIC data to a newly constructed domain. If toolstack were to upload LAPIC contexts as part of domain creation as-is it would encounter a problem were the architectural state does not reflect the APIC ID in the hidden state. This patch ensures updates to the hidden state trigger an update in the architectural registers so the APIC ID in both is consistent. Signed-off-by: Alejandro Vallejo --- v4: * New patch --- xen/arch/x86/hvm/vlapic.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c index b57e39d1c6dd..ebcf74711a13 100644 --- a/xen/arch/x86/hvm/vlapic.c +++ b/xen/arch/x86/hvm/vlapic.c @@ -1622,7 +1622,27 @@ static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h) s->loaded.hw = 1; if ( s->loaded.regs ) + { + /* + * We already processed architectural regs in lapic_load_regs(), so + * this must be a migration. Fix up inconsistencies from any older Xen. + */ lapic_load_fixup(s); + } + else + { + /* + * We haven't seen architectural regs so this could be a migration or a + * plain domain create. In the domain create case it's fine to modify + * the architectural state to align it to the APIC ID that was just + * uploaded and in the migrate case it doesn't matter because the + * architectural state will be replaced by the LAPIC_REGS ctx later on. 
+ */ + if ( vlapic_x2apic_mode(s) ) + set_x2apic_id(s); + else + vlapic_set_reg(s, APIC_ID, SET_xAPIC_ID(s->hw.x2apic_id)); + } hvm_update_vlapic_mode(v); From patchwork Wed Jun 26 16:28:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13713169 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 77D26C41513 for ; Wed, 26 Jun 2024 16:28:59 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.749274.1157354 (Exim 4.92) (envelope-from ) id 1sMVVp-0007IS-AL; Wed, 26 Jun 2024 16:28:49 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 749274.1157354; Wed, 26 Jun 2024 16:28:49 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVp-0007Gv-5l; Wed, 26 Jun 2024 16:28:49 +0000 Received: by outflank-mailman (input) for mailman id 749274; Wed, 26 Jun 2024 16:28:47 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVn-0005pK-QZ for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:47 +0000 Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com [2a00:1450:4864:20::536]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 28ede50e-33d9-11ef-b4bb-af5377834399; Wed, 26 Jun 2024 18:28:46 +0200 (CEST) Received: by mail-ed1-x536.google.com with SMTP id 4fb4d7f45d1cf-57d1782679fso678725a12.0 for ; Wed, 26 Jun 2024 09:28:46 -0700 (PDT) Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 26 Jun 2024 09:28:44 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 28ede50e-33d9-11ef-b4bb-af5377834399 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1719419325; x=1720024125; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=qRrrYvX+yDttN2Gg1qTkgVmajhbWR2NVsVybiYM7F5A=; b=DWQdEei9NajR8t828BZaK624uBUwS+bF4ZJsH+bScdVMgcVRu5WJ5PKXgUDRZYSw+W tU0KpEFvfBXemiGKGzVSFivApjOkk5I2XHaGPkAZJ/tt1oGQce+z0osWpUHLwjXL3G12 Rx3DtFin2FACzpN2mgqZYpOkXI8ifOND+TtVo= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1719419325; x=1720024125; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=qRrrYvX+yDttN2Gg1qTkgVmajhbWR2NVsVybiYM7F5A=; b=SZ7EzqBXya2sGW/v4Crw8yA2dT4jj3wfdlSvAlYreAJB5a1y8ALjJC0Vsveih2lYai fHgxyIGlZnq1TSdvIyZcaeku7xTxw/8J3XbieMPzo2NdOGnfCF62ZjmflCTWbX3/PDBd gZWvsghivRPjFqfxaNq2GWNcUefCkKNxp+ud/iKdZHc14fiCmNRvKWuzoG0xg8kqlT68 
fxvO2MpKV+P/ERVn96rAy7ffrgd8ay2Q+FjiequLs/ykUtY70L8TdW68+ojUHi8YoSvI DI6icUe0YyhBN7m1+BR9KYzEQVFIPg2a2OAJixLoDoVolAqjYlR7jfFrcqOmhkIBkoUV grEw== X-Gm-Message-State: AOJu0YyKFYzENZvewJXN2uUO88SDTDjdmKYPHXUdzfRYHlyV+Za7pak5 h6jfb+/IM2b245vxsEMwQXwJxlhE9BesIAQCG5D84IcMrQpe2beyygvpbH6IEoexmygUKoZYX6c NftY= X-Google-Smtp-Source: AGHT+IGIMJSmFVWpKQ8YVYg3rvYvGdWCd5kBCwEUCLXVW0MyhCXi56UMacCQ9aoBAV6VOB6qERmO2g== X-Received: by 2002:a17:907:c81b:b0:a72:4bf2:e16 with SMTP id a640c23a62f3a-a727f65e329mr261218066b.16.1719419325348; Wed, 26 Jun 2024 09:28:45 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Anthony PERARD , Juergen Gross Subject: [PATCH v4 06/10] tools/libguest: Make setting MTRR registers unconditional Date: Wed, 26 Jun 2024 17:28:33 +0100 Message-Id: <2c55d486bb0c54a3e813abc66d32f321edd28b81.1719416329.git.alejandro.vallejo@cloud.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 This greatly simplifies a later patch that makes use of HVM contexts to upload LAPIC data. The idea is to reuse MTRR setting procedure to avoid code duplication. It's currently only used for PVH, but there's no real reason to overcomplicate the toolstack preventing them being set for HVM too when hvmloader will override them anyway. While at it, add a missing "goto out" to what was the error condition in the loop. Signed-off-by: Alejandro Vallejo --- v4: * New patch --- tools/libs/guest/xg_dom_x86.c | 83 ++++++++++++++++++----------------- 1 file changed, 43 insertions(+), 40 deletions(-) diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c index cba01384ae75..82ea3e2aab0b 100644 --- a/tools/libs/guest/xg_dom_x86.c +++ b/tools/libs/guest/xg_dom_x86.c @@ -989,6 +989,7 @@ const static void *hvm_get_save_record(const void *ctx, unsigned int type, static int vcpu_hvm(struct xc_dom_image *dom) { + /* Initialises the BSP */ struct { struct hvm_save_descriptor header_d; HVM_SAVE_TYPE(HEADER) header; @@ -997,6 +998,18 @@ static int vcpu_hvm(struct xc_dom_image *dom) struct hvm_save_descriptor end_d; HVM_SAVE_TYPE(END) end; } bsp_ctx; + /* Initialises APICs and MTRRs of every vCPU */ + struct { + struct hvm_save_descriptor header_d; + HVM_SAVE_TYPE(HEADER) header; + struct hvm_save_descriptor mtrr_d; + HVM_SAVE_TYPE(MTRR) mtrr; + struct hvm_save_descriptor end_d; + HVM_SAVE_TYPE(END) end; + } vcpu_ctx; + /* Context from full_ctx */ + const HVM_SAVE_TYPE(MTRR) *mtrr_record; + /* Raw context as taken from Xen */ uint8_t *full_ctx = NULL; int rc; @@ -1083,51 +1096,41 @@ static int vcpu_hvm(struct xc_dom_image *dom) bsp_ctx.end_d.instance = 0; bsp_ctx.end_d.length = HVM_SAVE_LENGTH(END); - /* TODO: maybe this should be a firmware option instead? */ - if ( !dom->device_model ) + /* TODO: maybe setting MTRRs should be a firmware option instead? 
*/ + mtrr_record = hvm_get_save_record(full_ctx, HVM_SAVE_CODE(MTRR), 0); + + if ( !mtrr_record) { - struct { - struct hvm_save_descriptor header_d; - HVM_SAVE_TYPE(HEADER) header; - struct hvm_save_descriptor mtrr_d; - HVM_SAVE_TYPE(MTRR) mtrr; - struct hvm_save_descriptor end_d; - HVM_SAVE_TYPE(END) end; - } mtrr = { - .header_d = bsp_ctx.header_d, - .header = bsp_ctx.header, - .mtrr_d.typecode = HVM_SAVE_CODE(MTRR), - .mtrr_d.length = HVM_SAVE_LENGTH(MTRR), - .end_d = bsp_ctx.end_d, - .end = bsp_ctx.end, - }; - const HVM_SAVE_TYPE(MTRR) *mtrr_record = - hvm_get_save_record(full_ctx, HVM_SAVE_CODE(MTRR), 0); - unsigned int i; - - if ( !mtrr_record ) - { - xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, - "%s: unable to get MTRR save record", __func__); - goto out; - } + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, + "%s: unable to get MTRR save record", __func__); + goto out; + } - memcpy(&mtrr.mtrr, mtrr_record, sizeof(mtrr.mtrr)); + vcpu_ctx.header_d = bsp_ctx.header_d; + vcpu_ctx.header = bsp_ctx.header; + vcpu_ctx.mtrr_d.typecode = HVM_SAVE_CODE(MTRR); + vcpu_ctx.mtrr_d.length = HVM_SAVE_LENGTH(MTRR); + vcpu_ctx.mtrr = *mtrr_record; + vcpu_ctx.end_d = bsp_ctx.end_d; + vcpu_ctx.end = bsp_ctx.end; - /* - * Enable MTRR, set default type to WB. - * TODO: add MMIO areas as UC when passthrough is supported. - */ - mtrr.mtrr.msr_mtrr_def_type = MTRR_TYPE_WRBACK | MTRR_DEF_TYPE_ENABLE; + /* + * Enable MTRR, set default type to WB. + * TODO: add MMIO areas as UC when passthrough is supported in PVH + */ + vcpu_ctx.mtrr.msr_mtrr_def_type = MTRR_TYPE_WRBACK | MTRR_DEF_TYPE_ENABLE; - for ( i = 0; i < dom->max_vcpus; i++ ) + for ( unsigned int i = 0; i < dom->max_vcpus; i++ ) + { + vcpu_ctx.mtrr_d.instance = i; + + rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid, + (uint8_t *)&vcpu_ctx, sizeof(vcpu_ctx)); + if ( rc != 0 ) { - mtrr.mtrr_d.instance = i; - rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid, - (uint8_t *)&mtrr, sizeof(mtrr)); - if ( rc != 0 ) - xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, - "%s: SETHVMCONTEXT failed (rc=%d)", __func__, rc); + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, + "%s: SETHVMCONTEXT failed (rc=%d)", __func__, rc); + goto out; } } From patchwork Wed Jun 26 16:28:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13713168 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 48FB8C27C4F for ; Wed, 26 Jun 2024 16:28:59 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.749275.1157359 (Exim 4.92) (envelope-from ) id 1sMVVp-0007OU-O9; Wed, 26 Jun 2024 16:28:49 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 749275.1157359; Wed, 26 Jun 2024 16:28:49 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVp-0007N2-IB; Wed, 26 Jun 2024 16:28:49 +0000 Received: by outflank-mailman (input) for mailman id 749275; Wed, 26 Jun 2024 16:28:48 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 
From: Alejandro Vallejo
To: Xen-devel
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v4 07/10] xen/lib: Add topology generator for x86
Date: Wed, 26 Jun 2024 17:28:34 +0100
Message-Id: <5ab2cb62745bca462ab3768ea1eb826d2b6e2c76.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To:
References:
MIME-Version: 1.0

Add a helper to populate topology leaves in the cpu policy from threads/core and cores/package counts. It's unit-tested in test-cpu-policy.c, but it's not connected to the rest of the code yet.

Adds the ASSERT() macro to xen/lib/x86/private.h, as it was missing.

Signed-off-by: Alejandro Vallejo
---
v4:
* v1->v2 introduced a bug. lppp must be MIN(0xff, threads_per_pkg).
* Add missing MIN() when setting p->extd.nc (should've been done in v2) --- tools/tests/cpu-policy/test-cpu-policy.c | 133 +++++++++++++++++++++++ xen/include/xen/lib/x86/cpu-policy.h | 16 +++ xen/lib/x86/policy.c | 88 +++++++++++++++ xen/lib/x86/private.h | 4 + 4 files changed, 241 insertions(+) diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c index 301df2c00285..849d7cebaa7c 100644 --- a/tools/tests/cpu-policy/test-cpu-policy.c +++ b/tools/tests/cpu-policy/test-cpu-policy.c @@ -650,6 +650,137 @@ static void test_is_compatible_failure(void) } } +static void test_topo_from_parts(void) +{ + static const struct test { + unsigned int threads_per_core; + unsigned int cores_per_pkg; + struct cpu_policy policy; + } tests[] = { + { + .threads_per_core = 3, .cores_per_pkg = 1, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, }, + { .nr_logical = 1, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 1, .cores_per_pkg = 3, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, }, + { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 7, .cores_per_pkg = 5, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, }, + { .nr_logical = 5, .level = 1, .type = 2, .id_shift = 6, }, + }, + }, + }, + { + .threads_per_core = 2, .cores_per_pkg = 128, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, }, + { .nr_logical = 128, .level = 1, .type = 2, + .id_shift = 8, }, + }, + }, + }, + { + .threads_per_core = 3, .cores_per_pkg = 1, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, }, + { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 1, .cores_per_pkg = 3, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, }, + { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 7, .cores_per_pkg = 5, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, }, + { .nr_logical = 35, .level = 1, .type = 2, .id_shift = 6, }, + }, + }, + }, + { + .threads_per_core = 2, .cores_per_pkg = 128, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, }, + { .nr_logical = 256, .level = 1, .type = 2, + .id_shift = 8, }, + }, + }, + }, + }; + + printf("Testing topology synthesis from parts:\n"); + + for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i ) + { + const struct test *t = &tests[i]; + struct cpu_policy actual = { .x86_vendor = t->policy.x86_vendor }; + int rc = x86_topo_from_parts(&actual, t->threads_per_core, + t->cores_per_pkg); + + if ( rc || memcmp(&actual.topo, &t->policy.topo, sizeof(actual.topo)) ) + { +#define TOPO(n, f) t->policy.topo.subleaf[(n)].f, actual.topo.subleaf[(n)].f + fail("FAIL[%d] - '%s %u t/c, %u c/p'\n", + rc, + x86_cpuid_vendor_to_str(t->policy.x86_vendor), + t->threads_per_core, t->cores_per_pkg); + printf(" subleaf=%u expected_n=%u actual_n=%u\n" + " expected_lvl=%u actual_lvl=%u\n" + " expected_type=%u 
actual_type=%u\n" + " expected_shift=%u actual_shift=%u\n", + 0, + TOPO(0, nr_logical), + TOPO(0, level), + TOPO(0, type), + TOPO(0, id_shift)); + + printf(" subleaf=%u expected_n=%u actual_n=%u\n" + " expected_lvl=%u actual_lvl=%u\n" + " expected_type=%u actual_type=%u\n" + " expected_shift=%u actual_shift=%u\n", + 1, + TOPO(1, nr_logical), + TOPO(1, level), + TOPO(1, type), + TOPO(1, id_shift)); +#undef TOPO + } + } +} + int main(int argc, char **argv) { printf("CPU Policy unit tests\n"); @@ -667,6 +798,8 @@ int main(int argc, char **argv) test_is_compatible_success(); test_is_compatible_failure(); + test_topo_from_parts(); + if ( nr_failures ) printf("Done: %u failures\n", nr_failures); else diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h index d26012c6da78..79fdf9045a1b 100644 --- a/xen/include/xen/lib/x86/cpu-policy.h +++ b/xen/include/xen/lib/x86/cpu-policy.h @@ -542,6 +542,22 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host, const struct cpu_policy *guest, struct cpu_policy_errors *err); +/** + * Synthesise topology information in `p` given high-level constraints + * + * Topology is given in various fields accross several leaves, some of + * which are vendor-specific. This function uses the policy itself to + * derive such leaves from threads/core and cores/package. + * + * @param p CPU policy of the domain. + * @param threads_per_core threads/core. Doesn't need to be a power of 2. + * @param cores_per_package cores/package. Doesn't need to be a power of 2. + * @return 0 on success; -errno on failure + */ +int x86_topo_from_parts(struct cpu_policy *p, + unsigned int threads_per_core, + unsigned int cores_per_pkg); + #endif /* !XEN_LIB_X86_POLICIES_H */ /* diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c index f033d22785be..72b67b44a893 100644 --- a/xen/lib/x86/policy.c +++ b/xen/lib/x86/policy.c @@ -2,6 +2,94 @@ #include +static unsigned int order(unsigned int n) +{ + ASSERT(n); /* clz(0) is UB */ + + return 8 * sizeof(n) - __builtin_clz(n); +} + +int x86_topo_from_parts(struct cpu_policy *p, + unsigned int threads_per_core, + unsigned int cores_per_pkg) +{ + unsigned int threads_per_pkg = threads_per_core * cores_per_pkg; + unsigned int apic_id_size; + + if ( !p || !threads_per_core || !cores_per_pkg ) + return -EINVAL; + + p->basic.max_leaf = MAX(0xb, p->basic.max_leaf); + + memset(p->topo.raw, 0, sizeof(p->topo.raw)); + + /* thread level */ + p->topo.subleaf[0].nr_logical = threads_per_core; + p->topo.subleaf[0].id_shift = 0; + p->topo.subleaf[0].level = 0; + p->topo.subleaf[0].type = 1; + if ( threads_per_core > 1 ) + p->topo.subleaf[0].id_shift = order(threads_per_core - 1); + + /* core level */ + p->topo.subleaf[1].nr_logical = cores_per_pkg; + if ( p->x86_vendor == X86_VENDOR_INTEL ) + p->topo.subleaf[1].nr_logical = threads_per_pkg; + p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift; + p->topo.subleaf[1].level = 1; + p->topo.subleaf[1].type = 2; + if ( cores_per_pkg > 1 ) + p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1); + + apic_id_size = p->topo.subleaf[1].id_shift; + + /* + * Contrary to what the name might seem to imply. HTT is an enabler for + * SMP and there's no harm in setting it even with a single vCPU. 
+ */ + p->basic.htt = true; + p->basic.lppp = MIN(0xff, threads_per_pkg); + + switch ( p->x86_vendor ) + { + case X86_VENDOR_INTEL: { + struct cpuid_cache_leaf *sl = p->cache.subleaf; + + for ( size_t i = 0; sl->type && + i < ARRAY_SIZE(p->cache.raw); i++, sl++ ) + { + sl->cores_per_package = cores_per_pkg - 1; + sl->threads_per_cache = threads_per_core - 1; + if ( sl->type == 3 /* unified cache */ ) + sl->threads_per_cache = threads_per_pkg - 1; + } + break; + } + + case X86_VENDOR_AMD: + case X86_VENDOR_HYGON: + /* Expose p->basic.lppp */ + p->extd.cmp_legacy = true; + + /* Clip NC to the maximum value it can hold */ + p->extd.nc = MIN(0xff, threads_per_pkg - 1); + + /* TODO: Expose leaf e1E */ + p->extd.topoext = false; + + /* + * Clip APIC ID to 8 bits, as that's what high core-count machines do. + * + * That's what AMD EPYC 9654 does with >256 CPUs. + */ + p->extd.apic_id_size = MIN(8, apic_id_size); + + break; + } + + return 0; +} + int x86_cpu_policies_are_compatible(const struct cpu_policy *host, const struct cpu_policy *guest, struct cpu_policy_errors *err) diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h index 60bb82a400b7..2ec9dbee33c2 100644 --- a/xen/lib/x86/private.h +++ b/xen/lib/x86/private.h @@ -4,6 +4,7 @@ #ifdef __XEN__ #include +#include #include #include #include @@ -17,6 +18,7 @@ #else +#include #include #include #include @@ -28,6 +30,8 @@ #include +#define ASSERT(x) assert(x) + static inline bool test_bit(unsigned int bit, const void *vaddr) { const char *addr = vaddr; From patchwork Wed Jun 26 16:28:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13713175 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 8D4FFC3065B for ; Wed, 26 Jun 2024 16:29:00 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.749276.1157374 (Exim 4.92) (envelope-from ) id 1sMVVr-0007q0-Aw; Wed, 26 Jun 2024 16:28:51 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 749276.1157374; Wed, 26 Jun 2024 16:28:51 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVr-0007pH-6b; Wed, 26 Jun 2024 16:28:51 +0000 Received: by outflank-mailman (input) for mailman id 749276; Wed, 26 Jun 2024 16:28:50 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVp-0005pK-Sm for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:49 +0000 Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com [2a00:1450:4864:20::12d]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 2a480a84-33d9-11ef-b4bb-af5377834399; Wed, 26 Jun 2024 18:28:48 +0200 (CEST) Received: by mail-lf1-x12d.google.com with SMTP id 2adb3069b0e04-52ce9ba0cedso4525241e87.2 for ; Wed, 26 Jun 2024 09:28:48 -0700 (PDT) Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 
bits=256/256); Wed, 26 Jun 2024 09:28:47 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 2a480a84-33d9-11ef-b4bb-af5377834399 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1719419327; x=1720024127; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=lYHRDHK5UDpTmTiIq7osG/ZblShK9KE6SHig+ML+kuw=; b=GBrmuotwVwjrUTe09bHDOlqWVGDWDmSJlfHT8uZVpuMKQIfXggwkZ7DldGGlbb1qLk hheWaXHf5NOp0MPspObzqxsJyB9aI0ocJJvciccQo5qpmuVbqU7JDYTCp4KUFeoLqWn0 Z+CRjREoe5tqL8PUfK4BLGVG9glDZTAFC10iQ= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1719419327; x=1720024127; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lYHRDHK5UDpTmTiIq7osG/ZblShK9KE6SHig+ML+kuw=; b=VMqzYsoWX1cmWAhGfrvUGu0BzVZbcAjXGBgW86hb1Rho0IGxQ8+4AaSm7hHPf9V9hr BFwKISl3PcocnyFYpRMmmLvJEGjqH/3xSQSkZsYa6qHP3P4T93gbL2AGB8z7Kmz3r69o DG+Km8uOonsr/DF62T3BfNfEETzMXLfEii39jfLWkoZhO0eY9L9v+NM3Sy0t7PA87Ie7 YSteMNyzipxEP1Cetqd46nQix8ANsKNKANtmK9dp9RvahbMxNAx3lClkGjc5G6dg9ji0 Ou8FFhzzECy0+IPJWG4CSUV6E7bUzrKfmaUxKpWf98We//Y/58t+ruFLUBzUrMgS8HTN bdXQ== X-Gm-Message-State: AOJu0YwhMcc8bwQXVXw6PD1Cws0MFCmtTknjAp0uKbUgjP9JURFGvJ/U 2ygWwTtRcdKqNwUizRY8FXk8vZEee37KhLkoiOKR0wlr317G1bh02ZXoWdme00nDzlAfKDq+cgY n4Zo= X-Google-Smtp-Source: AGHT+IG3ALNl7GrDyAkGeat0PBA151FCdB5bzNb5KBoFoIZgPdmSqsTu6xRbFtJlJR/ERWDh3LdMSQ== X-Received: by 2002:ac2:4c8c:0:b0:52c:9ae0:beed with SMTP id 2adb3069b0e04-52ce18526ecmr8127045e87.52.1719419327515; Wed, 26 Jun 2024 09:28:47 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Anthony PERARD Subject: [PATCH v4 08/10] xen/x86: Derive topologically correct x2APIC IDs from the policy Date: Wed, 26 Jun 2024 17:28:35 +0100 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Implements the helper for mapping vcpu_id to x2apic_id given a valid topology in a policy. The algo is written with the intention of extending it to leaves 0x1f and extended 0x26 in the future. Toolstack doesn't set leaf 0xb and the HVM default policy has it cleared, so the leaf is not implemented. In that case, the new helper just returns the legacy mapping. Signed-off-by: Alejandro Vallejo --- v2->v4 (v3 was not reviewed): * Rewrite eXX notation for CPUID leaves as "extended XX" * Newlines and linewraps * In the unit-test, reduce the scope of `policy` * In the unit-test, fail if topology generation fails. 
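[Editor's illustration, not part of the patch: a minimal, standalone sketch of the mixed-radix decomposition that x86_x2apic_id_from_vcpu_id() performs. Here the radices and id_shifts are hard-coded rather than read from leaf 0xb; the values mirror the "3 threads/core, 8 cores/package" expectations in the unit test added below.]

#include <stdint.h>
#include <stdio.h>

static uint32_t sketch_x2apic_id(uint32_t vcpu_id,
                                 uint32_t threads_per_core, uint32_t thread_shift,
                                 uint32_t cores_per_pkg, uint32_t core_shift)
{
    uint32_t x2apic_id;

    /* Thread index within its core occupies the lowest bits. */
    x2apic_id = vcpu_id % threads_per_core;
    vcpu_id /= threads_per_core;

    /* Core index within its package starts at the thread level's id_shift. */
    x2apic_id |= (vcpu_id % cores_per_pkg) << thread_shift;
    vcpu_id /= cores_per_pkg;

    /* Whatever remains is the package index, above the core level's id_shift. */
    return (vcpu_id << core_shift) | x2apic_id;
}

int main(void)
{
    /* 3 t/c and 8 c/p give id_shifts of 2 and 5, so vCPU 35 decomposes as
     * (35 % 3) | (((35 / 3) % 8) << 2) | ((35 / 24) << 5) == 46. */
    printf("%u\n", sketch_x2apic_id(35, 3, 2, 8, 5));

    return 0;
}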
--- tools/tests/cpu-policy/test-cpu-policy.c | 68 +++++++++++++++++++++ xen/include/xen/lib/x86/cpu-policy.h | 11 ++++ xen/lib/x86/policy.c | 76 ++++++++++++++++++++++++ 3 files changed, 155 insertions(+) diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c index 849d7cebaa7c..e5f9b8f7ee39 100644 --- a/tools/tests/cpu-policy/test-cpu-policy.c +++ b/tools/tests/cpu-policy/test-cpu-policy.c @@ -781,6 +781,73 @@ static void test_topo_from_parts(void) } } +static void test_x2apic_id_from_vcpu_id_success(void) +{ + static const struct test { + unsigned int vcpu_id; + unsigned int threads_per_core; + unsigned int cores_per_pkg; + uint32_t x2apic_id; + uint8_t x86_vendor; + } tests[] = { + { + .vcpu_id = 3, .threads_per_core = 3, .cores_per_pkg = 8, + .x2apic_id = 1 << 2, + }, + { + .vcpu_id = 6, .threads_per_core = 3, .cores_per_pkg = 8, + .x2apic_id = 2 << 2, + }, + { + .vcpu_id = 24, .threads_per_core = 3, .cores_per_pkg = 8, + .x2apic_id = 1 << 5, + }, + { + .vcpu_id = 35, .threads_per_core = 3, .cores_per_pkg = 8, + .x2apic_id = (35 % 3) | (((35 / 3) % 8) << 2) | ((35 / 24) << 5), + }, + { + .vcpu_id = 96, .threads_per_core = 7, .cores_per_pkg = 3, + .x2apic_id = (96 % 7) | (((96 / 7) % 3) << 3) | ((96 / 21) << 5), + }, + }; + + const uint8_t vendors[] = { + X86_VENDOR_INTEL, + X86_VENDOR_AMD, + X86_VENDOR_CENTAUR, + X86_VENDOR_SHANGHAI, + X86_VENDOR_HYGON, + }; + + printf("Testing x2apic id from vcpu id success:\n"); + + /* Perform the test run on every vendor we know about */ + for ( size_t i = 0; i < ARRAY_SIZE(vendors); ++i ) + { + for ( size_t j = 0; j < ARRAY_SIZE(tests); ++j ) + { + struct cpu_policy policy = { .x86_vendor = vendors[i] }; + const struct test *t = &tests[j]; + uint32_t x2apic_id; + int rc = x86_topo_from_parts(&policy, t->threads_per_core, + t->cores_per_pkg); + + if ( rc ) { + fail("FAIL[%d] - 'x86_topo_from_parts() failed", rc); + continue; + } + + x2apic_id = x86_x2apic_id_from_vcpu_id(&policy, t->vcpu_id); + if ( x2apic_id != t->x2apic_id ) + fail("FAIL - '%s cpu%u %u t/c %u c/p'. bad x2apic_id: expected=%u actual=%u\n", + x86_cpuid_vendor_to_str(policy.x86_vendor), + t->vcpu_id, t->threads_per_core, t->cores_per_pkg, + t->x2apic_id, x2apic_id); + } + } +} + int main(int argc, char **argv) { printf("CPU Policy unit tests\n"); @@ -799,6 +866,7 @@ int main(int argc, char **argv) test_is_compatible_failure(); test_topo_from_parts(); + test_x2apic_id_from_vcpu_id_success(); if ( nr_failures ) printf("Done: %u failures\n", nr_failures); diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h index 79fdf9045a1b..d545d4727711 100644 --- a/xen/include/xen/lib/x86/cpu-policy.h +++ b/xen/include/xen/lib/x86/cpu-policy.h @@ -542,6 +542,17 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host, const struct cpu_policy *guest, struct cpu_policy_errors *err); +/** + * Calculates the x2APIC ID of a vCPU given a CPU policy + * + * If the policy lacks leaf 0xb falls back to legacy mapping of apic_id=cpu*2 + * + * @param p CPU policy of the domain. + * @param id vCPU ID of the vCPU. + * @returns x2APIC ID of the vCPU. 
+ */ +uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id); + /** * Synthesise topology information in `p` given high-level constraints * diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c index 72b67b44a893..c52b7192559a 100644 --- a/xen/lib/x86/policy.c +++ b/xen/lib/x86/policy.c @@ -2,6 +2,82 @@ #include +static uint32_t parts_per_higher_scoped_level(const struct cpu_policy *p, + size_t lvl) +{ + /* + * `nr_logical` reported by Intel is the number of THREADS contained in + * the next topological scope. For example, assuming a system with 2 + * threads/core and 3 cores/module in a fully symmetric topology, + * `nr_logical` at the core level will report 6. Because it's reporting + * the number of threads in a module. + * + * On AMD/Hygon, nr_logical is already normalized by the higher scoped + * level (cores/complex, etc) so we can return it as-is. + */ + if ( p->x86_vendor != X86_VENDOR_INTEL || !lvl ) + return p->topo.subleaf[lvl].nr_logical; + + return p->topo.subleaf[lvl].nr_logical / + p->topo.subleaf[lvl - 1].nr_logical; +} + +uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id) +{ + uint32_t shift = 0, x2apic_id = 0; + + /* In the absence of topology leaves, fallback to traditional mapping */ + if ( !p->topo.subleaf[0].type ) + return id * 2; + + /* + * `id` means different things at different points of the algo + * + * At lvl=0: global thread_id (same as vcpu_id) + * At lvl=1: global core_id + * At lvl=2: global socket_id (actually complex_id in AMD, module_id + * in Intel, but the name is inconsequential) + * + * +--+ + * ____ |#0| ______ <= 1 socket + * / +--+ \+--+ + * __#0__ __|#1|__ <= 2 cores/socket + * / | \ +--+/ +-|+ \ + * #0 #1 #2 |#3| #4 #5 <= 3 threads/core + * +--+ + * + * ... and so on. Global in this context means that it's a unique + * identifier for the whole topology, and not relative to the level + * it's in. For example, in the diagram shown above, we're looking at + * thread #3 in the global sense, though it's #0 within its core. + * + * Note that dividing a global thread_id by the number of threads per + * core returns the global core id that contains it. e.g: 0, 1 or 2 + * divided by 3 returns core_id=0. 3, 4 or 5 divided by 3 returns core + * 1, and so on. An analogous argument holds for higher levels. This is + * the property we exploit to derive x2apic_id from vcpu_id. + * + * NOTE: `topo` is currently derived from leaf 0xb, which is bound to two + * levels, but once we track leaves 0x1f (or extended 0x26) there will be a + * few more. The algorithm is written to cope with that case. 
+ */ + for ( uint32_t i = 0; i < ARRAY_SIZE(p->topo.raw); i++ ) + { + uint32_t nr_parts; + + if ( !p->topo.subleaf[i].type ) + /* sentinel subleaf */ + break; + + nr_parts = parts_per_higher_scoped_level(p, i); + x2apic_id |= (id % nr_parts) << shift; + id /= nr_parts; + shift = p->topo.subleaf[i].id_shift; + } + + return (id << shift) | x2apic_id; +} + static unsigned int order(unsigned int n) { ASSERT(n); /* clz(0) is UB */ From patchwork Wed Jun 26 16:28:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13713171 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3CDB3C3064D for ; Wed, 26 Jun 2024 16:29:00 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.749277.1157382 (Exim 4.92) (envelope-from ) id 1sMVVs-00088n-Md; Wed, 26 Jun 2024 16:28:52 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 749277.1157382; Wed, 26 Jun 2024 16:28:52 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVs-00087q-II; Wed, 26 Jun 2024 16:28:52 +0000 Received: by outflank-mailman (input) for mailman id 749277; Wed, 26 Jun 2024 16:28:50 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVq-0005pK-Iz for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:50 +0000 Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com [2a00:1450:4864:20::631]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 2ab9b7b4-33d9-11ef-b4bb-af5377834399; Wed, 26 Jun 2024 18:28:49 +0200 (CEST) Received: by mail-ej1-x631.google.com with SMTP id a640c23a62f3a-a727d9dd367so242573966b.3 for ; Wed, 26 Jun 2024 09:28:49 -0700 (PDT) Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 26 Jun 2024 09:28:47 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 2ab9b7b4-33d9-11ef-b4bb-af5377834399 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1719419328; x=1720024128; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=8mOpjR7SSQuHGedpTmNTfhfxLjqgXd8qu6qzKi4f14A=; b=HdvfMN91FrBV6e5vYd4uDdyJztAfUvRkjt+yORg27afdWOwe1iNHeVYO11LFylImEv jV3xDTuf/ULppt2Xn96ROy8825ifEfMb2whcARFRc5JcEybk7eBGxIAUVWBoH7rFHW3o LJ83Ayb3gfsvjfkqdomYtKzlPDKSw7b2NETt0= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1719419328; x=1720024128; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=8mOpjR7SSQuHGedpTmNTfhfxLjqgXd8qu6qzKi4f14A=; b=s2n7bSOmRpfgiHYF30zqHl1+Dxb2A8KKyGiTQ3rk4PvET8hu3lmR4Qsr9VgP23te4/ BrcQSk8E7zGMsf2exyHQmt1gzG2al+5MafkQ1x+ITjip7ZgWkyQiAyPQpDUvj3rpB+lv +1ArcXlCIOh2n/c1mcG+DqgZOwxRVmQJM0KfQT9qEx2NlJSaT/vnnx/AaI5qCrWsMzhc q6CQkAAU9s2oKVK/ES2B09BBBZy/wDqvX6Jwh7oNseyfv5dRXf3TbkeJEMOu5NKSYFtv jUVanrB/obYxj/ywijDOQyhMWftJjaOPQL7cmFULozyEzbXnTL2/J0s4cHGfylGm7Pg9 h/dw== X-Gm-Message-State: AOJu0YzbHqP1G7osr1mgH0B6KeU1+Lweq22tqivQOoIFN62SJyBiRG/h KqfvRT388FeOGp7SMIfSeC4Vggd66A5IZGO08+mPkmt2Q1iZcDp6wnkUWQq4mDOo4Xg1ZmsjVY5 JvQ8= X-Google-Smtp-Source: AGHT+IH94sPg3I7PP/MnzrejaBHaa0UZyvLZuA9p7IfIniwFctzAD8RaBLO7yuYZqAkfLY2FSC4lLA== X-Received: by 2002:a17:907:a649:b0:a72:8135:2d4f with SMTP id a640c23a62f3a-a7281352e3cmr347847466b.48.1719419328384; Wed, 26 Jun 2024 09:28:48 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Anthony PERARD , Juergen Gross , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= Subject: [PATCH v4 09/10] xen/x86: Synthesise domain topologies Date: Wed, 26 Jun 2024 17:28:36 +0100 Message-Id: X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Expose sensible topologies in leaf 0xb. At the moment it synthesises non-HT systems, in line with the previous code intent. Leaf 0xb in the host policy is no longer zapped and the guest {max,def} policies have their topology leaves zapped instead. The intent is for toolstack to populate them. There's no current use for the topology information in the host policy, but it makes no harm. Signed-off-by: Alejandro Vallejo --- This patch MUST NOT go in without the following intimately related one "Set topologically correct x2APIC IDs for each vCPU" Otherwise we expose one topology and then create APIC IDs that don't reflect it v2->v4 (v3 was not reviewed): * Adjustments to the commit message * Various newline/linewrap fixes * Also print error code in new ERROR() message * Preserve old logic to recreate old CPUID policy to enable migrations from versions of Xen without policy information in the migration stream. --- tools/libs/guest/xg_cpuid_x86.c | 24 +++++++++++++++++++++++- xen/arch/x86/cpu-policy.c | 9 ++++++--- 2 files changed, 29 insertions(+), 4 deletions(-) diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c index 4453178100ad..6062dcab01ce 100644 --- a/tools/libs/guest/xg_cpuid_x86.c +++ b/tools/libs/guest/xg_cpuid_x86.c @@ -725,8 +725,16 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore, p->policy.basic.htt = test_bit(X86_FEATURE_HTT, host_featureset); p->policy.extd.cmp_legacy = test_bit(X86_FEATURE_CMP_LEGACY, host_featureset); } - else + else if ( restore ) { + /* + * Reconstruct the topology exposed on Xen <= 4.13. It makes very little + * sense, but it's what those guests saw so it's set in stone now. + * + * Guests from Xen 4.14 onwards carry their own CPUID leaves in the + * migration stream so they don't need special treatment. + */ + /* * Topology for HVM guests is entirely controlled by Xen. For now, we * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT. 
@@ -782,6 +790,20 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore, break; } } + else + { + /* TODO: Expose the ability to choose a custom topology for HVM/PVH */ + unsigned int threads_per_core = 1; + unsigned int cores_per_pkg = di.max_vcpu_id + 1; + + rc = x86_topo_from_parts(&p->policy, threads_per_core, cores_per_pkg); + if ( rc ) + { + ERROR("Failed to generate topology: rc=%d t/c=%u c/p=%u", + rc, threads_per_core, cores_per_pkg); + goto out; + } + } nr_leaves = ARRAY_SIZE(p->leaves); rc = x86_cpuid_copy_to_buffer(&p->policy, p->leaves, &nr_leaves); diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c index 304dc20cfab8..55a95f6e164c 100644 --- a/xen/arch/x86/cpu-policy.c +++ b/xen/arch/x86/cpu-policy.c @@ -263,9 +263,6 @@ static void recalculate_misc(struct cpu_policy *p) p->basic.raw[0x8] = EMPTY_LEAF; - /* TODO: Rework topology logic. */ - memset(p->topo.raw, 0, sizeof(p->topo.raw)); - p->basic.raw[0xc] = EMPTY_LEAF; p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES; @@ -613,6 +610,9 @@ static void __init calculate_pv_max_policy(void) recalculate_xstate(p); p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */ + + /* Wipe host topology. Populated by toolstack */ + memset(p->topo.raw, 0, sizeof(p->topo.raw)); } static void __init calculate_pv_def_policy(void) @@ -776,6 +776,9 @@ static void __init calculate_hvm_max_policy(void) /* It's always possible to emulate CPUID faulting for HVM guests */ p->platform_info.cpuid_faulting = true; + + /* Wipe host topology. Populated by toolstack */ + memset(p->topo.raw, 0, sizeof(p->topo.raw)); } static void __init calculate_hvm_def_policy(void) From patchwork Wed Jun 26 16:28:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13713176 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id C7B78C3065F for ; Wed, 26 Jun 2024 16:29:01 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.749278.1157392 (Exim 4.92) (envelope-from ) id 1sMVVt-0008Gq-H7; Wed, 26 Jun 2024 16:28:53 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 749278.1157392; Wed, 26 Jun 2024 16:28:53 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVt-0008Eb-2y; Wed, 26 Jun 2024 16:28:53 +0000 Received: by outflank-mailman (input) for mailman id 749278; Wed, 26 Jun 2024 16:28:51 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1sMVVr-0005pK-Dp for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:51 +0000 Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com [2a00:1450:4864:20::62d]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 2b24fcd9-33d9-11ef-b4bb-af5377834399; Wed, 26 Jun 2024 18:28:49 +0200 (CEST) Received: by mail-ej1-x62d.google.com with SMTP id a640c23a62f3a-a7252bfe773so452804766b.1 for ; Wed, 26 Jun 2024 09:28:49 -0700 (PDT) Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1]) by 
smtp.gmail.com with ESMTPSA id a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 26 Jun 2024 09:28:48 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 2b24fcd9-33d9-11ef-b4bb-af5377834399 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1719419329; x=1720024129; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=FPZPi/1GxgTiDwYl688KIib+0eFv4vArXlISETLlZHk=; b=gSl/22rQ+1SlO8vAi0W5OW3RMsrUXEfN6Hn9ZDF3oy/4SdFlOoUHE0RMijQf0HLwLp Rukh+DtBtFmrVcxTsqIr+/66h4Ued+4j8m2ZNDBhZd0ClegQT2Rn3yv51cjAbhFSFIin j48IyrdMJpXFyEypNQgcjX4NMyUuc2NmWFk18= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1719419329; x=1720024129; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=FPZPi/1GxgTiDwYl688KIib+0eFv4vArXlISETLlZHk=; b=pS5/CqoJCUre+VX3hx+/naMNJIJkWJdT2h83hHTkgNDPyY54NNFy2pBiadqGEFBdPz potJdQOyqwcbYjC5AuXU+NW11+miljfJ1AoRMwMcXP+CjbtmCr0P2elxCESjiKIE1jRK qOJBjyN5nS12oXZgXIh0QaxUhkEKQV5omnRm2X/bL58QEF93mjbnmBPpWMzPZm80SwP0 NBgLTFwY/K29j5E3fzzsCGc4Uja4CuY35xMRoZUw6ZGsN11i8AdW2fwhuHYFInBn0tM5 M/WM1B7fPXLJQ31qXppQdKve+SGcMdxoacWyajiIl+v9gWoEfQFtgBvQWABgPVoU0NHi UMIA== X-Gm-Message-State: AOJu0Yxd+bIAylzpubVBunoqziwOc+GV3MH/MNHZejE2govC9eEJRgrD AEIKhM3KhwX1A6BMsm0QhfN3+oBwL1H0vg1wnal7U5uR+OYbEw/mV89kNvtXPSnCH8nKjIdKnAh qavc= X-Google-Smtp-Source: AGHT+IGZnws2mdi+XVnvyuewQ3id60JRRLgyqmqgHjjPNv44XKzzcauz1g5P06nGpalKkaBxSPUShQ== X-Received: by 2002:a17:907:8e93:b0:a6e:f62d:bd02 with SMTP id a640c23a62f3a-a7245c84f2emr837266166b.7.1719419329107; Wed, 26 Jun 2024 09:28:49 -0700 (PDT) From: Alejandro Vallejo To: Xen-devel Cc: Alejandro Vallejo , Anthony PERARD , Juergen Gross Subject: [PATCH v4 10/10] tools/libguest: Set topologically correct x2APIC IDs for each vCPU Date: Wed, 26 Jun 2024 17:28:37 +0100 Message-Id: <94a6d0ff6ce8d0e5be9546efba7aa50d2a21a2b8.1719416329.git.alejandro.vallejo@cloud.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: References: MIME-Version: 1.0 Have toolstack populate the new x2APIC ID in the LAPIC save record with the proper IDs intended for each vCPU. Signed-off-by: Alejandro Vallejo --- v4: * New patch. Replaced v3's method of letting Xen find out via the same algorithm toolstack uses. 
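[Editor's illustration, not part of the patch: a minimal sketch of the net effect, using only helpers introduced earlier in this series and assuming the same includes as test-cpu-policy.c. With the default topology synthesised in the previous patch (1 thread/core, max_vcpus cores/package), the helper yields an x2APIC ID equal to the vCPU ID, i.e. contiguous IDs instead of the old vcpu_id * 2 mapping. The 4-vCPU count is an arbitrary example.]

    struct cpu_policy policy = { .x86_vendor = X86_VENDOR_INTEL };

    if ( !x86_topo_from_parts(&policy, 1 /* threads/core */, 4 /* cores/pkg */) )
        for ( uint32_t i = 0; i < 4; i++ )
            /* Prints 0, 1, 2, 3. */
            printf("vcpu%u -> x2apic_id %u\n",
                   i, x86_x2apic_id_from_vcpu_id(&policy, i));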
--- tools/libs/guest/xg_dom_x86.c | 37 ++++++++++++++++++++++++++++++++++- 1 file changed, 36 insertions(+), 1 deletion(-) diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c index 82ea3e2aab0b..2ae3a779b016 100644 --- a/tools/libs/guest/xg_dom_x86.c +++ b/tools/libs/guest/xg_dom_x86.c @@ -1004,19 +1004,40 @@ static int vcpu_hvm(struct xc_dom_image *dom) HVM_SAVE_TYPE(HEADER) header; struct hvm_save_descriptor mtrr_d; HVM_SAVE_TYPE(MTRR) mtrr; + struct hvm_save_descriptor lapic_d; + HVM_SAVE_TYPE(LAPIC) lapic; struct hvm_save_descriptor end_d; HVM_SAVE_TYPE(END) end; } vcpu_ctx; - /* Context from full_ctx */ + /* Contexts from full_ctx */ const HVM_SAVE_TYPE(MTRR) *mtrr_record; + const HVM_SAVE_TYPE(LAPIC) *lapic_record; /* Raw context as taken from Xen */ uint8_t *full_ctx = NULL; + xc_cpu_policy_t *policy = xc_cpu_policy_init(); int rc; DOMPRINTF_CALLED(dom->xch); assert(dom->max_vcpus); + /* + * Fetch the CPU policy of this domain. We need it to determine the APIC IDs + * each of vCPU in a manner consistent with the exported topology. + * + * TODO: It's silly to query a policy we have ourselves created. It should + * instead be part of xc_dom_image + */ + + rc = xc_cpu_policy_get_domain(dom->xch, dom->guest_domid, policy); + if ( rc != 0 ) + { + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, + "%s: unable to fetch cpu policy for dom%u (rc=%d)", + __func__, dom->guest_domid, rc); + goto out; + } + /* * Get the full HVM context in order to have the header, it is not * possible to get the header with getcontext_partial, and crafting one @@ -1111,6 +1132,8 @@ static int vcpu_hvm(struct xc_dom_image *dom) vcpu_ctx.mtrr_d.typecode = HVM_SAVE_CODE(MTRR); vcpu_ctx.mtrr_d.length = HVM_SAVE_LENGTH(MTRR); vcpu_ctx.mtrr = *mtrr_record; + vcpu_ctx.lapic_d.typecode = HVM_SAVE_CODE(LAPIC); + vcpu_ctx.lapic_d.length = HVM_SAVE_LENGTH(LAPIC); vcpu_ctx.end_d = bsp_ctx.end_d; vcpu_ctx.end = bsp_ctx.end; @@ -1124,6 +1147,17 @@ static int vcpu_hvm(struct xc_dom_image *dom) { vcpu_ctx.mtrr_d.instance = i; + lapic_record = hvm_get_save_record(full_ctx, HVM_SAVE_CODE(LAPIC), i); + if ( !lapic_record ) + { + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, + "%s: unable to get LAPIC[%d] save record", __func__, i); + goto out; + } + vcpu_ctx.lapic = *lapic_record; + vcpu_ctx.lapic.x2apic_id = x86_x2apic_id_from_vcpu_id(&policy->policy, i); + vcpu_ctx.lapic_d.instance = i; + rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid, (uint8_t *)&vcpu_ctx, sizeof(vcpu_ctx)); if ( rc != 0 ) @@ -1146,6 +1180,7 @@ static int vcpu_hvm(struct xc_dom_image *dom) out: free(full_ctx); + xc_cpu_policy_destroy(policy); return rc; }
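[Editor's illustration, not part of the series: a short sketch of what x86_topo_from_parts() (patch 07) produces for the "7 threads/core, 5 cores/package" case exercised by its unit test, highlighting the vendor difference at the core level. It assumes the includes used by test-cpu-policy.c plus <assert.h>; the assertions restate expectations already encoded in test_topo_from_parts().]

    struct cpu_policy intel = { .x86_vendor = X86_VENDOR_INTEL };
    struct cpu_policy amd   = { .x86_vendor = X86_VENDOR_AMD };

    x86_topo_from_parts(&intel, 7, 5);
    x86_topo_from_parts(&amd,   7, 5);

    /* order(7 - 1) == 3 bits for the thread level, plus order(5 - 1) == 3
     * for the core level, so the id_shifts are 3 and 6 on both vendors. */
    assert(intel.topo.subleaf[0].id_shift == 3);
    assert(intel.topo.subleaf[1].id_shift == 6);

    /* Intel's leaf 0xb reports threads per package at the core level (35),
     * while AMD/Hygon report cores per package (5). */
    assert(intel.topo.subleaf[1].nr_logical == 7 * 5);
    assert(amd.topo.subleaf[1].nr_logical   == 5);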