From patchwork Mon Oct 21 15:45:51 2024
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13844336
From: Alejandro Vallejo
To: xen-devel@lists.xenproject.org
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v7 01/10] lib/x86: Bump max basic leaf in {pv,hvm}_max_policy
Date: Mon, 21 Oct 2024 16:45:51 +0100
Message-ID: <20241021154600.11745-2-alejandro.vallejo@cloud.com>
In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com>
References: <20241021154600.11745-1-alejandro.vallejo@cloud.com>

Bump it to ARRAY_SIZE() so the toolstack is able to extend a policy past
host limits (i.e. to emulate a feature not present in the host).

Signed-off-by: Alejandro Vallejo
---
v7:
* Replaces v6/patch1("Relax checks about policy compatibility")
* Bumps basic.max_leaf to ARRAY_SIZE(basic.raw) to pass the compatibility
  checks rather than tweaking the checker.
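For illustration only, here is a deliberately simplified, standalone model of
why the bump matters; the struct and field names are toy stand-ins, not the
real lib/x86 types. A guest policy is only accepted if its max_leaf fits under
the max policy's, so leaving max_leaf at the host value would reject a guest
leaf the host doesn't have, while bumping it to the array bound does not:

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    /* Toy stand-in for a CPUID policy: N basic leaves plus the advertised max. */
    struct toy_policy {
        unsigned int max_leaf;
        unsigned int raw[8];
    };

    /* Simplified version of the "guest fits inside max" compatibility check. */
    static int compatible(const struct toy_policy *max,
                          const struct toy_policy *guest)
    {
        return guest->max_leaf <= max->max_leaf;
    }

    int main(void)
    {
        struct toy_policy max = { .max_leaf = 3 };   /* host only has leaves 0-3 */
        struct toy_policy guest = { .max_leaf = 5 }; /* toolstack wants leaf 5 emulated */

        printf("before bump: %s\n", compatible(&max, &guest) ? "ok" : "rejected");

        /* Analogue of p->basic.max_leaf = ARRAY_SIZE(p->basic.raw) - 1; */
        max.max_leaf = ARRAY_SIZE(max.raw) - 1;

        printf("after bump:  %s\n", compatible(&max, &guest) ? "ok" : "rejected");
        return 0;
    }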
---
 xen/arch/x86/cpu-policy.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index b6d9fad56773..715a66d2a978 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -585,6 +585,9 @@ static void __init calculate_pv_max_policy(void)
      */
     p->feat.max_subleaf = ARRAY_SIZE(p->feat.raw) - 1;

+    /* Toolstack may populate leaves not present in the basic host leaves */
+    p->basic.max_leaf = ARRAY_SIZE(p->basic.raw) - 1;
+
     x86_cpu_policy_to_featureset(p, fs);

     for ( i = 0; i < ARRAY_SIZE(fs); ++i )
@@ -672,6 +675,9 @@ static void __init calculate_hvm_max_policy(void)
      */
     p->feat.max_subleaf = ARRAY_SIZE(p->feat.raw) - 1;

+    /* Toolstack may populate leaves not present in the basic host leaves */
+    p->basic.max_leaf = ARRAY_SIZE(p->basic.raw) - 1;
+
     x86_cpu_policy_to_featureset(p, fs);

     mask = hvm_hap_supported() ?

From patchwork Mon Oct 21 15:45:52 2024
X-Patchwork-Submitter: Alejandro Vallejo
X-Patchwork-Id: 13844337
From: Alejandro Vallejo
To: xen-devel@lists.xenproject.org
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v7 02/10] xen/x86: Add initial x2APIC ID to the per-vLAPIC save area
Date: Mon, 21 Oct 2024 16:45:52 +0100
Message-ID: <20241021154600.11745-3-alejandro.vallejo@cloud.com>
In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com>
References: <20241021154600.11745-1-alejandro.vallejo@cloud.com>

This allows the initial x2APIC ID to be sent on the migration stream, which
in turn allows further changes to topology and APIC ID assignment without
breaking existing hosts. Given the vlapic data is zero-extended on restore,
fix up migrations from hosts without the field by setting it to the old
convention if zero.

The hardcoded mapping x2apic_id=2*vcpu_id is kept for the time being, but
it's meant to be overridden by the toolstack in a later patch with
appropriate values.

Signed-off-by: Alejandro Vallejo
---
v7:
* Preserve output for CPUID[0xb].edx on PV rather than nullify it.
* s/vlapic->hw.x2apic_id/vlapic_x2apic_id(vlapic)/ in vlapic.c --- xen/arch/x86/cpuid.c | 18 +++++++----------- xen/arch/x86/hvm/vlapic.c | 22 ++++++++++++++++++++-- xen/arch/x86/include/asm/hvm/vlapic.h | 1 + xen/include/public/arch-x86/hvm/save.h | 2 ++ 4 files changed, 30 insertions(+), 13 deletions(-) diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c index 2a777436ee27..e2489ff8e346 100644 --- a/xen/arch/x86/cpuid.c +++ b/xen/arch/x86/cpuid.c @@ -138,10 +138,9 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf, const struct cpu_user_regs *regs; case 0x1: - /* TODO: Rework topology logic. */ res->b &= 0x00ffffffu; if ( is_hvm_domain(d) ) - res->b |= (v->vcpu_id * 2) << 24; + res->b |= vlapic_x2apic_id(vcpu_vlapic(v)) << 24; /* TODO: Rework vPMU control in terms of toolstack choices. */ if ( vpmu_available(v) && @@ -310,19 +309,16 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf, break; case 0xb: - /* - * In principle, this leaf is Intel-only. In practice, it is tightly - * coupled with x2apic, and we offer an x2apic-capable APIC emulation - * to guests on AMD hardware as well. - * - * TODO: Rework topology logic. - */ if ( p->basic.x2apic ) { *(uint8_t *)&res->c = subleaf; - /* Fix the x2APIC identifier. */ - res->d = v->vcpu_id * 2; + /* + * Fix the x2APIC identifier. The PV side is nonsensical, but + * we've always shown it like this so it's kept for compat. + */ + res->d = is_hvm_domain(d) ? vlapic_x2apic_id(vcpu_vlapic(v)) + : 2 * v->vcpu_id; } break; diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c index 3363926b487b..33b463925f4e 100644 --- a/xen/arch/x86/hvm/vlapic.c +++ b/xen/arch/x86/hvm/vlapic.c @@ -1090,7 +1090,7 @@ static uint32_t x2apic_ldr_from_id(uint32_t id) static void set_x2apic_id(struct vlapic *vlapic) { const struct vcpu *v = vlapic_vcpu(vlapic); - uint32_t apic_id = v->vcpu_id * 2; + uint32_t apic_id = vlapic_x2apic_id(vlapic); uint32_t apic_ldr = x2apic_ldr_from_id(apic_id); /* @@ -1470,7 +1470,7 @@ void vlapic_reset(struct vlapic *vlapic) if ( v->vcpu_id == 0 ) vlapic->hw.apic_base_msr |= APIC_BASE_BSP; - vlapic_set_reg(vlapic, APIC_ID, (v->vcpu_id * 2) << 24); + vlapic_set_reg(vlapic, APIC_ID, SET_xAPIC_ID(vlapic_x2apic_id(vlapic))); vlapic_do_init(vlapic); } @@ -1538,6 +1538,16 @@ static void lapic_load_fixup(struct vlapic *vlapic) const struct vcpu *v = vlapic_vcpu(vlapic); uint32_t good_ldr = x2apic_ldr_from_id(vlapic->loaded.id); + /* + * Loading record without hw.x2apic_id in the save stream, calculate using + * the traditional "vcpu_id * 2" relation. There's an implicit assumption + * that vCPU0 always has x2APIC0, which is true for the old relation, and + * still holds under the new x2APIC generation algorithm. While that case + * goes through the conditional it's benign because it still maps to zero. + */ + if ( !vlapic->hw.x2apic_id ) + vlapic->hw.x2apic_id = v->vcpu_id * 2; + /* Skip fixups on xAPIC mode, or if the x2APIC LDR is already correct */ if ( !vlapic_x2apic_mode(vlapic) || (vlapic->loaded.ldr == good_ldr) ) @@ -1606,6 +1616,13 @@ static int cf_check lapic_check_hidden(const struct domain *d, APIC_BASE_EXTD ) return -EINVAL; + /* + * Fail migrations from newer versions of Xen where + * rsvd_zero is interpreted as something else. 
+ */ + if ( s.rsvd_zero ) + return -EINVAL; + return 0; } @@ -1687,6 +1704,7 @@ int vlapic_init(struct vcpu *v) } vlapic->pt.source = PTSRC_lapic; + vlapic->hw.x2apic_id = 2 * v->vcpu_id; vlapic->regs_page = alloc_domheap_page(v->domain, MEMF_no_owner); if ( !vlapic->regs_page ) diff --git a/xen/arch/x86/include/asm/hvm/vlapic.h b/xen/arch/x86/include/asm/hvm/vlapic.h index 2c4ff94ae7a8..85c4a236b9f6 100644 --- a/xen/arch/x86/include/asm/hvm/vlapic.h +++ b/xen/arch/x86/include/asm/hvm/vlapic.h @@ -44,6 +44,7 @@ #define vlapic_xapic_mode(vlapic) \ (!vlapic_hw_disabled(vlapic) && \ !((vlapic)->hw.apic_base_msr & APIC_BASE_EXTD)) +#define vlapic_x2apic_id(vlapic) ((vlapic)->hw.x2apic_id) /* * Generic APIC bitmap vector update & search routines. diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h index 7ecacadde165..1c2ec669ffc9 100644 --- a/xen/include/public/arch-x86/hvm/save.h +++ b/xen/include/public/arch-x86/hvm/save.h @@ -394,6 +394,8 @@ struct hvm_hw_lapic { uint32_t disabled; /* VLAPIC_xx_DISABLED */ uint32_t timer_divisor; uint64_t tdt_msr; + uint32_t x2apic_id; + uint32_t rsvd_zero; }; DECLARE_HVM_SAVE_TYPE(LAPIC, 5, struct hvm_hw_lapic); From patchwork Mon Oct 21 15:45:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13844338 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2921CD15DB6 for ; Mon, 21 Oct 2024 15:46:35 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.823735.1237807 (Exim 4.92) (envelope-from ) id 1t2uc2-0007pj-DF; Mon, 21 Oct 2024 15:46:30 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 823735.1237807; Mon, 21 Oct 2024 15:46:30 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc2-0007pI-8z; Mon, 21 Oct 2024 15:46:30 +0000 Received: by outflank-mailman (input) for mailman id 823735; Mon, 21 Oct 2024 15:46:28 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc0-00072f-KX for xen-devel@lists.xenproject.org; Mon, 21 Oct 2024 15:46:28 +0000 Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com [2a00:1450:4864:20::633]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id a206f3b7-8fc3-11ef-a0be-8be0dac302b0; Mon, 21 Oct 2024 17:46:27 +0200 (CEST) Received: by mail-ej1-x633.google.com with SMTP id a640c23a62f3a-a9a3dc089d8so644860566b.3 for ; Mon, 21 Oct 2024 08:46:27 -0700 (PDT) Received: from localhost.localdomain (0545937c.skybroadband.com. 
From: Alejandro Vallejo
To: xen-devel@lists.xenproject.org
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v7 03/10] xen/x86: Add supporting code for uploading LAPIC contexts during domain create
Date: Mon, 21 Oct 2024 16:45:53 +0100
Message-ID: <20241021154600.11745-4-alejandro.vallejo@cloud.com>
In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com>
References: <20241021154600.11745-1-alejandro.vallejo@cloud.com>

A later patch will upload LAPIC contexts as part of domain creation. In
order for it not to encounter a problem where the architectural state does
not reflect the APIC ID in the hidden state, this patch ensures updates to
the hidden state trigger an update in the architectural registers so the
APIC ID in both is consistent.

Signed-off-by: Alejandro Vallejo
---
v7:
* Rework the commit message so it explains a follow-up patch rather than
  hypothetical behaviour.
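As an aside on the mechanism: when only the hidden LAPIC record has been
loaded, the architectural identifier has to be rederived from the saved
x2APIC ID. Below is a standalone sketch of that derivation. The helper names
are hypothetical (deliberately not the hypervisor's) and the encodings are
the architectural ones: the x2APIC LDR carries the cluster in bits 31:16 and
a member bit in 15:0, while the xAPIC ID register keeps the 8-bit ID in bits
31:24.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helpers mirroring the architectural encodings. */
    static uint32_t ldr_from_x2apic_id(uint32_t id)
    {
        /* Cluster (id / 16) in bits 31:16, member bit (id % 16) in bits 15:0. */
        return ((id & ~0xfu) << 12) | (1u << (id & 0xf));
    }

    static uint32_t xapic_id_reg(uint32_t id)
    {
        /* The xAPIC ID register holds the 8-bit APIC ID in bits 31:24. */
        return (id & 0xffu) << 24;
    }

    int main(void)
    {
        uint32_t x2apic_id = 6; /* example value, e.g. uploaded by the toolstack */
        int x2apic_mode = 1;    /* whether APIC_BASE.EXTD is set in the hidden state */

        if ( x2apic_mode )
            printf("derived LDR         = %#x\n", ldr_from_x2apic_id(x2apic_id));
        else
            printf("derived APIC_ID reg = %#x\n", xapic_id_reg(x2apic_id));

        return 0;
    }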
--- xen/arch/x86/hvm/vlapic.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c index 33b463925f4e..03581eb33812 100644 --- a/xen/arch/x86/hvm/vlapic.c +++ b/xen/arch/x86/hvm/vlapic.c @@ -1640,7 +1640,27 @@ static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h) s->loaded.hw = 1; if ( s->loaded.regs ) + { + /* + * We already processed architectural regs in lapic_load_regs(), so + * this must be a migration. Fix up inconsistencies from any older Xen. + */ lapic_load_fixup(s); + } + else + { + /* + * We haven't seen architectural regs so this could be a migration or a + * plain domain create. In the domain create case it's fine to modify + * the architectural state to align it to the APIC ID that was just + * uploaded and in the migrate case it doesn't matter because the + * architectural state will be replaced by the LAPIC_REGS ctx later on. + */ + if ( vlapic_x2apic_mode(s) ) + set_x2apic_id(s); + else + vlapic_set_reg(s, APIC_ID, SET_xAPIC_ID(s->hw.x2apic_id)); + } hvm_update_vlapic_mode(v); From patchwork Mon Oct 21 15:45:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13844339 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B4D24D15D99 for ; Mon, 21 Oct 2024 15:46:35 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.823736.1237818 (Exim 4.92) (envelope-from ) id 1t2uc3-00088h-LL; Mon, 21 Oct 2024 15:46:31 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 823736.1237818; Mon, 21 Oct 2024 15:46:31 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc3-00087j-HV; Mon, 21 Oct 2024 15:46:31 +0000 Received: by outflank-mailman (input) for mailman id 823736; Mon, 21 Oct 2024 15:46:29 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc1-00072f-Kw for xen-devel@lists.xenproject.org; Mon, 21 Oct 2024 15:46:29 +0000 Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com [2a00:1450:4864:20::12a]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id a2cb0625-8fc3-11ef-a0be-8be0dac302b0; Mon, 21 Oct 2024 17:46:28 +0200 (CEST) Received: by mail-lf1-x12a.google.com with SMTP id 2adb3069b0e04-539ee1acb86so2582166e87.0 for ; Mon, 21 Oct 2024 08:46:28 -0700 (PDT) Received: from localhost.localdomain (0545937c.skybroadband.com. 
From: Alejandro Vallejo
To: xen-devel@lists.xenproject.org
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v7 04/10] tools/hvmloader: Retrieve (x2)APIC IDs from the APs themselves
Date: Mon, 21 Oct 2024 16:45:54 +0100
Message-ID: <20241021154600.11745-5-alejandro.vallejo@cloud.com>
In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com>
References: <20241021154600.11745-1-alejandro.vallejo@cloud.com>

Make it so the APs expose their own APIC IDs in a LUT. We can use that LUT
to populate the MADT, decoupling the algorithm that relates CPU IDs and
APIC IDs from hvmloader.

Move smp_initialise() ahead of apic_setup() in order to initialise
cpu_to_x2apicid ASAP and avoid using it uninitialised. Note that bringing
up the APs doesn't need the APIC in hvmloader because it always runs
virtualized and uses the PV interface.

While at it, exploit the assumption that CPU0 always has APICID0 to remove
ap_callin, as writing the APIC ID may serve the same purpose.

Signed-off-by: Alejandro Vallejo
---
v7:
* CPU_TO_X2APICID to lowercase
* Spell out the CPU0<-->APICID0 relationship in the commit message as the
  rationale to remove ap_callin.
* Explain the motion of smp_initialise() ahead of apic_setup() in the
  commit message.
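To illustrate the handshake described above (each AP publishes its own ID
and the BSP spins until the entry turns non-zero, relying on CPU0 being the
only CPU with APIC ID 0), here is a hypothetical user-space model in which
threads stand in for APs. The names and the 2*cpu mapping are placeholders
for illustration, not hvmloader code; compile with -pthread.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NR_CPUS 4

    /* 0 means "this CPU hasn't published its ID yet". */
    static _Atomic uint32_t cpu_to_apicid[NR_CPUS];

    static void *ap_entry(void *arg)
    {
        unsigned int cpu = (unsigned int)(uintptr_t)arg;

        /* Stand-in for the AP reading its own APIC ID via CPUID. */
        atomic_store(&cpu_to_apicid[cpu], 2 * cpu);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NR_CPUS];

        cpu_to_apicid[0] = 0; /* BSP: known to be APIC ID 0, nothing to wait for */

        for ( unsigned int cpu = 1; cpu < NR_CPUS; cpu++ )
        {
            pthread_create(&t[cpu], NULL, ap_entry, (void *)(uintptr_t)cpu);

            /* The "BSP" only proceeds once a non-zero ID has been published. */
            while ( !atomic_load(&cpu_to_apicid[cpu]) )
                ;

            pthread_join(t[cpu], NULL);
            printf("cpu%u -> apicid %u\n", cpu, (unsigned int)cpu_to_apicid[cpu]);
        }

        return 0;
    }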
--- tools/firmware/hvmloader/config.h | 5 ++- tools/firmware/hvmloader/hvmloader.c | 6 +-- tools/firmware/hvmloader/mp_tables.c | 4 +- tools/firmware/hvmloader/smp.c | 57 ++++++++++++++++++++----- tools/firmware/hvmloader/util.c | 2 +- tools/include/xen-tools/common-macros.h | 5 +++ 6 files changed, 63 insertions(+), 16 deletions(-) diff --git a/tools/firmware/hvmloader/config.h b/tools/firmware/hvmloader/config.h index cd716bf39245..04cab1e59f08 100644 --- a/tools/firmware/hvmloader/config.h +++ b/tools/firmware/hvmloader/config.h @@ -4,6 +4,8 @@ #include #include +#include + enum virtual_vga { VGA_none, VGA_std, VGA_cirrus, VGA_pt }; extern enum virtual_vga virtual_vga; @@ -48,8 +50,9 @@ extern uint8_t ioapic_version; #define IOAPIC_ID 0x01 +extern uint32_t cpu_to_x2apicid[HVM_MAX_VCPUS]; + #define LAPIC_BASE_ADDRESS 0xfee00000 -#define LAPIC_ID(vcpu_id) ((vcpu_id) * 2) #define PCI_ISA_DEVFN 0x08 /* dev 1, fn 0 */ #define PCI_ISA_IRQ_MASK 0x0c20U /* ISA IRQs 5,10,11 are PCI connected */ diff --git a/tools/firmware/hvmloader/hvmloader.c b/tools/firmware/hvmloader/hvmloader.c index f8af88fabf24..bebdfa923880 100644 --- a/tools/firmware/hvmloader/hvmloader.c +++ b/tools/firmware/hvmloader/hvmloader.c @@ -224,7 +224,7 @@ static void apic_setup(void) /* 8259A ExtInts are delivered through IOAPIC pin 0 (Virtual Wire Mode). */ ioapic_write(0x10, APIC_DM_EXTINT); - ioapic_write(0x11, SET_APIC_ID(LAPIC_ID(0))); + ioapic_write(0x11, SET_APIC_ID(cpu_to_x2apicid[0])); } struct bios_info { @@ -341,11 +341,11 @@ int main(void) printf("CPU speed is %u MHz\n", get_cpu_mhz()); + smp_initialise(); + apic_setup(); pci_setup(); - smp_initialise(); - perform_tests(); if ( bios->bios_info_setup ) diff --git a/tools/firmware/hvmloader/mp_tables.c b/tools/firmware/hvmloader/mp_tables.c index 77d3010406d0..539260365e1e 100644 --- a/tools/firmware/hvmloader/mp_tables.c +++ b/tools/firmware/hvmloader/mp_tables.c @@ -198,8 +198,10 @@ static void fill_mp_config_table(struct mp_config_table *mpct, int length) /* fills in an MP processor entry for VCPU 'vcpu_id' */ static void fill_mp_proc_entry(struct mp_proc_entry *mppe, int vcpu_id) { + ASSERT(cpu_to_x2apicid[vcpu_id] < 0xFF ); + mppe->type = ENTRY_TYPE_PROCESSOR; - mppe->lapic_id = LAPIC_ID(vcpu_id); + mppe->lapic_id = cpu_to_x2apicid[vcpu_id]; mppe->lapic_version = 0x11; mppe->cpu_flags = CPU_FLAG_ENABLED; if ( vcpu_id == 0 ) diff --git a/tools/firmware/hvmloader/smp.c b/tools/firmware/hvmloader/smp.c index 1b940cefd071..d63536f14f00 100644 --- a/tools/firmware/hvmloader/smp.c +++ b/tools/firmware/hvmloader/smp.c @@ -29,7 +29,37 @@ #include -static int ap_callin; +/** + * Lookup table of (x2)APIC IDs. + * + * Each entry is populated its respective CPU as they come online. This is required + * for generating the MADT with minimal assumptions about ID relationships. + * + * While the name makes "x2" explicit, these may actually be xAPIC IDs if no + * x2APIC is present. "x2" merely highlights that each entry is 32 bits wide. + */ +uint32_t cpu_to_x2apicid[HVM_MAX_VCPUS]; + +/** Tristate about x2apic being supported. 
-1=unknown */ +static int has_x2apic = -1; + +static uint32_t read_apic_id(void) +{ + uint32_t apic_id; + + if ( has_x2apic ) + cpuid(0xb, NULL, NULL, NULL, &apic_id); + else + { + cpuid(1, NULL, &apic_id, NULL, NULL); + apic_id >>= 24; + } + + /* Never called by cpu0, so should never return 0 */ + ASSERT(apic_id); + + return apic_id; +} static void cpu_setup(unsigned int cpu) { @@ -37,13 +67,17 @@ static void cpu_setup(unsigned int cpu) cacheattr_init(); printf("done.\n"); - if ( !cpu ) /* Used on the BSP too */ + /* The BSP exits early because its APIC ID is known to be zero */ + if ( !cpu ) return; wmb(); - ap_callin = 1; + ACCESS_ONCE(cpu_to_x2apicid[cpu]) = read_apic_id(); - /* After this point, the BSP will shut us down. */ + /* + * After this point the BSP will shut us down. A write to + * cpu_to_x2apicid[cpu] signals the BSP to bring down `cpu`. + */ for ( ;; ) asm volatile ( "hlt" ); @@ -54,10 +88,6 @@ static void boot_cpu(unsigned int cpu) static uint8_t ap_stack[PAGE_SIZE] __attribute__ ((aligned (16))); static struct vcpu_hvm_context ap; - /* Initialise shared variables. */ - ap_callin = 0; - wmb(); - /* Wake up the secondary processor */ ap = (struct vcpu_hvm_context) { .mode = VCPU_HVM_MODE_32B, @@ -90,10 +120,11 @@ static void boot_cpu(unsigned int cpu) BUG(); /* - * Wait for the secondary processor to complete initialisation. + * Wait for the secondary processor to complete initialisation, + * which is signaled by its x2APIC ID being written to the LUT. * Do not touch shared resources meanwhile. */ - while ( !ap_callin ) + while ( !ACCESS_ONCE(cpu_to_x2apicid[cpu]) ) cpu_relax(); /* Take the secondary processor offline. */ @@ -104,6 +135,12 @@ static void boot_cpu(unsigned int cpu) void smp_initialise(void) { unsigned int i, nr_cpus = hvm_info->nr_vcpus; + uint32_t ecx; + + cpuid(1, NULL, NULL, &ecx, NULL); + has_x2apic = (ecx >> 21) & 1; + if ( has_x2apic ) + printf("x2APIC supported\n"); printf("Multiprocessor initialisation:\n"); cpu_setup(0); diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c index d3b3f9038e64..821b3086a87d 100644 --- a/tools/firmware/hvmloader/util.c +++ b/tools/firmware/hvmloader/util.c @@ -827,7 +827,7 @@ static void acpi_mem_free(struct acpi_ctxt *ctxt, static uint32_t acpi_lapic_id(unsigned cpu) { - return LAPIC_ID(cpu); + return cpu_to_x2apic_id[cpu]; } void hvmloader_acpi_build_tables(struct acpi_config *config, diff --git a/tools/include/xen-tools/common-macros.h b/tools/include/xen-tools/common-macros.h index 60912225cb7a..336c6309d96e 100644 --- a/tools/include/xen-tools/common-macros.h +++ b/tools/include/xen-tools/common-macros.h @@ -108,4 +108,9 @@ #define get_unaligned(ptr) get_unaligned_t(typeof(*(ptr)), ptr) #define put_unaligned(val, ptr) put_unaligned_t(typeof(*(ptr)), val, ptr) +#define __ACCESS_ONCE(x) ({ \ + (void)(typeof(x))0; /* Scalar typecheck. 
*/ \ + (volatile typeof(x) *)&(x); }) +#define ACCESS_ONCE(x) (*__ACCESS_ONCE(x)) + #endif /* __XEN_TOOLS_COMMON_MACROS__ */ From patchwork Mon Oct 21 15:45:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13844341 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6F231D15DB4 for ; Mon, 21 Oct 2024 15:46:37 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.823739.1237829 (Exim 4.92) (envelope-from ) id 1t2uc4-0008Na-N2; Mon, 21 Oct 2024 15:46:32 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 823739.1237829; Mon, 21 Oct 2024 15:46:32 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc4-0008KR-Ev; Mon, 21 Oct 2024 15:46:32 +0000 Received: by outflank-mailman (input) for mailman id 823739; Mon, 21 Oct 2024 15:46:30 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc2-00072f-L2 for xen-devel@lists.xenproject.org; Mon, 21 Oct 2024 15:46:30 +0000 Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com [2a00:1450:4864:20::129]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id a329ebb5-8fc3-11ef-a0be-8be0dac302b0; Mon, 21 Oct 2024 17:46:29 +0200 (CEST) Received: by mail-lf1-x129.google.com with SMTP id 2adb3069b0e04-539fbbadf83so5935511e87.0 for ; Mon, 21 Oct 2024 08:46:29 -0700 (PDT) Received: from localhost.localdomain (0545937c.skybroadband.com. 
From: Alejandro Vallejo
To: xen-devel@lists.xenproject.org
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD, Juergen Gross
Subject: [PATCH v7 05/10] tools/libacpi: Use LUT of APIC IDs rather than function pointer
Date: Mon, 21 Oct 2024 16:45:55 +0100
Message-ID: <20241021154600.11745-6-alejandro.vallejo@cloud.com>
In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com>
References: <20241021154600.11745-1-alejandro.vallejo@cloud.com>

Refactor libacpi so that a single LUT is the authoritative source of truth
for the CPU to APIC ID mappings. This has a knock-on effect in reducing
complexity on future patches, as the same LUT can be used for configuring
the APICs and configuring the ACPI tables for PVH.

No functional change intended, because the same mappings are preserved.

Signed-off-by: Alejandro Vallejo
---
v7:
* NOTE: didn't add assert to libacpi as initially accepted in order to
  protect libvirt from an assert failure.
* s/uint32_t/unsigned int/ in for loop of libxl.
* turned Xen-style loop in libxl to libxl-style.
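For context, the shape of the change is that ACPI table construction now
indexes a shared array instead of calling back into the embedder. A
compilable, simplified sketch of the consumer side follows; the struct
names and the 2*i mapping are toy stand-ins, not the real libacpi types.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_VCPUS 8

    /* Toy stand-ins for the ACPI build config and a MADT LAPIC entry. */
    struct toy_acpi_config {
        const uint32_t *cpu_to_apicid;   /* LUT replaces a lapic_id(cpu) callback */
        unsigned int nr_vcpus;
    };

    struct toy_madt_lapic {
        uint8_t acpi_processor_id;
        uint8_t apic_id;
    };

    static void build_madt(const struct toy_acpi_config *cfg,
                           struct toy_madt_lapic *out)
    {
        for ( unsigned int i = 0; i < cfg->nr_vcpus; i++ )
        {
            out[i].acpi_processor_id = i;
            out[i].apic_id = cfg->cpu_to_apicid[i]; /* single source of truth */
        }
    }

    int main(void)
    {
        uint32_t lut[MAX_VCPUS];
        struct toy_madt_lapic madt[MAX_VCPUS];
        struct toy_acpi_config cfg = { .cpu_to_apicid = lut, .nr_vcpus = 4 };

        for ( unsigned int i = 0; i < MAX_VCPUS; i++ )
            lut[i] = 2 * i;          /* placeholder mapping, as in the series */

        build_madt(&cfg, madt);

        for ( unsigned int i = 0; i < cfg.nr_vcpus; i++ )
            printf("vcpu%u -> apic_id %u\n", i, madt[i].apic_id);

        return 0;
    }

The same array can then be handed both to the APIC setup path and to the
table builder, which is the de-duplication the commit message describes.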
--- tools/firmware/hvmloader/util.c | 7 +------ tools/include/xenguest.h | 5 +++++ tools/libacpi/build.c | 6 +++--- tools/libacpi/libacpi.h | 2 +- tools/libs/light/libxl_dom.c | 5 +++++ tools/libs/light/libxl_x86_acpi.c | 7 +------ 6 files changed, 16 insertions(+), 16 deletions(-) diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c index 821b3086a87d..afa3eb9d5775 100644 --- a/tools/firmware/hvmloader/util.c +++ b/tools/firmware/hvmloader/util.c @@ -825,11 +825,6 @@ static void acpi_mem_free(struct acpi_ctxt *ctxt, /* ACPI builder currently doesn't free memory so this is just a stub */ } -static uint32_t acpi_lapic_id(unsigned cpu) -{ - return cpu_to_x2apic_id[cpu]; -} - void hvmloader_acpi_build_tables(struct acpi_config *config, unsigned int physical) { @@ -859,7 +854,7 @@ void hvmloader_acpi_build_tables(struct acpi_config *config, } config->lapic_base_address = LAPIC_BASE_ADDRESS; - config->lapic_id = acpi_lapic_id; + config->cpu_to_apicid = cpu_to_x2apicid; config->ioapic_base_address = IOAPIC_BASE_ADDRESS; config->ioapic_id = IOAPIC_ID; config->pci_isa_irq_mask = PCI_ISA_IRQ_MASK; diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h index e01f494b772a..aa50b78dfb89 100644 --- a/tools/include/xenguest.h +++ b/tools/include/xenguest.h @@ -22,6 +22,8 @@ #ifndef XENGUEST_H #define XENGUEST_H +#include "xen/hvm/hvm_info_table.h" + #define XC_NUMA_NO_NODE (~0U) #define XCFLAGS_LIVE (1 << 0) @@ -236,6 +238,9 @@ struct xc_dom_image { #if defined(__i386__) || defined(__x86_64__) struct e820entry *e820; unsigned int e820_entries; + + /* LUT mapping cpu id to (x2)APIC ID */ + uint32_t cpu_to_apicid[HVM_MAX_VCPUS]; #endif xen_pfn_t vuart_gfn; diff --git a/tools/libacpi/build.c b/tools/libacpi/build.c index 2f29863db154..2ad1d461a2ec 100644 --- a/tools/libacpi/build.c +++ b/tools/libacpi/build.c @@ -74,7 +74,7 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt, const struct hvm_info_table *hvminfo = config->hvminfo; int i, sz; - if ( config->lapic_id == NULL ) + if ( config->cpu_to_apicid == NULL ) return NULL; sz = sizeof(struct acpi_20_madt); @@ -148,7 +148,7 @@ static struct acpi_20_madt *construct_madt(struct acpi_ctxt *ctxt, lapic->length = sizeof(*lapic); /* Processor ID must match processor-object IDs in the DSDT. */ lapic->acpi_processor_id = i; - lapic->apic_id = config->lapic_id(i); + lapic->apic_id = config->cpu_to_apicid[i]; lapic->flags = (test_bit(i, hvminfo->vcpu_online) ? 
ACPI_LOCAL_APIC_ENABLED : 0); lapic++; @@ -236,7 +236,7 @@ static struct acpi_20_srat *construct_srat(struct acpi_ctxt *ctxt, processor->type = ACPI_PROCESSOR_AFFINITY; processor->length = sizeof(*processor); processor->domain = config->numa.vcpu_to_vnode[i]; - processor->apic_id = config->lapic_id(i); + processor->apic_id = config->cpu_to_apicid[i]; processor->flags = ACPI_LOCAL_APIC_AFFIN_ENABLED; processor++; } diff --git a/tools/libacpi/libacpi.h b/tools/libacpi/libacpi.h index deda39e5dbc4..e8f603ee18ee 100644 --- a/tools/libacpi/libacpi.h +++ b/tools/libacpi/libacpi.h @@ -84,7 +84,7 @@ struct acpi_config { unsigned long rsdp; /* x86-specific parameters */ - uint32_t (*lapic_id)(unsigned cpu); + const uint32_t *cpu_to_apicid; /* LUT mapping cpu id to (x2)APIC ID */ uint32_t lapic_base_address; uint32_t ioapic_base_address; uint16_t pci_isa_irq_mask; diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c index 94fef374014e..5f4f6830e850 100644 --- a/tools/libs/light/libxl_dom.c +++ b/tools/libs/light/libxl_dom.c @@ -1082,6 +1082,11 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid, dom->container_type = XC_DOM_HVM_CONTAINER; +#if defined(__i386__) || defined(__x86_64__) + for (unsigned int i = 0; i < info->max_vcpus; i++) + dom->cpu_to_apicid[i] = 2 * i; /* TODO: Replace by topo calculation */ +#endif + /* The params from the configuration file are in Mb, which are then * multiplied by 1 Kb. This was then divided off when calling * the old xc_hvm_build_target_mem() which then turned them to bytes. diff --git a/tools/libs/light/libxl_x86_acpi.c b/tools/libs/light/libxl_x86_acpi.c index 5cf261bd6794..585d4c8755cb 100644 --- a/tools/libs/light/libxl_x86_acpi.c +++ b/tools/libs/light/libxl_x86_acpi.c @@ -75,11 +75,6 @@ static void acpi_mem_free(struct acpi_ctxt *ctxt, { } -static uint32_t acpi_lapic_id(unsigned cpu) -{ - return cpu * 2; -} - static int init_acpi_config(libxl__gc *gc, struct xc_dom_image *dom, const libxl_domain_build_info *b_info, @@ -144,7 +139,7 @@ static int init_acpi_config(libxl__gc *gc, config->hvminfo = hvminfo; config->lapic_base_address = LAPIC_BASE_ADDRESS; - config->lapic_id = acpi_lapic_id; + config->cpu_to_apicid = dom->cpu_to_apicid; config->acpi_revision = 5; rc = 0; From patchwork Mon Oct 21 15:45:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13844342 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B31A6D15DB3 for ; Mon, 21 Oct 2024 15:46:37 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.823740.1237837 (Exim 4.92) (envelope-from ) id 1t2uc5-0008TT-9G; Mon, 21 Oct 2024 15:46:33 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 823740.1237837; Mon, 21 Oct 2024 15:46:33 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc4-0008RY-U5; Mon, 21 Oct 2024 15:46:32 +0000 Received: by outflank-mailman (input) for mailman id 823740; Mon, 21 Oct 2024 15:46:31 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] 
From: Alejandro Vallejo
To: xen-devel@lists.xenproject.org
Cc: Alejandro Vallejo, Anthony PERARD, Juergen Gross
Subject: [PATCH v7 06/10] tools/libguest: Always set vCPU context in vcpu_hvm()
Date: Mon, 21 Oct 2024 16:45:56 +0100
Message-ID: <20241021154600.11745-7-alejandro.vallejo@cloud.com>
In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com>
References: <20241021154600.11745-1-alejandro.vallejo@cloud.com>

Currently used by PVH to set MTRR, will be used by a later patch to set
APIC state. Unconditionally send the hypercall, and gate overriding the
MTRR so it remains functionally equivalent.

While at it, add a missing "goto out" to what was the error condition in
the loop.

In principle this patch shouldn't affect functionality.
An extra record (the MTRR) is sent to the hypervisor per vCPU on HVM, but these records are identical to those retrieved in the first place so there's no expected functional change. Signed-off-by: Alejandro Vallejo --- v7: * Unchanged --- tools/libs/guest/xg_dom_x86.c | 84 ++++++++++++++++++----------------- 1 file changed, 44 insertions(+), 40 deletions(-) diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c index cba01384ae75..c98229317db7 100644 --- a/tools/libs/guest/xg_dom_x86.c +++ b/tools/libs/guest/xg_dom_x86.c @@ -989,6 +989,7 @@ const static void *hvm_get_save_record(const void *ctx, unsigned int type, static int vcpu_hvm(struct xc_dom_image *dom) { + /* Initialises the BSP */ struct { struct hvm_save_descriptor header_d; HVM_SAVE_TYPE(HEADER) header; @@ -997,6 +998,18 @@ static int vcpu_hvm(struct xc_dom_image *dom) struct hvm_save_descriptor end_d; HVM_SAVE_TYPE(END) end; } bsp_ctx; + /* Initialises APICs and MTRRs of every vCPU */ + struct { + struct hvm_save_descriptor header_d; + HVM_SAVE_TYPE(HEADER) header; + struct hvm_save_descriptor mtrr_d; + HVM_SAVE_TYPE(MTRR) mtrr; + struct hvm_save_descriptor end_d; + HVM_SAVE_TYPE(END) end; + } vcpu_ctx; + /* Context from full_ctx */ + const HVM_SAVE_TYPE(MTRR) *mtrr_record; + /* Raw context as taken from Xen */ uint8_t *full_ctx = NULL; int rc; @@ -1083,51 +1096,42 @@ static int vcpu_hvm(struct xc_dom_image *dom) bsp_ctx.end_d.instance = 0; bsp_ctx.end_d.length = HVM_SAVE_LENGTH(END); - /* TODO: maybe this should be a firmware option instead? */ - if ( !dom->device_model ) + /* TODO: maybe setting MTRRs should be a firmware option instead? */ + mtrr_record = hvm_get_save_record(full_ctx, HVM_SAVE_CODE(MTRR), 0); + + if ( !mtrr_record) { - struct { - struct hvm_save_descriptor header_d; - HVM_SAVE_TYPE(HEADER) header; - struct hvm_save_descriptor mtrr_d; - HVM_SAVE_TYPE(MTRR) mtrr; - struct hvm_save_descriptor end_d; - HVM_SAVE_TYPE(END) end; - } mtrr = { - .header_d = bsp_ctx.header_d, - .header = bsp_ctx.header, - .mtrr_d.typecode = HVM_SAVE_CODE(MTRR), - .mtrr_d.length = HVM_SAVE_LENGTH(MTRR), - .end_d = bsp_ctx.end_d, - .end = bsp_ctx.end, - }; - const HVM_SAVE_TYPE(MTRR) *mtrr_record = - hvm_get_save_record(full_ctx, HVM_SAVE_CODE(MTRR), 0); - unsigned int i; - - if ( !mtrr_record ) - { - xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, - "%s: unable to get MTRR save record", __func__); - goto out; - } + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, + "%s: unable to get MTRR save record", __func__); + goto out; + } - memcpy(&mtrr.mtrr, mtrr_record, sizeof(mtrr.mtrr)); + vcpu_ctx.header_d = bsp_ctx.header_d; + vcpu_ctx.header = bsp_ctx.header; + vcpu_ctx.mtrr_d.typecode = HVM_SAVE_CODE(MTRR); + vcpu_ctx.mtrr_d.length = HVM_SAVE_LENGTH(MTRR); + vcpu_ctx.mtrr = *mtrr_record; + vcpu_ctx.end_d = bsp_ctx.end_d; + vcpu_ctx.end = bsp_ctx.end; - /* - * Enable MTRR, set default type to WB. - * TODO: add MMIO areas as UC when passthrough is supported. - */ - mtrr.mtrr.msr_mtrr_def_type = MTRR_TYPE_WRBACK | MTRR_DEF_TYPE_ENABLE; + /* + * Enable MTRR, set default type to WB. 
+ * TODO: add MMIO areas as UC when passthrough is supported in PVH + */ + if ( !dom->device_model ) + vcpu_ctx.mtrr.msr_mtrr_def_type = MTRR_TYPE_WRBACK | MTRR_DEF_TYPE_ENABLE; + + for ( unsigned int i = 0; i < dom->max_vcpus; i++ ) + { + vcpu_ctx.mtrr_d.instance = i; - for ( i = 0; i < dom->max_vcpus; i++ ) + rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid, + (uint8_t *)&vcpu_ctx, sizeof(vcpu_ctx)); + if ( rc != 0 ) { - mtrr.mtrr_d.instance = i; - rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid, - (uint8_t *)&mtrr, sizeof(mtrr)); - if ( rc != 0 ) - xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, - "%s: SETHVMCONTEXT failed (rc=%d)", __func__, rc); + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, + "%s: SETHVMCONTEXT failed (rc=%d)", __func__, rc); + goto out; } } From patchwork Mon Oct 21 15:45:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13844343 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 33A94D15DB8 for ; Mon, 21 Oct 2024 15:46:38 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.823741.1237842 (Exim 4.92) (envelope-from ) id 1t2uc5-00009q-Ux; Mon, 21 Oct 2024 15:46:33 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 823741.1237842; Mon, 21 Oct 2024 15:46:33 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc5-00008K-Hb; Mon, 21 Oct 2024 15:46:33 +0000 Received: by outflank-mailman (input) for mailman id 823741; Mon, 21 Oct 2024 15:46:32 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc4-0006wR-BN for xen-devel@lists.xenproject.org; Mon, 21 Oct 2024 15:46:32 +0000 Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com [2a00:1450:4864:20::12c]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id a407dd8c-8fc3-11ef-99a3-01e77a169b0f; Mon, 21 Oct 2024 17:46:30 +0200 (CEST) Received: by mail-lf1-x12c.google.com with SMTP id 2adb3069b0e04-539f2b95775so5435340e87.1 for ; Mon, 21 Oct 2024 08:46:30 -0700 (PDT) Received: from localhost.localdomain (0545937c.skybroadband.com. 
From: Alejandro Vallejo
To: xen-devel@lists.xenproject.org
Cc: Alejandro Vallejo, Jan Beulich, Andrew Cooper, Roger Pau Monné, Anthony PERARD
Subject: [PATCH v7 07/10] xen/lib: Add topology generator for x86
Date: Mon, 21 Oct 2024 16:45:57 +0100
Message-ID: <20241021154600.11745-8-alejandro.vallejo@cloud.com>
In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com>
References: <20241021154600.11745-1-alejandro.vallejo@cloud.com>

Add a helper to populate topology leaves in the cpu policy from
threads/core and cores/package counts. It's unit-tested in
test-cpu-policy.c, but it's not connected to the rest of the code yet.

Intel's cache leaves (CPUID[4]) have limited width for core counts, so (in
the absence of real world data for how it might behave) this implementation
takes the view that those counts should clip to their maximum values on
overflow, just like lppp and NC.

Add the ASSERT() macro to xen/lib/x86/private.h, as it was missing.

Signed-off-by: Alejandro Vallejo
---
v7:
* MAX/MIN -> max/min; adding U suffixes to literals for type-matching and
  uppercases for MISRA compliance.
* Clip core counts in cache leaves to their maximum values.
* Remove unified cache conditional. Less code, and less likely for the
  threads_per_cache field to clip.
* Add extra check to ensure threads_per_pkg fits in 16 bits (which is the
  space it has in leaf 0xb).
* Add extra check to detect overflow in threads_per_pkg calculation. * Reworked the comment for the topo generator, expressing more clearly what are inputs and what are outputs. --- tools/tests/cpu-policy/test-cpu-policy.c | 133 +++++++++++++++++++++++ xen/include/xen/lib/x86/cpu-policy.h | 16 +++ xen/lib/x86/policy.c | 93 ++++++++++++++++ xen/lib/x86/private.h | 4 + 4 files changed, 246 insertions(+) diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c index 301df2c00285..849d7cebaa7c 100644 --- a/tools/tests/cpu-policy/test-cpu-policy.c +++ b/tools/tests/cpu-policy/test-cpu-policy.c @@ -650,6 +650,137 @@ static void test_is_compatible_failure(void) } } +static void test_topo_from_parts(void) +{ + static const struct test { + unsigned int threads_per_core; + unsigned int cores_per_pkg; + struct cpu_policy policy; + } tests[] = { + { + .threads_per_core = 3, .cores_per_pkg = 1, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, }, + { .nr_logical = 1, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 1, .cores_per_pkg = 3, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, }, + { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 7, .cores_per_pkg = 5, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, }, + { .nr_logical = 5, .level = 1, .type = 2, .id_shift = 6, }, + }, + }, + }, + { + .threads_per_core = 2, .cores_per_pkg = 128, + .policy = { + .x86_vendor = X86_VENDOR_AMD, + .topo.subleaf = { + { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, }, + { .nr_logical = 128, .level = 1, .type = 2, + .id_shift = 8, }, + }, + }, + }, + { + .threads_per_core = 3, .cores_per_pkg = 1, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, }, + { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 1, .cores_per_pkg = 3, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, }, + { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, }, + }, + }, + }, + { + .threads_per_core = 7, .cores_per_pkg = 5, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, }, + { .nr_logical = 35, .level = 1, .type = 2, .id_shift = 6, }, + }, + }, + }, + { + .threads_per_core = 2, .cores_per_pkg = 128, + .policy = { + .x86_vendor = X86_VENDOR_INTEL, + .topo.subleaf = { + { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, }, + { .nr_logical = 256, .level = 1, .type = 2, + .id_shift = 8, }, + }, + }, + }, + }; + + printf("Testing topology synthesis from parts:\n"); + + for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i ) + { + const struct test *t = &tests[i]; + struct cpu_policy actual = { .x86_vendor = t->policy.x86_vendor }; + int rc = x86_topo_from_parts(&actual, t->threads_per_core, + t->cores_per_pkg); + + if ( rc || memcmp(&actual.topo, &t->policy.topo, sizeof(actual.topo)) ) + { +#define TOPO(n, f) t->policy.topo.subleaf[(n)].f, actual.topo.subleaf[(n)].f + fail("FAIL[%d] - '%s %u t/c, %u c/p'\n", + rc, + x86_cpuid_vendor_to_str(t->policy.x86_vendor), + t->threads_per_core, t->cores_per_pkg); + printf(" 
subleaf=%u expected_n=%u actual_n=%u\n" + " expected_lvl=%u actual_lvl=%u\n" + " expected_type=%u actual_type=%u\n" + " expected_shift=%u actual_shift=%u\n", + 0, + TOPO(0, nr_logical), + TOPO(0, level), + TOPO(0, type), + TOPO(0, id_shift)); + + printf(" subleaf=%u expected_n=%u actual_n=%u\n" + " expected_lvl=%u actual_lvl=%u\n" + " expected_type=%u actual_type=%u\n" + " expected_shift=%u actual_shift=%u\n", + 1, + TOPO(1, nr_logical), + TOPO(1, level), + TOPO(1, type), + TOPO(1, id_shift)); +#undef TOPO + } + } +} + int main(int argc, char **argv) { printf("CPU Policy unit tests\n"); @@ -667,6 +798,8 @@ int main(int argc, char **argv) test_is_compatible_success(); test_is_compatible_failure(); + test_topo_from_parts(); + if ( nr_failures ) printf("Done: %u failures\n", nr_failures); else diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h index f43e1a3b21e9..67d16fda933d 100644 --- a/xen/include/xen/lib/x86/cpu-policy.h +++ b/xen/include/xen/lib/x86/cpu-policy.h @@ -542,6 +542,22 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host, const struct cpu_policy *guest, struct cpu_policy_errors *err); +/** + * Synthesise topology information in `p` given high-level constraints + * + * Topology is expressed in various fields accross several leaves, some of + * which are vendor-specific. This function populates such fields given + * threads/core, cores/package and other existing policy fields. + * + * @param p CPU policy of the domain. + * @param threads_per_core threads/core. Doesn't need to be a power of 2. + * @param cores_per_package cores/package. Doesn't need to be a power of 2. + * @return 0 on success; -errno on failure + */ +int x86_topo_from_parts(struct cpu_policy *p, + unsigned int threads_per_core, + unsigned int cores_per_pkg); + #endif /* !XEN_LIB_X86_POLICIES_H */ /* diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c index f033d22785be..5ff89022e901 100644 --- a/xen/lib/x86/policy.c +++ b/xen/lib/x86/policy.c @@ -2,6 +2,99 @@ #include +static unsigned int order(unsigned int n) +{ + ASSERT(n); /* clz(0) is UB */ + + return 8 * sizeof(n) - __builtin_clz(n); +} + +int x86_topo_from_parts(struct cpu_policy *p, + unsigned int threads_per_core, + unsigned int cores_per_pkg) +{ + unsigned int threads_per_pkg = threads_per_core * cores_per_pkg; + unsigned int apic_id_size; + + /* + * threads_per_pkg must fit in 16bits to avoid overflowing + * nr_logical in leaf 0xb on Intel systems. + */ + if ( !p || !threads_per_core || !cores_per_pkg || + threads_per_pkg > UINT16_MAX || + threads_per_pkg / cores_per_pkg != threads_per_core ) + return -EINVAL; + + p->basic.max_leaf = max(0xBU, p->basic.max_leaf); + + memset(p->topo.raw, 0, sizeof(p->topo.raw)); + + /* thread level */ + p->topo.subleaf[0].nr_logical = threads_per_core; + p->topo.subleaf[0].id_shift = 0; + p->topo.subleaf[0].level = 0; + p->topo.subleaf[0].type = 1; + if ( threads_per_core > 1 ) + p->topo.subleaf[0].id_shift = order(threads_per_core - 1); + + /* core level */ + p->topo.subleaf[1].nr_logical = cores_per_pkg; + if ( p->x86_vendor == X86_VENDOR_INTEL ) + p->topo.subleaf[1].nr_logical = threads_per_pkg; + p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift; + p->topo.subleaf[1].level = 1; + p->topo.subleaf[1].type = 2; + if ( cores_per_pkg > 1 ) + p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1); + + apic_id_size = p->topo.subleaf[1].id_shift; + + /* + * Contrary to what the name might seem to imply. 
HTT is an enabler for + * SMP and there's no harm in setting it even with a single vCPU. + */ + p->basic.htt = true; + p->basic.lppp = min(0xFFU, threads_per_pkg); + + switch ( p->x86_vendor ) + { + case X86_VENDOR_INTEL: { + struct cpuid_cache_leaf *sl = p->cache.subleaf; + + for ( size_t i = 0; sl->type && + i < ARRAY_SIZE(p->cache.raw); i++, sl++ ) + { + /* Clip these values to their max if they overflow */ + sl->cores_per_package = min(63U, cores_per_pkg - 1); + sl->threads_per_cache = min(4095U, threads_per_core - 1); + } + break; + } + + case X86_VENDOR_AMD: + case X86_VENDOR_HYGON: + /* Expose p->basic.lppp */ + p->extd.cmp_legacy = true; + + /* Clip NC to the maximum value it can hold */ + p->extd.nc = min(0xFFU, threads_per_pkg - 1); + + /* TODO: Expose leaf e1E */ + p->extd.topoext = false; + + /* + * Clip APIC ID to 8 bits, as that's what high core-count machines do. + * + * That's what AMD EPYC 9654 does with >256 CPUs. + */ + p->extd.apic_id_size = min(8U, apic_id_size); + + break; + } + + return 0; +} + int x86_cpu_policies_are_compatible(const struct cpu_policy *host, const struct cpu_policy *guest, struct cpu_policy_errors *err) diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h index 60bb82a400b7..2ec9dbee33c2 100644 --- a/xen/lib/x86/private.h +++ b/xen/lib/x86/private.h @@ -4,6 +4,7 @@ #ifdef __XEN__ #include +#include #include #include #include @@ -17,6 +18,7 @@ #else +#include #include #include #include @@ -28,6 +30,8 @@ #include +#define ASSERT(x) assert(x) + static inline bool test_bit(unsigned int bit, const void *vaddr) { const char *addr = vaddr; From patchwork Mon Oct 21 15:45:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13844344 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E9887D15D99 for ; Mon, 21 Oct 2024 15:46:39 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.823744.1237851 (Exim 4.92) (envelope-from ) id 1t2uc7-0000RT-3W; Mon, 21 Oct 2024 15:46:35 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 823744.1237851; Mon, 21 Oct 2024 15:46:35 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc6-0000P1-JC; Mon, 21 Oct 2024 15:46:34 +0000 Received: by outflank-mailman (input) for mailman id 823744; Mon, 21 Oct 2024 15:46:33 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc5-0006wR-BY for xen-devel@lists.xenproject.org; Mon, 21 Oct 2024 15:46:33 +0000 Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com [2a00:1450:4864:20::62b]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id a477a862-8fc3-11ef-99a3-01e77a169b0f; Mon, 21 Oct 2024 17:46:31 +0200 (CEST) Received: by mail-ej1-x62b.google.com with SMTP id a640c23a62f3a-a9a68480164so419718166b.3 for ; Mon, 21 Oct 2024 08:46:31 -0700 (PDT) Received: from localhost.localdomain (0545937c.skybroadband.com. 
[5.69.147.124]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a9a91370fe5sm215688966b.112.2024.10.21.08.46.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 21 Oct 2024 08:46:30 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: a477a862-8fc3-11ef-99a3-01e77a169b0f DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1729525590; x=1730130390; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=tyMILG5xYLAUWZBjTYch+adQ7UE+pZOJc6bGIxa6c/k=; b=gbIqMD/LvondHsiZusAleCtxCJ5+HF2niZpv+nnedf0K91XU4zShgsemfSsL4j2W+o QZnh2x7hbqEpAJ/O0TtsYia/p3SIWYCiaFBc4DKC5B/NuA5hfDT0fzTSkO0UTfM82e6R mpqOio20ZhhFpQVByvMQShUc8pAHb3a25Yf2s= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1729525590; x=1730130390; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=tyMILG5xYLAUWZBjTYch+adQ7UE+pZOJc6bGIxa6c/k=; b=j+VSmJqo1sKSq5Zdv6va34n71gg5tMVwrZOowZRRuCixfWcqxNuWTxQsMixg4Gs2yt c0eZV13/KnTTUqIwjunB9wNYOcqdhucUsP8oV9X06zgzdgTvpZtuxS9pL3z/kezRx/0d Y11+9FcahEHUKdss58nKyc7M8YlCrZsyn7MKwaD+k1czieVwfOn4iLv8oMy1gSh9qi2I 01+XJTBn5SBdXuXLnpw+dWUHiQWnoEdYECASz6An1E/5h2UCkvWG09Kd0m2/VPoqXTXB c3WkAPebUeUYUfE+hwP1AEj9CLLZFtTEKDp3ci29hvxPNmo/I8hq/KmOAJR8uNTtdPMm 2zOQ== X-Gm-Message-State: AOJu0YwBxKLpuv5LOgwSAw3BeNEnMCrwxf1IRMxcNhSAjy83how1KzsR hMIQ7DGF0wa2Q1XUroH/eDBvKRDpE/IgPIsS7K4xHCmIfDclrq7naC3MTZI5Ums2ns66eB3alZK W X-Google-Smtp-Source: AGHT+IEHO+BMoGb02KNxKaz4YLWXtEMPsaSKlJNp8iV78Jnc1cuBBSu1Fy+E5dU4JnuL1X/1iUvnbg== X-Received: by 2002:a17:907:9629:b0:a99:43b2:417e with SMTP id a640c23a62f3a-a9a69ce1072mr1361128466b.62.1729525590393; Mon, 21 Oct 2024 08:46:30 -0700 (PDT) From: Alejandro Vallejo To: xen-devel@lists.xenproject.org Cc: Alejandro Vallejo , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= , Anthony PERARD Subject: [PATCH v7 08/10] xen/x86: Derive topologically correct x2APIC IDs from the policy Date: Mon, 21 Oct 2024 16:45:58 +0100 Message-ID: <20241021154600.11745-9-alejandro.vallejo@cloud.com> X-Mailer: git-send-email 2.47.0 In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com> References: <20241021154600.11745-1-alejandro.vallejo@cloud.com> MIME-Version: 1.0 Implements the helper for mapping vcpu_id to x2apic_id given a valid topology in a policy. The algo is written with the intention of extending it to leaves 0x1f and extended 0x26 in the future. The helper returns the legacy mapping when leaf 0xb is not implemented (as is the case at the moment). 
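As a rough illustration of the mapping (a standalone sketch, not the helper itself): each topology level peels off `id % nr_parts` and places it at that level's id_shift, carrying the quotient up to the next level. For a hypothetical layout of 3 threads/core and 8 cores/package (id_shifts of 2 and 5, matching the unit tests added below):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustration only: fixed 3 threads/core, 8 cores/package topology.
     * The thread index occupies bits [1:0] (thread-level id_shift = 2),
     * the core index bits [4:2] (core-level id_shift = 5), and whatever
     * remains is the package number. */
    static uint32_t example_x2apic_id(uint32_t vcpu_id)
    {
        uint32_t thread = vcpu_id % 3;        /* position within its core */
        uint32_t core   = (vcpu_id / 3) % 8;  /* position within its package */
        uint32_t pkg    = vcpu_id / 24;       /* 24 threads per package */

        return thread | (core << 2) | (pkg << 5);
    }

    int main(void)
    {
        /* vCPU 35 is package 1, core 3, thread 2 => 0x2e */
        printf("%#x\n", example_x2apic_id(35));
        return 0;
    }

Running this against the values used in the unit tests below reproduces the expected IDs, e.g. vCPU 24 maps to 1 << 5, the first thread of the second package.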
Signed-off-by: Alejandro Vallejo --- v7: * Changes to commit message --- tools/tests/cpu-policy/test-cpu-policy.c | 68 +++++++++++++++++++++ xen/include/xen/lib/x86/cpu-policy.h | 11 ++++ xen/lib/x86/policy.c | 76 ++++++++++++++++++++++++ 3 files changed, 155 insertions(+) diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c index 849d7cebaa7c..e5f9b8f7ee39 100644 --- a/tools/tests/cpu-policy/test-cpu-policy.c +++ b/tools/tests/cpu-policy/test-cpu-policy.c @@ -781,6 +781,73 @@ static void test_topo_from_parts(void) } } +static void test_x2apic_id_from_vcpu_id_success(void) +{ + static const struct test { + unsigned int vcpu_id; + unsigned int threads_per_core; + unsigned int cores_per_pkg; + uint32_t x2apic_id; + uint8_t x86_vendor; + } tests[] = { + { + .vcpu_id = 3, .threads_per_core = 3, .cores_per_pkg = 8, + .x2apic_id = 1 << 2, + }, + { + .vcpu_id = 6, .threads_per_core = 3, .cores_per_pkg = 8, + .x2apic_id = 2 << 2, + }, + { + .vcpu_id = 24, .threads_per_core = 3, .cores_per_pkg = 8, + .x2apic_id = 1 << 5, + }, + { + .vcpu_id = 35, .threads_per_core = 3, .cores_per_pkg = 8, + .x2apic_id = (35 % 3) | (((35 / 3) % 8) << 2) | ((35 / 24) << 5), + }, + { + .vcpu_id = 96, .threads_per_core = 7, .cores_per_pkg = 3, + .x2apic_id = (96 % 7) | (((96 / 7) % 3) << 3) | ((96 / 21) << 5), + }, + }; + + const uint8_t vendors[] = { + X86_VENDOR_INTEL, + X86_VENDOR_AMD, + X86_VENDOR_CENTAUR, + X86_VENDOR_SHANGHAI, + X86_VENDOR_HYGON, + }; + + printf("Testing x2apic id from vcpu id success:\n"); + + /* Perform the test run on every vendor we know about */ + for ( size_t i = 0; i < ARRAY_SIZE(vendors); ++i ) + { + for ( size_t j = 0; j < ARRAY_SIZE(tests); ++j ) + { + struct cpu_policy policy = { .x86_vendor = vendors[i] }; + const struct test *t = &tests[j]; + uint32_t x2apic_id; + int rc = x86_topo_from_parts(&policy, t->threads_per_core, + t->cores_per_pkg); + + if ( rc ) { + fail("FAIL[%d] - 'x86_topo_from_parts() failed", rc); + continue; + } + + x2apic_id = x86_x2apic_id_from_vcpu_id(&policy, t->vcpu_id); + if ( x2apic_id != t->x2apic_id ) + fail("FAIL - '%s cpu%u %u t/c %u c/p'. bad x2apic_id: expected=%u actual=%u\n", + x86_cpuid_vendor_to_str(policy.x86_vendor), + t->vcpu_id, t->threads_per_core, t->cores_per_pkg, + t->x2apic_id, x2apic_id); + } + } +} + int main(int argc, char **argv) { printf("CPU Policy unit tests\n"); @@ -799,6 +866,7 @@ int main(int argc, char **argv) test_is_compatible_failure(); test_topo_from_parts(); + test_x2apic_id_from_vcpu_id_success(); if ( nr_failures ) printf("Done: %u failures\n", nr_failures); diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h index 67d16fda933d..61d5cf3c7f12 100644 --- a/xen/include/xen/lib/x86/cpu-policy.h +++ b/xen/include/xen/lib/x86/cpu-policy.h @@ -542,6 +542,17 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host, const struct cpu_policy *guest, struct cpu_policy_errors *err); +/** + * Calculates the x2APIC ID of a vCPU given a CPU policy + * + * If the policy lacks leaf 0xb falls back to legacy mapping of apic_id=cpu*2 + * + * @param p CPU policy of the domain. + * @param id vCPU ID of the vCPU. + * @returns x2APIC ID of the vCPU. 
+ */ +uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id); + /** * Synthesise topology information in `p` given high-level constraints * diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c index 5ff89022e901..427a90f907a2 100644 --- a/xen/lib/x86/policy.c +++ b/xen/lib/x86/policy.c @@ -2,6 +2,82 @@ #include +static uint32_t parts_per_higher_scoped_level(const struct cpu_policy *p, + size_t lvl) +{ + /* + * `nr_logical` reported by Intel is the number of THREADS contained in + * the next topological scope. For example, assuming a system with 2 + * threads/core and 3 cores/module in a fully symmetric topology, + * `nr_logical` at the core level will report 6. Because it's reporting + * the number of threads in a module. + * + * On AMD/Hygon, nr_logical is already normalized by the higher scoped + * level (cores/complex, etc) so we can return it as-is. + */ + if ( p->x86_vendor != X86_VENDOR_INTEL || !lvl ) + return p->topo.subleaf[lvl].nr_logical; + + return p->topo.subleaf[lvl].nr_logical / + p->topo.subleaf[lvl - 1].nr_logical; +} + +uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id) +{ + uint32_t shift = 0, x2apic_id = 0; + + /* In the absence of topology leaves, fallback to traditional mapping */ + if ( !p->topo.subleaf[0].type ) + return id * 2; + + /* + * `id` means different things at different points of the algo + * + * At lvl=0: global thread_id (same as vcpu_id) + * At lvl=1: global core_id + * At lvl=2: global socket_id (actually complex_id in AMD, module_id + * in Intel, but the name is inconsequential) + * + * +--+ + * ____ |#0| ______ <= 1 socket + * / +--+ \+--+ + * __#0__ __|#1|__ <= 2 cores/socket + * / | \ +--+/ +-|+ \ + * #0 #1 #2 |#3| #4 #5 <= 3 threads/core + * +--+ + * + * ... and so on. Global in this context means that it's a unique + * identifier for the whole topology, and not relative to the level + * it's in. For example, in the diagram shown above, we're looking at + * thread #3 in the global sense, though it's #0 within its core. + * + * Note that dividing a global thread_id by the number of threads per + * core returns the global core id that contains it. e.g: 0, 1 or 2 + * divided by 3 returns core_id=0. 3, 4 or 5 divided by 3 returns core + * 1, and so on. An analogous argument holds for higher levels. This is + * the property we exploit to derive x2apic_id from vcpu_id. + * + * NOTE: `topo` is currently derived from leaf 0xb, which is bound to two + * levels, but once we track leaves 0x1f (or extended 0x26) there will be a + * few more. The algorithm is written to cope with that case. 
+ */ + for ( uint32_t i = 0; i < ARRAY_SIZE(p->topo.raw); i++ ) + { + uint32_t nr_parts; + + if ( !p->topo.subleaf[i].type ) + /* sentinel subleaf */ + break; + + nr_parts = parts_per_higher_scoped_level(p, i); + x2apic_id |= (id % nr_parts) << shift; + id /= nr_parts; + shift = p->topo.subleaf[i].id_shift; + } + + return (id << shift) | x2apic_id; +} + static unsigned int order(unsigned int n) { ASSERT(n); /* clz(0) is UB */ From patchwork Mon Oct 21 15:45:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13844345 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 915AED15DB4 for ; Mon, 21 Oct 2024 15:46:41 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.823745.1237869 (Exim 4.92) (envelope-from ) id 1t2uc8-0000w5-VF; Mon, 21 Oct 2024 15:46:36 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 823745.1237869; Mon, 21 Oct 2024 15:46:36 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc8-0000sr-EJ; Mon, 21 Oct 2024 15:46:36 +0000 Received: by outflank-mailman (input) for mailman id 823745; Mon, 21 Oct 2024 15:46:34 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc6-0006wR-Bm for xen-devel@lists.xenproject.org; Mon, 21 Oct 2024 15:46:34 +0000 Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com [2a00:1450:4864:20::634]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id a4dd602d-8fc3-11ef-99a3-01e77a169b0f; Mon, 21 Oct 2024 17:46:32 +0200 (CEST) Received: by mail-ej1-x634.google.com with SMTP id a640c23a62f3a-a9a6acac4c3so513927966b.0 for ; Mon, 21 Oct 2024 08:46:32 -0700 (PDT) Received: from localhost.localdomain (0545937c.skybroadband.com. 
[5.69.147.124]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a9a91370fe5sm215688966b.112.2024.10.21.08.46.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 21 Oct 2024 08:46:30 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: a4dd602d-8fc3-11ef-99a3-01e77a169b0f DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1729525591; x=1730130391; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1Ah3/FAbVDfMti3UAa0nn+9bbUxazwt70g5vkEVtpiQ=; b=NWUY+epNoew5/1ldTepo39PKCN1U7eBVUnS36w5owKxRhJqxVsbFaIscMzvDPch6fa v08E/2ilPKswXK7A27b4q1AN6km/9GfvzinaWZf5PDrxO1HwIGR4MLMhUuxLsfMEpPhI oTrgaJqMkPBtZIr9N1sUCa0bhwZue4kgdfPLk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1729525591; x=1730130391; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=1Ah3/FAbVDfMti3UAa0nn+9bbUxazwt70g5vkEVtpiQ=; b=VNfWA7UugFsYGu/tURwu4l8zOBERnp9RIqaSK6zNOIpS0g7lCnpth6s4vzawqAuH8E iITBjbUjQ532aPAOxt5ka9d3dB5hOZqmo1VBV1l+jlw7gszuIEou+jG7MQT+aIhKqxRY 30+Jmp457atv2puJ3Li2nxXmEVu55MeuEJ/Cxtv/nsLZ+a1S7WBOOU64yuXZ9prCnY9S D4BiRTEnKGulB3yPR374iFUA/stZvbb8e3xD9hFo7WviqufMgymHbEUdsMdl1x1x2Nxq 65GQxJwvk0roN0tCk4mEJLq88HHz79aO2IsL7CnTnKYptixumqBdrTjGvLkvJmLfVjcH gp/w== X-Gm-Message-State: AOJu0Yw87dQ++8Zch4n0JvgFOP4WcxDrUvdfdZeA3W/WgSnPBgd3aDIZ zEpCCUmQb0ccLxXok4Ne+dlwgOs5G9TIrc6/LRp1NbkearX9xW8mTYSZ7bdQKwNlY+pYBNU+hIe Q X-Google-Smtp-Source: AGHT+IHrrel4rYJieUp5DclhkwRc/OzjjzSiMA6PoQqA3Un6eQfLV09C+ETms8nfNVWbTtbGfL3cog== X-Received: by 2002:a17:907:e89:b0:a99:61f7:8413 with SMTP id a640c23a62f3a-a9a69a75abdmr1097909866b.23.1729525591042; Mon, 21 Oct 2024 08:46:31 -0700 (PDT) From: Alejandro Vallejo To: xen-devel@lists.xenproject.org Cc: Alejandro Vallejo , Anthony PERARD , Juergen Gross Subject: [PATCH v7 09/10] tools/libguest: Set distinct x2APIC IDs for each vCPU Date: Mon, 21 Oct 2024 16:45:59 +0100 Message-ID: <20241021154600.11745-10-alejandro.vallejo@cloud.com> X-Mailer: git-send-email 2.47.0 In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com> References: <20241021154600.11745-1-alejandro.vallejo@cloud.com> MIME-Version: 1.0 Have toolstack populate the new x2APIC ID in the LAPIC save record with the proper IDs intended for each vCPU. 
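In outline (simplified from the diff below; error handling, loop bounds and the surrounding context are elided or illustrative), the per-vCPU loop copies the LAPIC record Xen saved, overrides its x2APIC ID with the precomputed one, and writes the context back:

    /* Sketch of the per-vCPU handling added below; not the literal code. */
    for ( unsigned int i = 0; i <= max_vcpu_id; i++ )
    {
        const HVM_SAVE_TYPE(LAPIC) *lapic_record =
            hvm_get_save_record(full_ctx, HVM_SAVE_CODE(LAPIC), i);

        if ( !lapic_record )
            goto out;                       /* every vCPU must have a record */

        vcpu_ctx.lapic = *lapic_record;                   /* keep Xen's state */
        vcpu_ctx.lapic.x2apic_id = dom->cpu_to_apicid[i]; /* override the ID  */
        vcpu_ctx.lapic_d.instance = i;

        /* ... xc_domain_hvm_setcontext() then pushes vcpu_ctx back ... */
    }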
Signed-off-by: Alejandro Vallejo --- v7: * Unchanged --- tools/libs/guest/xg_dom_x86.c | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c index c98229317db7..38486140ed15 100644 --- a/tools/libs/guest/xg_dom_x86.c +++ b/tools/libs/guest/xg_dom_x86.c @@ -1004,11 +1004,14 @@ static int vcpu_hvm(struct xc_dom_image *dom) HVM_SAVE_TYPE(HEADER) header; struct hvm_save_descriptor mtrr_d; HVM_SAVE_TYPE(MTRR) mtrr; + struct hvm_save_descriptor lapic_d; + HVM_SAVE_TYPE(LAPIC) lapic; struct hvm_save_descriptor end_d; HVM_SAVE_TYPE(END) end; } vcpu_ctx; - /* Context from full_ctx */ + /* Contexts from full_ctx */ const HVM_SAVE_TYPE(MTRR) *mtrr_record; + const HVM_SAVE_TYPE(LAPIC) *lapic_record; /* Raw context as taken from Xen */ uint8_t *full_ctx = NULL; int rc; @@ -1111,6 +1114,8 @@ static int vcpu_hvm(struct xc_dom_image *dom) vcpu_ctx.mtrr_d.typecode = HVM_SAVE_CODE(MTRR); vcpu_ctx.mtrr_d.length = HVM_SAVE_LENGTH(MTRR); vcpu_ctx.mtrr = *mtrr_record; + vcpu_ctx.lapic_d.typecode = HVM_SAVE_CODE(LAPIC); + vcpu_ctx.lapic_d.length = HVM_SAVE_LENGTH(LAPIC); vcpu_ctx.end_d = bsp_ctx.end_d; vcpu_ctx.end = bsp_ctx.end; @@ -1125,6 +1130,18 @@ static int vcpu_hvm(struct xc_dom_image *dom) { vcpu_ctx.mtrr_d.instance = i; + lapic_record = hvm_get_save_record(full_ctx, HVM_SAVE_CODE(LAPIC), i); + if ( !lapic_record ) + { + xc_dom_panic(dom->xch, XC_INTERNAL_ERROR, + "%s: unable to get LAPIC[%d] save record", __func__, i); + goto out; + } + + vcpu_ctx.lapic = *lapic_record; + vcpu_ctx.lapic.x2apic_id = dom->cpu_to_apicid[i]; + vcpu_ctx.lapic_d.instance = i; + rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid, (uint8_t *)&vcpu_ctx, sizeof(vcpu_ctx)); if ( rc != 0 ) From patchwork Mon Oct 21 15:46:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alejandro Vallejo X-Patchwork-Id: 13844346 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9F2ECD15DB5 for ; Mon, 21 Oct 2024 15:46:41 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.823743.1237858 (Exim 4.92) (envelope-from ) id 1t2uc7-0000co-Od; Mon, 21 Oct 2024 15:46:35 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 823743.1237858; Mon, 21 Oct 2024 15:46:35 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc7-0000Yz-A4; Mon, 21 Oct 2024 15:46:35 +0000 Received: by outflank-mailman (input) for mailman id 823743; Mon, 21 Oct 2024 15:46:33 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1t2uc5-00072f-8B for xen-devel@lists.xenproject.org; Mon, 21 Oct 2024 15:46:33 +0000 Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com [2a00:1450:4864:20::129]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id a5497378-8fc3-11ef-a0be-8be0dac302b0; Mon, 21 Oct 2024 17:46:32 +0200 (CEST) Received: by mail-lf1-x129.google.com with SMTP id 2adb3069b0e04-539e1543ab8so7757360e87.2 
for ; Mon, 21 Oct 2024 08:46:32 -0700 (PDT) Received: from localhost.localdomain (0545937c.skybroadband.com. [5.69.147.124]) by smtp.gmail.com with ESMTPSA id a640c23a62f3a-a9a91370fe5sm215688966b.112.2024.10.21.08.46.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 21 Oct 2024 08:46:31 -0700 (PDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: a5497378-8fc3-11ef-a0be-8be0dac302b0 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=cloud.com; s=cloud; t=1729525592; x=1730130392; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=eLoTPwH/b49P11LPnPYXfCXJJX0udy5YrCnU2/Yv3Q8=; b=TZiH5fQzDmBSYxkHYc4E8iKOuLHjb9hgosDXyRrwAO91dS3rn6XJqeleqKdDp9WkQk hW0gXLdcbdx5kAMBBsW8hlAD0e1oYdxtNEqxh0Np0UUHcpRC7QqGy17CFUnyEyMtXjLI KZjE/RcrscVKwuz0Cm7YWyq4ZG6KBA+dI9Mq0= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1729525592; x=1730130392; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=eLoTPwH/b49P11LPnPYXfCXJJX0udy5YrCnU2/Yv3Q8=; b=UY4iIUgrOsebtQf2gPyvW6zQFIMC3D+oJJfMqzlNkOx0VkOAA2fbwlUxHooReGJWzG PQXjL3hMtE/R/yGwJWuGQGlYjH04Uwbu9TsBg30F4S7AVv1qv2U1pGuBEgboTB+6OId7 ufCQv6hgOaNWmaD99WnX59OB6/lJ0Gi3ozog2dZPo6iw59i8lrp60/YlGJEIJQQ+XfAO 4ewAE2Hh5fVO6EBSrCauoroCHT1Ze0gouTWp0xo69p3gK+7mkV3HhbUCiLntYpZUe7za sHAIdJddbiP61EdRQAf9zw2wFtHv2w3/YzqAxQMbNPtqOxLAAOMUh6+NfErR8xgOtyEu +P9Q== X-Gm-Message-State: AOJu0YzDOQrEgNjPP/ErsMb4FyxLnyAHh2fEYHUkajixwtfMq8KsPNKq EGAin/5tpTKkTStObCVSJ217KtO8XCdzia9FRQnF+emc7Cor+67wijqZJDV8hDJjkiMF1cdxcGO 1 X-Google-Smtp-Source: AGHT+IEKI3F754OZTMcPDYuuXAH9Oj+SnjFeFSVHzQIMr0uet8yShlvwWryHEeCKjJKW5m2/CM2Iqg== X-Received: by 2002:a05:6512:1244:b0:536:a7a4:c3d4 with SMTP id 2adb3069b0e04-53a154c9f17mr10083781e87.39.1729525591793; Mon, 21 Oct 2024 08:46:31 -0700 (PDT) From: Alejandro Vallejo To: xen-devel@lists.xenproject.org Cc: Alejandro Vallejo , Anthony PERARD , Juergen Gross , Jan Beulich , Andrew Cooper , =?utf-8?q?Roger_Pau_Monn=C3=A9?= Subject: [PATCH v7 10/10] tools/x86: Synthesise domain topologies Date: Mon, 21 Oct 2024 16:46:00 +0100 Message-ID: <20241021154600.11745-11-alejandro.vallejo@cloud.com> X-Mailer: git-send-email 2.47.0 In-Reply-To: <20241021154600.11745-1-alejandro.vallejo@cloud.com> References: <20241021154600.11745-1-alejandro.vallejo@cloud.com> MIME-Version: 1.0 Expose sensible topologies in leaf 0xb. At the moment it synthesises non-HT systems, in line with the previous code intent. Leaf 0xb in the host policy is no longer zapped and the guest {max,def} policies have their topology leaves zapped instead. The intent is for toolstack to populate them. There's no current use for the topology information in the host policy, but it makes no harm. 
Signed-off-by: Alejandro Vallejo --- v7: * No changes --- tools/include/xenguest.h | 3 +++ tools/libs/guest/xg_cpuid_x86.c | 29 ++++++++++++++++++++++++++++- tools/libs/light/libxl_dom.c | 22 +++++++++++++++++++++- xen/arch/x86/cpu-policy.c | 9 ++++++--- 4 files changed, 58 insertions(+), 5 deletions(-) diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h index aa50b78dfb89..dcabf219b9cb 100644 --- a/tools/include/xenguest.h +++ b/tools/include/xenguest.h @@ -831,6 +831,9 @@ int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid, uint32_t xc_get_cpu_featureset_size(void); +/* Returns the APIC ID of the `cpu`-th CPU according to `policy` */ +uint32_t xc_cpu_to_apicid(const xc_cpu_policy_t *policy, unsigned int cpu); + enum xc_static_cpu_featuremask { XC_FEATUREMASK_KNOWN, XC_FEATUREMASK_SPECIAL, diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c index 4453178100ad..c591f8732a1a 100644 --- a/tools/libs/guest/xg_cpuid_x86.c +++ b/tools/libs/guest/xg_cpuid_x86.c @@ -725,8 +725,16 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore, p->policy.basic.htt = test_bit(X86_FEATURE_HTT, host_featureset); p->policy.extd.cmp_legacy = test_bit(X86_FEATURE_CMP_LEGACY, host_featureset); } - else + else if ( restore ) { + /* + * Reconstruct the topology exposed on Xen <= 4.13. It makes very little + * sense, but it's what those guests saw so it's set in stone now. + * + * Guests from Xen 4.14 onwards carry their own CPUID leaves in the + * migration stream so they don't need special treatment. + */ + /* * Topology for HVM guests is entirely controlled by Xen. For now, we * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT. @@ -782,6 +790,20 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore, break; } } + else + { + /* TODO: Expose the ability to choose a custom topology for HVM/PVH */ + unsigned int threads_per_core = 1; + unsigned int cores_per_pkg = di.max_vcpu_id + 1; + + rc = x86_topo_from_parts(&p->policy, threads_per_core, cores_per_pkg); + if ( rc ) + { + ERROR("Failed to generate topology: rc=%d t/c=%u c/p=%u", + rc, threads_per_core, cores_per_pkg); + goto out; + } + } nr_leaves = ARRAY_SIZE(p->leaves); rc = x86_cpuid_copy_to_buffer(&p->policy, p->leaves, &nr_leaves); @@ -1028,3 +1050,8 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host, return false; } + +uint32_t xc_cpu_to_apicid(const xc_cpu_policy_t *policy, unsigned int cpu) +{ + return x86_x2apic_id_from_vcpu_id(&policy->policy, cpu); +} diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c index 5f4f6830e850..1d7c34820d8f 100644 --- a/tools/libs/light/libxl_dom.c +++ b/tools/libs/light/libxl_dom.c @@ -1063,6 +1063,9 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid, libxl_domain_build_info *const info = &d_config->b_info; struct xc_dom_image *dom = NULL; bool device_model = info->type == LIBXL_DOMAIN_TYPE_HVM ? 
true : false; +#if defined(__i386__) || defined(__x86_64__) + struct xc_cpu_policy *policy = NULL; +#endif xc_dom_loginit(ctx->xch); @@ -1083,8 +1086,22 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid, dom->container_type = XC_DOM_HVM_CONTAINER; #if defined(__i386__) || defined(__x86_64__) + policy = xc_cpu_policy_init(); + if (!policy) { + LOGE(ERROR, "xc_cpu_policy_get_domain failed d%u", domid); + rc = ERROR_NOMEM; + goto out; + } + + rc = xc_cpu_policy_get_domain(ctx->xch, domid, policy); + if (rc != 0) { + LOGE(ERROR, "xc_cpu_policy_get_domain failed d%u", domid); + rc = ERROR_FAIL; + goto out; + } + for (unsigned int i = 0; i < info->max_vcpus; i++) - dom->cpu_to_apicid[i] = 2 * i; /* TODO: Replace by topo calculation */ + dom->cpu_to_apicid[i] = xc_cpu_to_apicid(policy, i); #endif /* The params from the configuration file are in Mb, which are then @@ -1214,6 +1231,9 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid, out: assert(rc != 0); if (dom != NULL) xc_dom_release(dom); +#if defined(__i386__) || defined(__x86_64__) + xc_cpu_policy_destroy(policy); +#endif return rc; } diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c index 715a66d2a978..a7e0d44cce78 100644 --- a/xen/arch/x86/cpu-policy.c +++ b/xen/arch/x86/cpu-policy.c @@ -266,9 +266,6 @@ static void recalculate_misc(struct cpu_policy *p) p->basic.raw[0x8] = EMPTY_LEAF; - /* TODO: Rework topology logic. */ - memset(p->topo.raw, 0, sizeof(p->topo.raw)); - p->basic.raw[0xc] = EMPTY_LEAF; p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES; @@ -619,6 +616,9 @@ static void __init calculate_pv_max_policy(void) recalculate_xstate(p); p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */ + + /* Wipe host topology. Populated by toolstack */ + memset(p->topo.raw, 0, sizeof(p->topo.raw)); } static void __init calculate_pv_def_policy(void) @@ -785,6 +785,9 @@ static void __init calculate_hvm_max_policy(void) /* It's always possible to emulate CPUID faulting for HVM guests */ p->platform_info.cpuid_faulting = true; + + /* Wipe host topology. Populated by toolstack */ + memset(p->topo.raw, 0, sizeof(p->topo.raw)); } static void __init calculate_hvm_def_policy(void)