From patchwork Wed May 29 09:01:23 2024
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 13678481
List-Id: Xen developer discussion <xen-devel@lists.xenproject.org>
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH for-4.19 1/9] x86/irq: remove offline CPUs from old CPU mask when adjusting move_cleanup_count
Date: Wed, 29 May 2024 11:01:23 +0200
Message-ID: <20240529090132.59434-2-roger.pau@citrix.com>
In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com>
References: <20240529090132.59434-1-roger.pau@citrix.com>

When adjusting move_cleanup_count to account for CPUs that are offline,
also adjust old_cpu_mask; otherwise further calls to fixup_irqs() could
subtract those CPUs again and create an imbalance in move_cleanup_count.

Fixes: 472e0b74c5c4 ('x86/IRQ: deal with move cleanup count state in fixup_irqs()')
Signed-off-by: Roger Pau Monné
Reviewed-by: Jan Beulich
---
 xen/arch/x86/irq.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index c16205a9beb6..9716e00e873b 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2572,6 +2572,14 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
             desc->arch.move_cleanup_count -= cpumask_weight(affinity);
             if ( !desc->arch.move_cleanup_count )
                 release_old_vec(desc);
+            else
+                /*
+                 * Adjust old_cpu_mask to account for the offline CPUs,
+                 * otherwise further calls to fixup_irqs() could subtract those
+                 * again and possibly underflow the counter.
+                 */
+                cpumask_and(desc->arch.old_cpu_mask, desc->arch.old_cpu_mask,
+                            &cpu_online_map);
         }

         if ( !desc->action || cpumask_subset(desc->affinity, mask) )
From patchwork Wed May 29 09:01:24 2024
X-Patchwork-Id: 13678483
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini
Subject: [PATCH for-4.19 2/9] xen/cpu: do not get the CPU map in stop_machine_run()
Date: Wed, 29 May 2024 11:01:24 +0200
Message-ID: <20240529090132.59434-3-roger.pau@citrix.com>
In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com>

The current callers of stop_machine_run() outside of init code already
have the CPU maps locked, and hence there's no reason for
stop_machine_run() to attempt to lock them again.  Replace the
get_cpu_maps() call with a suitable unreachable assert.

Further changes will modify the conditions under which get_cpu_maps()
returns success, and without the adjustment proposed here the usage of
stop_machine_run() in cpu_down() would then return an error.

Signed-off-by: Roger Pau Monné
---
 xen/common/cpu.c          |  5 +++++
 xen/common/stop_machine.c | 15 ++++++++-------
 xen/include/xen/cpu.h     |  2 ++
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/xen/common/cpu.c b/xen/common/cpu.c
index 8709db4d2957..6173220e771b 100644
--- a/xen/common/cpu.c
+++ b/xen/common/cpu.c
@@ -68,6 +68,11 @@ void cpu_hotplug_done(void)
     write_unlock(&cpu_add_remove_lock);
 }

+bool cpu_map_locked(void)
+{
+    return rw_is_locked(&cpu_add_remove_lock);
+}
+
 static NOTIFIER_HEAD(cpu_chain);

 void __init register_cpu_notifier(struct notifier_block *nb)
diff --git a/xen/common/stop_machine.c b/xen/common/stop_machine.c
index 398cfd507c10..7face75648e8 100644
--- a/xen/common/stop_machine.c
+++ b/xen/common/stop_machine.c
@@ -82,9 +82,15 @@ int stop_machine_run(int (*fn)(void *data), void *data, unsigned int cpu)
     BUG_ON(!local_irq_is_enabled());
     BUG_ON(!is_idle_vcpu(current));

-    /* cpu_online_map must not change. */
-    if ( !get_cpu_maps() )
+    /*
+     * cpu_online_map must not change.  The only two callers of
+     * stop_machine_run() outside of init code already have the CPU map
+     * locked.
+     */
+    if ( system_state >= SYS_STATE_active && !cpu_map_locked() )
+    {
+        ASSERT_UNREACHABLE();
         return -EBUSY;
+    }

     nr_cpus = num_online_cpus();
     if ( cpu_online(this) )
@@ -92,10 +98,7 @@ int stop_machine_run(int (*fn)(void *data), void *data, unsigned int cpu)

     /* Must not spin here as the holder will expect us to be descheduled. */
     if ( !spin_trylock(&stopmachine_lock) )
-    {
-        put_cpu_maps();
         return -EBUSY;
-    }

     stopmachine_data.fn = fn;
     stopmachine_data.fn_data = data;
@@ -136,8 +139,6 @@ int stop_machine_run(int (*fn)(void *data), void *data, unsigned int cpu)

     spin_unlock(&stopmachine_lock);

-    put_cpu_maps();
-
     return ret;
 }
diff --git a/xen/include/xen/cpu.h b/xen/include/xen/cpu.h
index e1d4eb59675c..d8c8264c58b0 100644
--- a/xen/include/xen/cpu.h
+++ b/xen/include/xen/cpu.h
@@ -13,6 +13,8 @@ void put_cpu_maps(void);

 void cpu_hotplug_begin(void);
 void cpu_hotplug_done(void);

+bool cpu_map_locked(void);
+
 /* Receive notification of CPU hotplug events. */
 void register_cpu_notifier(struct notifier_block *nb);
From patchwork Wed May 29 09:01:25 2024
X-Patchwork-Id: 13678480
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall, Stefano Stabellini
Subject: [PATCH for-4.19 3/9] xen/cpu: ensure get_cpu_maps() returns false if CPU operations are underway
Date: Wed, 29 May 2024 11:01:25 +0200
Message-ID: <20240529090132.59434-4-roger.pau@citrix.com>
In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com>

Due to the current rwlock logic, if the CPU calling get_cpu_maps() does
so from a cpu_hotplug_{begin,done}() region the function will still
return success, because a CPU taking the rwlock in read mode after
having taken it in write mode is allowed.  Such behavior however defeats
the purpose of get_cpu_maps(), as it should always return false when
called while a CPU hot{,un}plug operation is in progress.  Otherwise the
logic in send_IPI_mask() for example is wrong, as it could decide to use
the shorthand even when a CPU operation is in progress.

Adjust the logic in get_cpu_maps() to return false when the CPUs lock is
already held in write mode by the current CPU, as read_trylock() would
otherwise return true.

Fixes: 868a01021c6f ('rwlock: allow recursive read locking when already locked in write mode')
Signed-off-by: Roger Pau Monné
---
 xen/common/cpu.c         | 3 ++-
 xen/include/xen/rwlock.h | 2 ++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/common/cpu.c b/xen/common/cpu.c
index 6173220e771b..d76f80fe2e99 100644
--- a/xen/common/cpu.c
+++ b/xen/common/cpu.c
@@ -49,7 +49,8 @@ static DEFINE_RWLOCK(cpu_add_remove_lock);

 bool get_cpu_maps(void)
 {
-    return read_trylock(&cpu_add_remove_lock);
+    return !rw_is_write_locked_by_me(&cpu_add_remove_lock) &&
+           read_trylock(&cpu_add_remove_lock);
 }

 void put_cpu_maps(void)
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index a2e98cad343e..4e7802821859 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -316,6 +316,8 @@ static always_inline void write_lock_irq(rwlock_t *l)

 #define rw_is_locked(l)               _rw_is_locked(l)
 #define rw_is_write_locked(l)         _rw_is_write_locked(l)
+#define rw_is_write_locked_by_me(l) \
+    lock_evaluate_nospec(_is_write_locked_by_me(atomic_read(&(l)->cnts)))

 typedef struct percpu_rwlock percpu_rwlock_t;
From patchwork Wed May 29 09:01:26 2024
X-Patchwork-Id: 13678482
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH for-4.19 4/9] x86/irq: describe how the interrupt CPU movement works
Date: Wed, 29 May 2024 11:01:26 +0200
Message-ID: <20240529090132.59434-5-roger.pau@citrix.com>
In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com>

The logic to move interrupts across CPUs is complex; attempt to provide
a comment that describes the expected behavior, so users of the
interrupt system have more context about the usage of the arch_irq_desc
structure fields.

Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/include/asm/irq.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/xen/arch/x86/include/asm/irq.h b/xen/arch/x86/include/asm/irq.h
index 413994d2133b..80a3aa7a88b9 100644
--- a/xen/arch/x86/include/asm/irq.h
+++ b/xen/arch/x86/include/asm/irq.h
@@ -28,6 +28,32 @@ typedef struct {

 struct irq_desc;

+/*
+ * Xen logic for moving interrupts around CPUs allows manipulating interrupts
+ * that target remote CPUs.  The logic to move an interrupt from CPU(s) is as
+ * follows:
+ *
+ * 1. cpu_mask and vector is copied to old_cpu_mask and old_vector.
+ * 2. New cpu_mask and vector are set, vector is setup at the new destination.
+ * 3. move_in_progress is set.
+ * 4. Interrupt source is updated to target new CPU and vector.
+ * 5. Interrupts arriving at old_cpu_mask are processed normally.
+ * 6. When an interrupt is delivered at the new destination (cpu_mask) as part
+ *    of acking the interrupt move_in_progress is cleared and
+ *    move_cleanup_count is set to the weight of online CPUs in old_cpu_mask.
+ *    IRQ_MOVE_CLEANUP_VECTOR is sent to all CPUs in old_cpu_mask.
+ * 7. When receiving IRQ_MOVE_CLEANUP_VECTOR CPUs in old_cpu_mask clean the
+ *    vector entry and decrease the count in move_cleanup_count.  The CPU that
+ *    sets move_cleanup_count to 0 releases the vector.
+ *
+ * Note that when interrupt movement (either move_in_progress or
+ * move_cleanup_count set) is in progress it's not possible to move the
+ * interrupt to yet a different CPU.
+ *
+ * By keeping the vector in the old CPU(s) configured until the interrupt is
+ * acked on the new destination Xen allows draining any pending interrupts at
+ * the old destinations.
+ */
 struct arch_irq_desc {
         s16 vector;                  /* vector itself is only 8 bits, */
         s16 old_vector;              /* but we use -1 for unassigned */
From patchwork Wed May 29 09:01:27 2024
X-Patchwork-Id: 13678484
af79cd13be357-794dfb9935bmr229992585a.14.1716973349638; Wed, 29 May 2024 02:02:29 -0700 (PDT) From: Roger Pau Monne To: xen-devel@lists.xenproject.org Cc: Roger Pau Monne , Jan Beulich , Andrew Cooper Subject: [PATCH for-4.19 5/9] x86/irq: limit interrupt movement done by fixup_irqs() Date: Wed, 29 May 2024 11:01:27 +0200 Message-ID: <20240529090132.59434-6-roger.pau@citrix.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com> References: <20240529090132.59434-1-roger.pau@citrix.com> MIME-Version: 1.0 The current check used in fixup_irqs() to decide whether to move around interrupts is based on the affinity mask, but such mask can have all bits set, and hence is unlikely to be a subset of the input mask. For example if an interrupt has an affinity mask of all 1s, any input to fixup_irqs() that's not an all set CPU mask would cause that interrupt to be shuffled around unconditionally. What fixup_irqs() care about is evacuating interrupts from CPUs not set on the input CPU mask, and for that purpose it should check whether the interrupt is assigned to a CPU not present in the input mask. Assume that ->arch.cpu_mask is a subset of the ->affinity mask, and keep the current logic that resets the ->affinity mask if the interrupt has to be shuffled around. While there also adjust the comment as to the purpose of fixup_irqs(). Signed-off-by: Roger Pau Monné --- Changes since v1: - Do not AND ->arch.cpu_mask with cpu_online_map. - Keep using cpumask_subset(). - Integrate into bigger series. 
--- xen/arch/x86/include/asm/irq.h | 2 +- xen/arch/x86/irq.c | 9 +++++++-- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/xen/arch/x86/include/asm/irq.h b/xen/arch/x86/include/asm/irq.h index 80a3aa7a88b9..b7dc75d0acbd 100644 --- a/xen/arch/x86/include/asm/irq.h +++ b/xen/arch/x86/include/asm/irq.h @@ -156,7 +156,7 @@ void free_domain_pirqs(struct domain *d); int map_domain_emuirq_pirq(struct domain *d, int pirq, int emuirq); int unmap_domain_pirq_emuirq(struct domain *d, int pirq); -/* Reset irq affinities to match the given CPU mask. */ +/* Evacuate interrupts assigned to CPUs not present in the input CPU mask. */ void fixup_irqs(const cpumask_t *mask, bool verbose); void fixup_eoi(void); diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c index 9716e00e873b..1b7127090377 100644 --- a/xen/arch/x86/irq.c +++ b/xen/arch/x86/irq.c @@ -2525,7 +2525,7 @@ static int __init cf_check setup_dump_irqs(void) } __initcall(setup_dump_irqs); -/* Reset irq affinities to match the given CPU mask. */ +/* Evacuate interrupts assigned to CPUs not present in the input CPU mask. */ void fixup_irqs(const cpumask_t *mask, bool verbose) { unsigned int irq; @@ -2582,7 +2582,12 @@ void fixup_irqs(const cpumask_t *mask, bool verbose) &cpu_online_map); } - if ( !desc->action || cpumask_subset(desc->affinity, mask) ) + /* + * Avoid shuffling the interrupt around as long as current target CPUs + * are a subset of the input mask. What fixup_irqs() cares about is + * evacuating interrupts from CPUs not in the input mask. 
+         */
+        if ( !desc->action || cpumask_subset(desc->arch.cpu_mask, mask) )
         {
             spin_unlock(&desc->lock);
             continue;

From patchwork Wed May 29 09:01:28 2024
X-Patchwork-Id: 13678486
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH for-4.19 6/9] x86/irq: restrict CPU movement in set_desc_affinity()
Date: Wed, 29 May 2024 11:01:28 +0200
Message-ID: <20240529090132.59434-7-roger.pau@citrix.com>
In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com>
References: <20240529090132.59434-1-roger.pau@citrix.com>

If external interrupts are using logical mode it's possible to have an
overlap between the current ->arch.cpu_mask and the provided mask (or
TARGET_CPUS). If that's the case, avoid assigning a new vector and
instead move the interrupt to a member of ->arch.cpu_mask that overlaps
with the provided mask and is online.

While there, also add an extra assert to ensure the mask containing the
possible destinations is not empty before calling cpu_mask_to_apicid(),
as at that point having an empty mask is not expected.

Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/irq.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 1b7127090377..ae8fa574d4e7 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -846,19 +846,38 @@ void cf_check irq_complete_move(struct irq_desc *desc)
 
 unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
 {
-    int ret;
-    unsigned long flags;
     cpumask_t dest_mask;
 
     if ( mask && !cpumask_intersects(mask, &cpu_online_map) )
         return BAD_APICID;
 
-    spin_lock_irqsave(&vector_lock, flags);
-    ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
-    spin_unlock_irqrestore(&vector_lock, flags);
+    /*
+     * The input mask can contain CPUs that are not online. To decide whether
+     * the interrupt needs to be migrated, restrict the input mask to the CPUs
+     * that are online.
+     */
+    if ( mask )
+        cpumask_and(&dest_mask, mask, &cpu_online_map);
+    else
+        cpumask_copy(&dest_mask, TARGET_CPUS);
 
-    if ( ret < 0 )
-        return BAD_APICID;
+    /*
+     * Only move the interrupt if there are no CPUs left in ->arch.cpu_mask
+     * that can handle it, otherwise just shuffle it around ->arch.cpu_mask
+     * to an available destination.
+     */
+    if ( !cpumask_intersects(desc->arch.cpu_mask, &dest_mask) )
+    {
+        int ret;
+        unsigned long flags;
+
+        spin_lock_irqsave(&vector_lock, flags);
+        ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
+        spin_unlock_irqrestore(&vector_lock, flags);
+
+        if ( ret < 0 )
+            return BAD_APICID;
+    }
 
     if ( mask )
     {
@@ -871,6 +890,7 @@ unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
         cpumask_copy(&dest_mask, desc->arch.cpu_mask);
     }
     cpumask_and(&dest_mask, &dest_mask, &cpu_online_map);
+    ASSERT(!cpumask_empty(&dest_mask));
 
     return cpu_mask_to_apicid(&dest_mask);
 }

From patchwork Wed May 29 09:01:29 2024
X-Patchwork-Id: 13678487
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH for-4.19 7/9] x86/irq: deal with old_cpu_mask for interrupts in movement in fixup_irqs()
Date: Wed, 29 May 2024 11:01:29 +0200
Message-ID: <20240529090132.59434-8-roger.pau@citrix.com>
In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com>
References: <20240529090132.59434-1-roger.pau@citrix.com>

Given the current logic it's possible for ->arch.old_cpu_mask to get out
of sync: if a CPU set in old_cpu_mask is offlined and then onlined again
without old_cpu_mask having been updated, the data in the mask will no
longer be accurate, as when brought back online the CPU will no longer
have old_vector configured to handle the old interrupt source.

If there's an interrupt movement in progress, and the to-be-offlined CPU
(which is the call context) is in the old_cpu_mask, clear it and update
the mask, so it doesn't contain stale data.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/irq.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index ae8fa574d4e7..8822f382bfe4 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2602,6 +2602,28 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
                              &cpu_online_map);
         }
 
+        if ( desc->arch.move_in_progress &&
+             !cpumask_test_cpu(cpu, &cpu_online_map) &&
+             cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
+        {
+            /*
+             * This CPU is going offline, remove it from ->arch.old_cpu_mask
+             * and possibly release the old vector if the old mask becomes
+             * empty.
+             *
+             * Note cleaning ->arch.old_cpu_mask is required if the CPU is
+             * brought offline and then online again, as when re-onlined the
+             * per-cpu vector table will no longer have ->arch.old_vector
+             * setup, and hence ->arch.old_cpu_mask would be stale.
+             */
+            cpumask_clear_cpu(cpu, desc->arch.old_cpu_mask);
+            if ( cpumask_empty(desc->arch.old_cpu_mask) )
+            {
+                desc->arch.move_in_progress = 0;
+                release_old_vec(desc);
+            }
+        }
+
         /*
          * Avoid shuffling the interrupt around as long as current target CPUs
          * are a subset of the input mask.
          * What fixup_irqs() cares about is

From patchwork Wed May 29 09:01:30 2024
X-Patchwork-Id: 13678488
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH for-4.19 8/9] x86/irq: handle moving interrupts in _assign_irq_vector()
Date: Wed, 29 May 2024 11:01:30 +0200
Message-ID: <20240529090132.59434-9-roger.pau@citrix.com>
In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com>
References: <20240529090132.59434-1-roger.pau@citrix.com>

Currently there's logic in fixup_irqs() that attempts to prevent
_assign_irq_vector() from failing, as fixup_irqs() is required to
evacuate all interrupts from the CPUs not present in the input mask.
The current logic in fixup_irqs() is incomplete, as it doesn't deal with
interrupts that have move_cleanup_count > 0 and a non-empty
->arch.old_cpu_mask field.

Instead of attempting to fixup the interrupt descriptor in fixup_irqs()
so that _assign_irq_vector() cannot fail, introduce some logic in
_assign_irq_vector() to deal with interrupts that have either
move_{in_progress,cleanup_count} set and no remaining online CPUs in
->arch.cpu_mask.

If _assign_irq_vector() is requested to move an interrupt in the state
described above, first attempt to see if ->arch.old_cpu_mask contains
any valid CPUs that could be used as fallback, and if that's the case do
move the interrupt back to the previous destination. Note this is easier
because the vector hasn't been released yet, so there's no need to
allocate and set up a new vector on the destination.

Due to the logic in fixup_irqs() that clears offline CPUs from
->arch.old_cpu_mask (and releases the old vector if the mask becomes
empty) it shouldn't be possible to get into _assign_irq_vector() with
->arch.move_{in_progress,cleanup_count} set but no online CPUs in
->arch.old_cpu_mask.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/irq.c | 66 ++++++++++++++++++++++++++--------------------
 1 file changed, 38 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8822f382bfe4..33979729257e 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -553,7 +553,44 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
     }
 
     if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
-        return -EAGAIN;
+    {
+        /*
+         * If the current destination is online refuse to shuffle. Retry after
+         * the in-progress movement has finished.
+         */
+        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
+            return -EAGAIN;
+
+        /*
+         * Fallback to the old destination if moving is in progress and the
+         * current destination is to be offlined. This is required by
+         * fixup_irqs() to evacuate interrupts from a CPU to be offlined.
+         *
+         * Due to the logic in fixup_irqs() that clears offlined CPUs from
+         * ->arch.old_cpu_mask it shouldn't be possible to get here with
+         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
+         * ->arch.old_cpu_mask.
+         */
+        ASSERT(valid_irq_vector(desc->arch.old_vector));
+        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, mask));
+        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
+
+        /* Set the old destination as the current one. */
+        desc->arch.vector = desc->arch.old_vector;
+        cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
+
+        /* Undo any possibly done cleanup. */
+        for_each_cpu(cpu, desc->arch.cpu_mask)
+            per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
+
+        /* Cancel the pending move.
+         */
+        desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
+        cpumask_clear(desc->arch.old_cpu_mask);
+        desc->arch.move_in_progress = 0;
+        desc->arch.move_cleanup_count = 0;
+
+        return 0;
+    }
 
     err = -ENOSPC;
 
@@ -2635,33 +2672,6 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
             continue;
         }
 
-        /*
-         * In order for the affinity adjustment below to be successful, we
-         * need _assign_irq_vector() to succeed. This in particular means
-         * clearing desc->arch.move_in_progress if this would otherwise
-         * prevent the function from succeeding. Since there's no way for the
-         * flag to get cleared anymore when there's no possible destination
-         * left (the only possibility then would be the IRQs enabled window
-         * after this loop), there's then also no race with us doing it here.
-         *
-         * Therefore the logic here and there need to remain in sync.
-         */
-        if ( desc->arch.move_in_progress &&
-             !cpumask_intersects(mask, desc->arch.cpu_mask) )
-        {
-            unsigned int cpu;
-
-            cpumask_and(affinity, desc->arch.old_cpu_mask, &cpu_online_map);
-
-            spin_lock(&vector_lock);
-            for_each_cpu(cpu, affinity)
-                per_cpu(vector_irq, cpu)[desc->arch.old_vector] = ~irq;
-            spin_unlock(&vector_lock);
-
-            release_old_vec(desc);
-            desc->arch.move_in_progress = 0;
-        }
-
         if ( !cpumask_intersects(mask, desc->affinity) )
         {
             break_affinity = true;

From patchwork Wed May 29 09:01:31 2024
X-Patchwork-Id: 13678489
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH for-4.19 9/9] x86/irq: forward pending interrupts to new destination in fixup_irqs()
Date: Wed, 29 May 2024 11:01:31 +0200
Message-ID: <20240529090132.59434-10-roger.pau@citrix.com>
In-Reply-To: <20240529090132.59434-1-roger.pau@citrix.com>
References: <20240529090132.59434-1-roger.pau@citrix.com>

fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.
Given the CPU is to become offline, the normal migration logic used by
Xen, where the vector in the previous target(s) is left configured until
the interrupt is received on the new destination, is not suitable.

Instead attempt to do as much as possible in order to prevent losing
interrupts. If fixup_irqs() is called from the CPU to be offlined (as is
currently the case), attempt to forward pending vectors when interrupts
that target the current CPU are migrated to a different destination.

Additionally, for interrupts that have already been moved from the
current CPU prior to the call to fixup_irqs() but that haven't been
delivered to the new destination (iow: interrupts with move_in_progress
set and the current CPU set in ->arch.old_cpu_mask), also check whether
the previous vector is pending and forward it to the new destination.

Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/apic.c             |  5 +++++
 xen/arch/x86/include/asm/apic.h |  3 +++
 xen/arch/x86/irq.c              | 39 +++++++++++++++++++++++++++++++--
 3 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 6567af685a1b..4d77fba3ed19 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -1543,3 +1543,8 @@ void check_for_unexpected_msi(unsigned int vector)
 {
     BUG_ON(apic_isr_read(vector));
 }
+
+bool lapic_check_pending(unsigned int vector)
+{
+    return apic_read(APIC_IRR + (vector / 32 * 0x10)) & (1U << (vector % 32));
+}
diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h
index d1cb001fb4ab..7b5a0832c05e 100644
--- a/xen/arch/x86/include/asm/apic.h
+++ b/xen/arch/x86/include/asm/apic.h
@@ -179,6 +179,9 @@ extern void record_boot_APIC_mode(void);
 extern enum apic_mode current_local_apic_mode(void);
 extern void check_for_unexpected_msi(unsigned int vector);
 
+/* Return whether vector is pending in IRR.
+ */
+bool lapic_check_pending(unsigned int vector);
+
 uint64_t calibrate_apic_timer(void);
 uint32_t apic_tmcct_read(void);
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 33979729257e..211ad3bd7cf5 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2591,8 +2591,8 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
-        bool break_affinity = false, set_affinity = true;
-        unsigned int vector;
+        bool break_affinity = false, set_affinity = true, check_irr = false;
+        unsigned int vector, cpu = smp_processor_id();
         cpumask_t *affinity = this_cpu(scratch_cpumask);
 
         if ( irq == 2 )
@@ -2643,6 +2643,25 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
              !cpumask_test_cpu(cpu, &cpu_online_map) &&
              cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
         {
+            /*
+             * This to-be-offlined CPU was the target of an interrupt that's
+             * been moved, and the new destination target hasn't yet
+             * acknowledged any interrupt from it.
+             *
+             * We know the interrupt is configured to target the new CPU at
+             * this point, so we can check IRR for any pending vectors and
+             * forward them to the new destination.
+             *
+             * Note the difference between move_in_progress or
+             * move_cleanup_count being set. For the latter we know the new
+             * destination has already acked at least one interrupt from this
+             * source, and hence there's no need to forward any stale
+             * interrupts.
+             */
+            if ( lapic_check_pending(desc->arch.old_vector) )
+                send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
+                              desc->arch.vector);
+
             /*
              * This CPU is going offline, remove it from ->arch.old_cpu_mask
              * and possibly release the old vector if the old mask becomes
@@ -2683,11 +2702,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
 
         if ( desc->handler->disable )
             desc->handler->disable(desc);
 
+        /*
+         * If the current CPU is going offline and is (one of) the target(s)
+         * of the interrupt, signal to check whether there are any pending
+         * vectors to be handled in the local APIC after the interrupt has
+         * been moved.
+         */
+        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
+            check_irr = true;
+
         if ( desc->handler->set_affinity )
             desc->handler->set_affinity(desc, affinity);
         else if ( !(warned++) )
             set_affinity = false;
 
+        if ( check_irr && lapic_check_pending(vector) )
+            /*
+             * Forward pending interrupt to the new destination, this CPU is
+             * going offline and otherwise the interrupt would be lost.
+             */
+            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
+                          desc->arch.vector);
+
         if ( desc->handler->enable )
            desc->handler->enable(desc);